
UNIT – III

File Filters: Basic understanding about uniq, grep, cut, paste, join, tr, df, du, who, w, rm, unlink, ulimit, chmod, umask, chown, chgrp, id, diff, sed, cmp, comm, Introduction to pipes, Backup Commands: tar, cpio, zip and unzip commands, mount and umount

File Filters:

Linux distributions come with various powerful file filtering commands. You can get fast results just with the help of some simple commands.

Different file filter commands used in Linux are as follows:

1) wc:

The wc (word count) command in Unix/Linux operating systems is used to find out the number of newlines, words, bytes and characters in the files specified by the file arguments. The general syntax is wc [OPTION]... [FILE]...

The following are the options and usage provided by the command.

So, let's see how we can use the 'wc' command with a few of its available arguments and examples in this article. We have used the 'tecmint.txt' file for testing the commands. Let's look at the contents of the file using the cat command as shown below.

1. A Basic Example of WC Command

The 'wc' command without any options displays a basic result for the 'tecmint.txt' file. The three numbers shown below are 12 (number of lines), 16 (number of words) and 112 (number of bytes) of the file.
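The command and its output were likely a screenshot in the original notes; a minimal sketch of what they would look like, using the numbers quoted above:

$ wc tecmint.txt
12  16 112 tecmint.txt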

2. Count Number of Lines

To count number of newlines in a file use the option ‘-l‘, which prints the number of lines from a given file.

3. Display Number of Words

Using ‘-w‘ argument with ‘wc‘ command prints the number of words in a file.

4. Count Number of Bytes and Characters

Using the options '-c' and '-m' with the 'wc' command prints the total number of bytes and characters, respectively, in a file.

5. Display Length of Longest Line

The 'wc' command supports the argument '-L', which prints the length (number of characters) of the longest line in a file.
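The option examples were also likely screenshots; hedged equivalents on the same tecmint.txt file (the -l, -w and -c outputs follow from the counts quoted above; -m matches -c only because the file is plain ASCII, and the -L value depends on the file's longest line):

$ wc -l tecmint.txt
12 tecmint.txt

$ wc -w tecmint.txt
16 tecmint.txt

$ wc -c tecmint.txt
112 tecmint.txt

$ wc -m tecmint.txt
112 tecmint.txt

$ wc -L tecmint.txt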

2) Pipe ( | ) :

We can carry out piping between two commands. Here, the standard output of first command is taken as the standard input for the second command.

e.g. $ ls | wc

3) head :

head is used to display the first part of a file; it outputs the first 10 lines by default. You can use the -n num flag to specify the number of lines to be displayed:

To see the top 10 lines of a file - $ head <file name>

To see the top 5 lines of a file - $ head -5 <file name> [or]

Command: $ head -n 5 /var/log/auth.log

4) tail :

tail outputs the last parts (10 lines by default) of a file. Use the -n num switch to specify the number of lines to be displayed.

The command below will output the last 5 lines of the specified file:

To see the last 10 lines of a file - $ tail <file name>

To see the last 20 lines of a file - $ tail -20 <file name>

5) more :

It shows file content in a page like format, where users can press [Enter] to view more information.

Syntax: $ more <file name>

Example: $more f1.txt

6) sed : (Stream EDitor)

sed is a powerful stream editor for filtering and transforming text.

It is used to cut the information horizontally.

To see the first line in file sun,

$sed -n 1p sun

To see 3 to 5 lines

$sed -n '3,5p' sun

1. Replacing or substituting string

The sed command is mostly used to replace text in a file. A simple sed command can replace the word "unix" with "linux" in a file (see the example sketch after this list, which covers cases 1-7).

2. Replacing the nth occurrence of a pattern in a line.

3. Replacing all the occurrence of the pattern in a line.

4. Duplicating the replaced line with /p flag

The /p print flag prints the replaced line twice on the terminal. If a line does not have the search pattern and is not replaced, then the /p prints that line only once.

5. Running multiple sed commands.

You can run multiple sed commands by piping the output of one sed command as input to another sed command.

6. Replacing string on a specific line number.

7. Deleting lines.

You can delete lines in a file by specifying the line number.
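The commands for the seven cases above were likely screenshots in the original; a hedged sketch using standard sed syntax, assuming a sample file named file.txt that contains the word "unix":

$ sed 's/unix/linux/' file.txt            # 1. replace the first occurrence of "unix" on each line
$ sed 's/unix/linux/2' file.txt           # 2. replace only the 2nd occurrence on each line
$ sed 's/unix/linux/g' file.txt           # 3. replace all occurrences on each line
$ sed 's/unix/linux/p' file.txt           # 4. /p flag: replaced lines are printed twice
$ sed 's/unix/linux/' file.txt | sed 's/os/system/'   # 5. multiple sed commands joined by a pipe
$ sed '3 s/unix/linux/' file.txt          # 6. substitute only on line number 3
$ sed '2d' file.txt                       # 7. delete line 2
$ sed '5,$d' file.txt                     #    delete from line 5 to the last line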

7) grep :

To search for a pattern (word) in a file, the grep command is used.

Syntax: $ grep <word> <file name>

$grep hi file_1

To search for multiple words in a file:

$ grep -E 'word1|word2|word3' <file name>

e.g. $ grep -E 'hi|beyond|good' file_1

Example

Let's say we want to quickly locate the phrase "our products" in HTML files on your machine. Let's start by searching a single file. Here, our PATTERN is "our products" and our FILE is product-listing.html.

A single line was found containing our pattern, and grep outputs the entire matching line to the terminal.

1. Viewing grep output in color

If we use the --color option, our successful matches will be highlighted for us:

2. Viewing line numbers of successful matches

3. Performing case-insensitive grep searches

4. Searching multiple files using a wildcard

8) sort :

The sort command is used to sort a file, arranging the records in a particular order. By default, sort assumes the contents are ASCII and sorts accordingly. Using options, it can also be made to sort numerically.

$ sort <file name>

Example: $sort file_1

To sort the files in reverse order

$ sort -r <file name>

File with Ascii data:

Let us consider a file with the following contents:

1. Sort simply sorts the file in alphabetical order

2. sort removes the duplicates using the -u option

3. The default sort 'might' give incorrect result on a file containing numbers:

4. To sort a file numerically:

5. sort file numerically in reverse order:

Multiple Files:

Let us consider examples with multiple files, say file1 and file2, containing numbers:

6. sort can sort multiple files as well.

7. Sort, merge and remove duplicates:
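The sample files and outputs for the seven cases above were likely screenshots; a hedged sketch with a hypothetical numbers.txt that illustrates the ASCII-vs-numeric point:

$ cat numbers.txt
20
19
5
49
200

$ sort numbers.txt        # default (ASCII) sort - 'wrong' order for numbers
19
20
200
49
5

$ sort -n numbers.txt     # numeric sort
5
19
20
49
200

$ sort -nr numbers.txt    # numeric sort in reverse order
200
49
20
19
5

$ sort -u numbers.txt             # -u removes duplicate lines while sorting
$ sort -n file1 file2             # sort the contents of two files together
$ sort -u -n file1 file2          # sort, merge and remove duplicates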

Basic understanding about uniq, grep, cut, paste, join, tr, df, du, who, w, rm, unlink, ulimit, chmod, umask, chown, chgrp, id, diff, sed, cmp, comm

1.uniq

Uniq command is helpful to remove or detect duplicate entries in a file.

The following test file is used in some of the example to understand how uniq command works.

$ cat test

aa

aa

bb

bb

bb

xx

1. Basic Usage

Syntax:

$ uniq [options] <file name>

For example, when uniq command is run without any option, it removes duplicate lines and displays unique lines as shown below.

$ uniq test

aa

bb

xx

2. Count Number of Occurrences using -c option

This option is to count occurrence of lines in file.

$ uniq -c test

2 aa

3 bb

1 xx

3. Print only Duplicate Lines using -d option

This option is to print only duplicate repeated lines in file. As you see below, this didn’t display the line “xx”, as it is not duplicate in the test file.

$ uniq -d test

aa

bb

The above example displayed all the duplicate lines, but only once. But, this -D option will print all duplicate lines in file. For example, line “aa” was there twice in the test file, so the following uniq command displayed the line “aa” twice in this output.

$ uniq -D test
aa
aa
bb
bb
bb

4. Print only Unique Lines using -u option

This option is to print only unique lines in file.

$ uniq -u test

xx

5. Limit Comparison to ‘N’ characters using -w option

This option restricts comparison to first specified ‘N’ characters only. For this example, use the following test2 input file.

$ cat test2
hi Linux
hi LinuxU
hi LinuxUnix
hi Unix

The following uniq command, using the option '-w', compares only the first 8 characters of each line in the file, and then using the '-c' option prints the number of occurrences of those lines.

$ uniq -c -w 8 test2
      3 hi Linux
      1 hi Unix

The following uniq command, using the option '-w', compares only the first 8 characters of each line, and then using the '-D' option prints all duplicate lines of the file.

$ uniq -D -w 8 test2
hi Linux
hi LinuxU
hi LinuxUnix

6. Avoid Comparing first 'N' Characters using -s option

This option skips comparison of first specified ‘N’ characters. For this example, use the following test3 input file.

$ cat test3
aabb
xxbb
bbc
bbd

The following uniq command, using the option '-s', skips the first 2 characters of each line when comparing, and then using the '-D' option prints all duplicate lines of the file.

Here, the starting 2 characters, i.e. 'aa' in the 1st line and 'xx' in the 2nd line, are not compared; the next 2 characters, 'bb', are the same in both lines, so they are shown as duplicate lines.

$ uniq -D -s 2 test3
aabb
xxbb

7. Avoid Comparing first 'N' Fields using -f option

This option skips comparison of first specified ‘N’ fields of lines in file.

$ cat test2
hi hello Linux
hi friend Linux
hi hello LinuxUnix

The following uniq command, using the option '-f', skips the first 2 fields of each line when comparing, and then using the '-D' option prints all duplicate lines of the file.

Here, the starting 2 fields, i.e. 'hi hello' in the 1st line and 'hi friend' in the 2nd line, are not compared; the next field, 'Linux', is the same in both lines, so they are shown as duplicate lines.

$ uniq -D -f 2 test2
hi hello Linux
hi friend Linux

2. grep

grep, which stands for "global regular expression print," processes text line by line and prints any lines which match a specified pattern. By default, grep displays the matching lines. Use grep to search for lines of text that match one or more regular expressions; it outputs only the matching lines. grep is considered one of the most useful commands on Linux and Unix-like operating systems.

grep syntax

grep [options] pattern [ file…]

Overview

Grep is a powerful tool for matching a regular expression against text in a file, multiple files, or a stream of input. It searches for the PATTERN of text that you specify on the command line, and outputs the results for you.

Example Usage

Let's say we want to quickly locate the phrase "our products" in HTML files on your machine. Let's start by searching a single file. Here, our PATTERN is "our products" and our FILE is product-listing.html.

A single line was found containing our pattern, and grep outputs the entire matching line to the terminal. The line is longer than our terminal width so the text wraps around to the following lines, but this output corresponds to exactly one line in our FILE.

Viewing grep output in color

If we use the --color option, our successful matches will be highlighted for us:

Viewing line numbers of successful matches

It will be even more useful if we know where the matching line appears in our file. If we specify the -n option, grep will prefix each matching line with the line number:

Our matching line is prefixed with "18:" which tells us this corresponds to line 18 in our file.

Performing case-insensitive grep searches

What if "our products" appears at the beginning of a sentence, or appears in all uppercase? We can specify the -i option to perform a case-insensitive match:

Using the -i option, grep finds a match on line 23 as well.

Searching multiple files using a wildcard

If we have multiple files to search, we can search them all using a wildcard in our FILE name. Instead of specifying product-listing.html, we can use an asterisk (“*”) and the .html extension. When the command is executed, the shell will expand the asterisk to the name of any file it finds (within the current directory) which ends in “.html”.

Notice that each line starts with the specific file where that match occurs.
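The grep commands themselves were likely shown as screenshots; hedged equivalents for the cases described above, using the product-listing.html file mentioned in the text (actual output depends on the file contents):

$ grep "our products" product-listing.html            # basic search in a single file
$ grep --color "our products" product-listing.html    # highlight successful matches
$ grep -n "our products" product-listing.html         # prefix each match with its line number (e.g. 18:)
$ grep -in "our products" product-listing.html        # case-insensitive search
$ grep -n "our products" *.html                       # search every .html file in the current directory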

3.cut

A filter, such as the UNIX cut command, is a program that processes an input stream of data to produce an output stream of data. (The Linux cut command is used for text processing: it extracts portions of text from a file by selecting columns.) The input data may be fed into the program's standard input or read from a file, and the output data may be sent to standard output or to a file. The UNIX cut command is used to extract a vertical selection of columns (character positions) or fields from one or more files. The syntax for extracting a selection based on a column number is:

$ cut -c n [filename(s)]

A. Extracting Column of Characters

To begin with, consider a file cuttest.txt with contents as below:

$ cat cuttest.txt

This is line #1

It is line #2

That is line #3

While, this is line #4

It's line #5

I am line #6

Myself line #7

It's me, line #8

Hello, I am line #9

Last line, line #10

Now, just have a look at the basic syntax of the cut command, to extract column(s) of characters from a file:

cut -c [RANGE] [FILENAME]

To explain this briefly, we are instructing cut command to select on the specific characters specified by RANGE from the file FILENAME.

1. Display a Column of Characters

To begin with, let's display the fourth character from each line of the file cuttest.txt. Example:

$ cut -c 4 cuttest.txt

s

i

t

l

s

m

e

s

l

t

2. Display a Group of Columns of Characters

In order to extract a group of columns, we need to specify a range - Start and End, to the cut command. To try with, lets display first five characters of each line of the file.

Example:

$ cut -c 1-5 cuttest.txt

This

It is

That

While

It's

I am

Mysel

It's

Hello

Last

Conclusion is - a whitespace is also considered as a character.

Another variant of this case is when you want to start from a particular column and display till the last one. As an example, we will start displaying from the 6th column till the end. So, in this case, we mention the start of the range as '6' and do not mention any end. Thus, it prints everything from the 6th column onwards.

Example:

$ cut -c 6- cuttest.txt

is line #1

line #2

is line #3

, this is line #4

line #5

line #6

f line #7

me, line #8

, I am line #9

line, line #10

Similarly, to get first 6 characters from the beginning of each line, we would have an example as follows:

$ cut -c -6 cuttest.txt

This i

It is

That i

While,

It's l

I am l

Myself

It's m

Hello,

Last l

Now, there might be a curiosity: what if I don't mention the start and the end of the range? Let's see what happens. Example:

$ cut -c - cuttest.txt

cut: invalid range with no endpoint: -

Those who thought that entire columns will be printed, are proved to be wrong. Conclusion is - There has to be a valid range.

B. Extracting Field from a File

In order to understand this usage of cut command, lets consider a csv file as follows:

$ cat employees.txt

Employee ID, Employee Name, Age, Gender, Department, Salary

101, John Davies, 35, M, Finance, $4000

102, Mary Fernandes, 29, F, Human Resources, $3000

103, Jacob Williams, 40, M, Sales, $4700

104, Sean Anderson, 25, M, Production, $2700

105, Nick Jones, 42, M, Finance, $7500

106, Diana Richardson, 29, F, Finance, $3200

Remember, in order to extract a field from a file, we would need a delimiter (i.e. a column separator), based on which the file will be divided into columns and we can extract any of them. In this case, the syntax would be-

cut -d [DELIMITER] -f [RANGE] [FILENAME]

Here, we are instructing cut command to use a particular delimiter with option -dand then extract certain fields using option -f.

1. Display a specific field from a file

In case of a csv file, it is crystal clear that our delimiter will be a comma (,). Now, we need to enlist the names of the employees working in our organization, i.e. field number 2.

Example:

$ cut -d ',' -f 2 employees.txt

Employee Name

John Davies

Mary Fernandes

Jacob Williams

Sean Anderson

Nick Jones

Diana Richardson

Looks good.

2. Displaying Multiple Fields from a File

Moving forward, let's display more than one field now. Suppose we need to include the 'Age' and 'Gender' fields also. For this, we must specify the range - again, a start and an end.

$ cut -d ',' -f 2-4 employees.txt

Employee Name, Age, Gender

John Davies, 35, M

Mary Fernandes, 29, F

Jacob Williams, 40, M

Sean Anderson, 25, M

Nick Jones, 42, M

Diana Richardson, 29, F

The conclusion, in this case, is that Input Delimiter = Output Delimiter. Let's have a look at a variant of this case. Suppose we need to extract 'Employee ID', 'Employee Name', 'Department' and 'Salary'. In that case, we need to specify two ranges as below. Example:

$ cut -d ',' -f 1-2,5-6 employees.txt

Employee ID, Employee Name, Department, Salary

101, John Davies, Finance, $4000

102, Mary Fernandes, Human Resources, $3000

103, Jacob Williams, Sales, $4700

104, Sean Anderson, Production, $2700

105, Nick Jones, Finance, $7500

106, Diana Richardson, Finance, $3200

This is just awesome!

3. Change the Delimiter in the Output

As we just saw in one of the examples above, by default, Input Delimiter = Output Delimiter. What if I wish to change the output delimiter? Just have a look at the example below:

$ cut -d ',' -f 2-4 --output-delimiter='|' employees.txt

Employee Name| Age| Gender

John Davies| 35| M

Mary Fernandes| 29| F

Jacob Williams| 40| M

Sean Anderson| 25| M

Nick Jones| 42| M

Diana Richardson| 29| F

4. Do not Display Certain Columns

Just like the above example, if we use --complement as an option, the cut command will display all the fields except the specified field.

Example:

$ cut -d ',' --complement -f 6 employees.txt

Employee ID, Employee Name, Age, Gender, Department

101, John Davies, 35, M, Finance

102, Mary Fernandes, 29, F, Human Resources

103, Jacob Williams, 40, M, Sales

104, Sean Anderson, 25, M, Production

105, Nick Jones, 42, M, Finance

106, Diana Richardson, 29, F, Finance

4. paste

The paste command is one of the useful commands in the UNIX or Linux operating system. It merges the lines from multiple files, sequentially writing the corresponding lines from each file, separated by a TAB delimiter, to the terminal. The syntax of the paste command is:

paste [options] files-list

1. paste command examples for single file handling

2. paste command examples for multiple files handling

Let us consider a file with the sample contents as below:

$ cat file1

Linux

Unix

Solaris

HPUX

AIX

paste command with a single file:

1. paste command without any options is as good as the cat command when operated on a single file.

$ paste file1

Linux

Unix

Solaris

HPUX

AIX

2. Join all lines in a file: 

$ paste -s file1

Linux Unix Solaris HPUX AIX

The -s option of paste joins all the lines in a file. Since no delimiter is specified, the default delimiter, tab, is used to separate the columns.

3. Join all lines using the comma delimiter:

$ paste -d',' -s file1

Linux,Unix,Solaris,HPUX,AIX

The -d option is used to specify the delimiter. Using this -d and -s combination, all the lines in the file are merged into a single line.

4. Merge a file by pasting the data into 2 columns:

$ paste - - < file1

Linux Unix

Solaris HPUX

AIX

The '-' reads a line from the standard input. Two '-' read 2 lines and paste them side by side.

5. Merge a file by pasting the data into 2 columns using a colon separator:

$ paste -d':' - - < file1

Linux:Unix

Solaris:HPUX

AIX:

This is the same as joining every 2 lines in a file.

6. Merge a file by pasting the file contents into 3 columns:

$ paste - - - < file1

Linux Unix Solaris

HPUX AIX

7. Merge a file into 3 columns using 2 different delimiters: 

$ paste -d ':,' - - - < file1

Linux:Unix,Solaris

HPUX:AIX,

The -d option can take multiple delimiters. The 1st and 2nd columns are separated by ':', whereas the 2nd and 3rd are separated by ','.

paste command with multiple files:  Let us consider a file, file2, with the following contents: 

$ cat file2

Suse

Fedora

CentOS

OEL

Ubuntu

8.paste contents of 2 files side by side. 

$ paste file1 file2

Linux Suse

Unix Fedora

Solaris CentOS

HPUX OEL

AIX Ubuntu

The paste command is used in scenarios where multiple files need to be merged side by side. As shown above, the file contents are pasted side by side.

9. paste contents of 2 files side by side with a comma separator:

$ paste -d',' file1 file2

Linux,Suse

Unix,Fedora

Solaris,CentOS

HPUX,OEL

AIX,Ubuntu

10. paste command can take standard input in case of multiple files too: 

$ cat file2 | paste -d, file1 -

Linux,Suse

Unix,Fedora

Solaris,CentOS

HPUX,OEL

AIX,Ubuntu

Like this as well: 

$ cat file1 | paste -d, - file2

Linux,Suse

Unix,Fedora

Solaris,CentOS

HPUX,OEL

AIX,Ubuntu

11. Read lines in both the files alternatively: 

$ paste -d'\n' file1 file2

Linux

Suse

Unix

Fedora

Solaris

CentOS

HPUX

OEL

AIX

Ubuntu

Using the newline character as the delimiter, we can read 2 files line by line alternatively.

5. join

About join

Joins the lines of two files which share a common field of data.

join syntax

join [OPTION]... FILE1 FILE2

join examples

If we have a file, myfile1.txt, whose contents are:

1 India
2 US
3 Ireland
4 UK
5 Canada

...and another file, myfile2.txt, whose contents are:

1 NewDelhi
2 Washington
3 Dublin
4 London
5 Toronto

The common fields are the fields which begin with the same number. We can join the contents using the following command:

join myfile1.txt myfile2.txt

...which outputs the following to standard output:

1 India NewDelhi
2 US Washington
3 Ireland Dublin
4 UK London
5 Canada Toronto

If we wanted to create a new file with the joined contents, we could use the following command:

join myfile1.txt myfile2.txt > myjoinedfile.txt

...which directs the output into a new file called myjoinedfile.txt, containing the same output as the example above.

6.tr

tr is a UNIX utility for translating, deleting, or squeezing repeated characters. It reads from STDIN and writes to STDOUT.

tr stands for translate.

Syntax

The syntax of tr command is:

$ tr [OPTION] SET1 [SET2]

1.Convert lower case to upper case

The following tr command is used to convert the lower case to upper case

$ tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ

thegeekstuff

THEGEEKSTUFF

The following command will also convert lower case to upper case

$ tr [:lower:] [:upper:]

thegeekstuff

THEGEEKSTUFF

You can also use ranges in tr. The following command uses ranges to convert lower to upper case.

$ tr a-z A-Z

thegeekstuff

THEGEEKSTUFF
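The introduction above also mentions deleting and squeezing characters; hedged examples of those two modes (input supplied via echo):

$ echo "the geek stuff" | tr -d 'e'       # delete every occurrence of 'e'
th gk stuff

$ echo "thegeekstuff" | tr -d 'aeiou'     # delete all vowels
thgkstff

$ echo "aabbccdd" | tr -s 'abcd'          # squeeze repeated characters into one
abcd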

7. df (disk free)

The df command reports the amount of available disk space being used by file systems.

Syntax:

df [OPTION]... [FILE]...

A sample output from df command is as follows:

[root@tecmint ~]# df

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/cciss/c0d0p2 78361192 23185840 51130588 32% /

/dev/cciss/c0d0p5 24797380 22273432 1243972 95% /home

/dev/cciss/c0d0p3 29753588 25503792 2713984 91% /data

/dev/cciss/c0d0p1 295561 21531 258770 8% /boot

tmpfs 257476 0 257476 0% /dev/shm

So we see that df gives some valuable information on the file systems: their mount points, their disk usage, etc.

1. Display Information of all the File Systems

If the disk usage of all the file systems is required then use ‘-a’ option:

[root@tecmint ~]# df -a
Filesystem         1K-blocks     Used  Available Use% Mounted on
/dev/cciss/c0d0p2   78361192 23186116   51130312  32% /
proc                       0        0          0    - /proc
sysfs                      0        0          0    - /sys
devpts                     0        0          0    - /dev/pts
/dev/cciss/c0d0p5   24797380 22273432    1243972  95% /home
/dev/cciss/c0d0p3   29753588 25503792    2713984  91% /data
/dev/cciss/c0d0p1     295561    21531     258770   8% /boot
tmpfs                 257476        0     257476   0% /dev/shm
none                       0        0          0    - /proc/sys/fs/binfmt_misc
sunrpc                     0        0          0    - /var/lib/nfs/rpc_pipefs

So we see that the output contains details of all the file systems and their disk usage.

2. Specify the Memory Block Size

If you look at the output in point 1 above, the second column gives the size of the file system in blocks of 1K. The df command provides an option through which we can change the block size used in the output. Use the option -B for this:

$ df -B 100

Filesystem 100B-blocks Used Available Use% Mounted on

/dev/sda1 1354135307 63599535 1221749720 5% /

tmpfs 41184011 0 41184011 0% /dev/shm

/dev/sdb2 317128704 1205658 299813848 1% /home/oracle

/dev/sdc1 5901416244 729416 5600912425 1% /home/data

So you see that we specified a block size of 100 and in the output (second column) block size of 100 is displayed.

3. Print Human Readable Sizes

We are used to reading the memory in terms of gigabytes, megabytes, etc as its easy to read and remember. df command also provides an option ‘-h’ to print the memory statistics in human readable format.

Option -h stands for “human” readable format. As shown in the output below, G is used for gigabytes and M is used for megabytes.

$ df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 127G 6.0G 114G 5% /

tmpfs 3.9G 0 3.9G 0% /dev/shm

/dev/sdb2 30G 115M 28G 1% /home/oracle

/dev/sdc1 550G 70M 522G 1% /home/data

4. Display Grand Total in the Output

Till now we have seen that only the disk usage statistics of individual file systems are produced. If we want to display a grand total of every column then we can use the '--total' flag. Here is an example:

$ df -h --total
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       127G  6.0G  114G   5% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
/dev/sdb2        30G  115M   28G   1% /home/oracle
/dev/sdc1       550G   70M  522G   1% /home/data
total           710G  6.2G  668G   1%

So we see that a new row ‘total’ at the end of the output was produced.

5. List Inodes (Instead of Block Usage)

Till now we have seen that df prints the second column as total blocks. If information in terms of inodes is desired, df provides the option '-i' for this.

$ df -i
Filesystem        Inodes IUsed    IFree IUse% Mounted on
/dev/sda1        8396800 65397  8331403    1% /
tmpfs            1005469     1  1005468    1% /dev/shm
/dev/sdb2        1966560  2517  1964043    1% /home/oracle
/dev/sdc1       36593664    11 36593653    1% /home/data

So we see that information in terms of inodes is displayed.

6. Print File System Type

If you wish to print the type of file system in the output, use option ‘-T’.

$ df -T
Filesystem     Type   1K-blocks    Used Available Use% Mounted on
/dev/sda1      ext4   132239776 6210892 119311496   5% /
tmpfs          tmpfs    4021876       0   4021876   0% /dev/shm
/dev/sdb2      ext2    30969600  117740  29278696   1% /home/oracle
/dev/sdc1      ext2   576310180   71232 546964104   1% /home/data

In the above output, we can see all the file systems along with their types.

7. Include/Exclude Certain File System Type

You can also display only the file systems that belong to a certain type. For example, the following command displays only ext2 file systems.

$ df -t ext2
Filesystem     1K-blocks   Used Available Use% Mounted on
/dev/sdb2       30969600 117740  29278696   1% /home/oracle
/dev/sdc1      576310180  71232 546964104   1% /home/data

You can also display file systems that don't belong to a certain type. For example, the following command displays all file systems except ext2. This is exactly the opposite of the above -t option.

$ df -x ext2
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1      132239776 6210896 119311492   5% /
tmpfs            4021876       0   4021876   0% /dev/shm

8. du (Disk Usage)

Linux du command is used for summarizing the disk usage in terms of file size. It can be used with folders to get the total disk usage.

All the du examples shown here are executed on a directory containing the following contents:

$ ls

Linux Kernel redhat testfile.txt ubuntu

1. A basic example

$ du -a

0       ./redhat/rh7
4       ./redhat
4       ./testfile.txt
0       ./linuxKernel
0       ./ubuntu/ub10
4       ./ubuntu
16      .

I have used the -a flag in the example above to show the disk usage of all the files and directories. That's because if -a is not used, then only directories are listed. For example:

$ du

4./redhat

4./ubuntu

16.

So, now we get a basic idea about how to use the du command, but anyone would find it hard to understand what those numbers in the output mean. Let's move on to the next examples and the clouds will clear off.

2. Display output in human readable form using -h

$ du -ah

0       ./redhat/rh7
4.0K    ./redhat
4.0K    ./testfile.txt
0       ./linuxKernel
0       ./ubuntu/ub10
4.0K    ./ubuntu
16K     .

So we see that in the above example, I used the -h flag along with the -a flag. The -h flag is used to get the output in human-readable format. As you can see, the above output is easier to understand, as the disk usage is listed in terms of 'K'.

3. Display grand total in the output using -c

The example that I am using has a small directory structure. One could easily calculate the total disk usage of the directory by calculating manually. But, in real time scenario manual calculation is not practical. So, there exists a flag through which one can get the total usage in the output.

$ du -ahc

0       ./redhat/rh7
4.0K    ./redhat
4.0K    ./testfile.txt
0       ./linuxKernel
0       ./ubuntu/ub10
4.0K    ./ubuntu
16K     .
16K     total

So we see that through the -c flag, one can get the total usage in the output.

du examples

du -s *.txt

Reports the size of each file in the current directory with the .txt extension. Below is an example of the output:

8       file1.txt
8       file2.txt
10      file3.txt
2       file4.txt
8       file5.txt
8       file6.txt

du -shc *.txt

Display the same data, but in a "human-readable" size format, and display a grand total.

8.0K    file1.txt
8.0K    file2.txt
10.0K   file3.txt
2.0K    file4.txt
8.0K    file5.txt
8.0K    file6.txt
44.0K   total

9. who

About who

Displays who is logged on to the system.

Description

The who command prints information about all users who are currently logged in.

who examples

who

Displays the username, line, and time of all currently logged-in sessions. For example:

who am i

Displays the same information, but only for the terminal session where the command was issued, for example:

alan pts/3 2013-12-25 08:52 (:0.0)

To display a line of column headings, pass the -H option:

$ who -H

To list users logged in:

$ who -u

To display a line of column headings and list users logged in:

$ who -H -u

To show each user's message status as +, - or ?, enter:

$ who -T

To show dead processes on the system:

$ who -d

$ who -d -H

To count all login names and the number of users logged on:

$ who -q

10. w

About w

The w command is a quick way to see who is logged on and what they are doing.

Description

w displays information about the users currently on the machine, and their processes.

Difference between who and w

Who--

show who is logged on

w –

Show who is logged on and what they are doing.

Understanding w command output / header

The w command shows the following information about each user and their process on the system:

1. USER – User name.

2. TTY – Terminal type such as pts/0 or console.

3. FROM – The remote host name or IP address.

4. LOGIN@ – Login time.

5. IDLE – Idle time.

6. JCPU – The JCPU time is the time used by all processes attached to the tty.

7. PCPU – The PCPU time is the time used by the current process displayed in WHAT field.

8. WHAT – The command line of the USER's current process.
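A hedged sample of what w prints (user names, times and load values here are purely illustrative):

$ w
 17:45:05 up 10 days,  3:32,  2 users,  load average: 0.05, 0.10, 0.07
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
alan     pts/0    192.168.1.10     08:52    1:15   0.30s  0.10s -bash
mary     pts/1    :0.0             09:10    0.00s  0.25s  0.02s w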

11. rm

The rm command removes (deletes) files or directories.

syntax

$ rm [ Option ] … FILE

Example:

1.Deleting a directory recursively & interactively.

# rm -ir dirname

2. Deleting a file interactively.

# rm -i filename

3. How to Delete Empty Directories in Unix?

rmdir command will delete the empty directories. i.e directory without any sub-directories or files.

$ rmdir dirname

4.How to Delete Nested Empty Directories in Linux?

Use option -p, to delete nested directories as shown below.

$ rmdir -p dir1/dir2/dir3

5. Delete a directory which has content:

$ rm -rf dirname

6. Remove or delete a file:

$ rm linuxstuff.log

7. Delete multiple files at once:

$ rm file1.txt file2.txt file3.txt file4.txt

12. unlink

The unlink command calls and directly interfaces with the unlink system function, which removes a specified file.

Syntax:

unlink filename

Delete Symbolic Link File

The two commands given below are used to delete link:

#rm linkname

Or

#unlink linkname

Example

$ mkdir dirfoo
$ ln -s dirfoo lnfoo
$ rm lnfoo/
rm: cannot remove directory 'lnfoo/': Is a directory
$ unlink lnfoo/
unlink: cannot unlink 'lnfoo/': Not a directory
$ unlink lnfoo
$

Delete Symbolic Link to a Directory. When using the rm or unlink command to remove a symbolic link to a directory, make sure you don't end the target with a '/' character, because that will cause an error.

13.ulimit

Running 'ulimit -n' shows the number of files that a user can have open per login session. The result might be different depending on your system.

For example on a CentOS server of mine, the limit was set to 818354, while on Ubuntu server that I run at home the default limit was set to 176772.

Check Hard Limit in Linux

# ulimit -Hn

4096

Check Soft Limits in Linux

# ulimit -Sn
1024

-a All current limits are reported

-c The maximum size of core files created

-d The maximum size of a process's data segment

-e The maximum scheduling priority ("nice")

-f The maximum size of files written by the shell and its children

-i The maximum number of pending signals

-l The maximum size that may be locked into memory

-m The maximum resident set size (has no effect on Linux)

-n The maximum number of open file descriptors (most systems do not allow this value to be set)

-p The pipe size in 512-byte blocks (this may not be set)

-q The maximum number of bytes in POSIX message queues

-r The maximum real-time scheduling priority

-s The maximum stack size

-t The maximum amount of cpu time in seconds

-u The maximum number of processes available
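A hedged example of viewing and raising the soft open-files limit for the current shell (a regular user may raise the soft limit only up to the hard limit):

$ ulimit -Sn          # current soft limit on open file descriptors
1024
$ ulimit -Sn 4096     # raise the soft limit for this shell session
$ ulimit -Sn
4096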

14. chmod

chmod is used to change the permissions of files or directories.

Let's say you are the owner of a file named myfile, and you want to set its permissions so that:

1. the user can read, write, and execute it;

2. members of your group can read and execute it; and

3. others may only read it.

This command will do the trick:

$ chmod u=rwx,g=rx,o=r myfile

This is an example of using symbolic permissions notation. The letters u, g, and o stand for "user", "group", and "other". The equals sign ("=") means "set the permissions exactly like this," and the letters "r", "w", and "x" stand for "read", "write", and "execute", respectively. The commas separate the different classes of permissions, and there are no spaces between them.

Here is the equivalent command using octal permissions notation:

chmod 754 myfile

Here the digits 7, 5, and 4 each individually represent the permissions for the user, group, and others, in that order. Each digit is a combination of the numbers 4, 2, 1, and 0:

· 4 stands for "read",

· 2 stands for "write",

· 1 stands for "execute", and

· 0 stands for "no permission."

So 7 is the combination of permissions 4+2+1 (read, write, and execute), 5 is 4+0+1 (read, no write, and execute), and 4 is 4+0+0 (read, no write, and no execute).

1. Add single permission to a file/directory

Changing permission to a single set. + symbol means adding permission. For example, do the following to give execute permission for the user irrespective of anything else:

$ chmod u+x filename

2. Add multiple permissions to a file/directory

Use comma to separate the multiple permission sets as shown below.

$ chmod u+r,g+x filename

3. Remove permission from a file/directory

The following example removes read and execute permission for the user.

$ chmod u-rx filename

4. Change permission for all roles on a file/directory

Following example assigns execute privilege to user, group and others (basically anybody can execute this file).

$ chmod a+x filename

chmod

The chmod command is used to change the permissions of a file or directory. To use it, you specify the desired permission settings and the file or files that you wish to modify. There are two ways to specify the permissions, but I am only going to teach one way.

It is easy to think of the permission settings as a series of bits (which is how the computer thinks about them). Here's how it works:

rwx rwx rwx = 111 111 111
rw- rw- rw- = 110 110 110
rwx --- --- = 111 000 000

and so on...

rwx = 111 in binary = 7
rw- = 110 in binary = 6
r-x = 101 in binary = 5
r-- = 100 in binary = 4

Here is a table of numbers that covers all the common settings. The ones beginning with "7" are used with programs (since they enable execution) and the rest are for other kinds of files.

Value   Meaning

777     (rwxrwxrwx) No restrictions on permissions. Anybody may do anything. Generally not a desirable setting.

755     (rwxr-xr-x) The file's owner may read, write, and execute the file. All others may read and execute the file. This setting is common for programs that are used by all users.

700     (rwx------) The file's owner may read, write, and execute the file. Nobody else has any rights. This setting is useful for programs that only the owner may use and must be kept private from others.

666     (rw-rw-rw-) All users may read and write the file.

644     (rw-r--r--) The owner may read and write a file, while all others may only read the file. A common setting for data files that everybody may read, but only the owner may change.

600     (rw-------) The owner may read and write a file. All others have no rights. A common setting for data files that the owner wants to keep private.

15. umask

About umask

Return, or set, the value of the system's file mode creation mask.

Description

On Linux and other Unix-like operating systems, new files are created with a default set of permissions. Specifically, a new file's permissions may be restricted in a specific way by applying a permissions "mask" called the umask. The umask command is used to set this mask, or to show you its current value.

What Are Permissions, And How Do They Work?

As you may know, each file on your system has associated with it a set of permissions which are used to protect files: a file's permissions determine which users may access that file, and what type of access they have to it.

There are three general classes of users:

1. The user who owns the file ("User")

2. Users belonging to the file's defined ownership group ("Group")

3. Everyone else (“Other”)

In turn, for each of these classes of user, there are three types of file access:

1. The ability to look at the contents of the file ("Read")

2. The ability to change the contents of the file ("Write")

3. The ability to run the contents of the file as a program on the system ("Execute")

So, for each of the three classes of user, there are three types of access. Taken together, this information makes up the file's permissions.

How Are Permissions Represented?

There are two ways to represent a file's permissions: symbolically (using symbols like "r" for read, "w" for write, and "x" for execute) or with an octal numeric value.

For example, when you list the contents of a directory at the command line using the ls command as follows:

ls -l

you will see (among other information) the file permission information for each file. Here, it is represented symbolically, which will look like the following example:

-rwxr-xr--

There are ten symbols here. The first dash ("-") means that this is a "regular" file, in other words, not a directory (or a device, or any other special kind of file). The remaining nine symbols represent the permissions: rwxr-xr--. These nine symbols are actually three sets of three symbols each, and represent the respective specific permissions, from left to right:

symbols meaning

rwx the file's owner may read, write, or execute this file as a process on the system.

r-x anyone in the file's group may read or execute this file, but not write to it.

r-- anyone at all may read this file, but not write to it or execute its contents as a process.

Octal method

Using this method relative permission is provided to files and directories from a scale of 0 to 7 as per the table below

The maximum base permission for a folder is 777 and the minimum is 000.

The maximum base permission for a file is 666 and the minimum is 000.

Calculation

Follow the below table for the default umask value applied to directories.

The below table is for default umask value applied on all the files
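The two tables referred to above were likely images; the calculation itself works by subtracting the umask from the base permissions (777 for directories, 666 for files). A worked example with the common default mask 022 and a stricter 027:

$ umask
0022
# directories: 777 - 022 = 755  (rwxr-xr-x)
# files:       666 - 022 = 644  (rw-r--r--)

$ umask 027           # set a stricter mask for this shell
# directories: 777 - 027 = 750  (rwxr-x---)
# files:       666 - 027 = 640  (rw-r-----)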

16. chown

The chown command changes the owner and owning group of files. Use the chmod command to change file access permissions such as read, write, and execute.

The chown command is most commonly used by Unix/Linux system administrators who need to fix a permissions problem with a file or directory, or many files and many directories. (Where "chown" means "change owner", "chgrp" means "change group".).

Example:

1. If you just want these files to be owned by the user nobody, you'd use this command:

$ chown nobody *.txt

2. How can I make mark as the owner of the file?

$ chown mark file1

With the above command, the group owner will not be changed.

3. How can I change ownership of multiple files to jim user?

$ chown jim /path/to/file1 /path/to/file2 /path/to/file3

or

$ chown jim /path/to/{file1,file2,file3}

17. chgrp

This is a sister command to chown. While chown can change both the owner and the group associated with a file/folder, chgrp changes only the group.

Example 1:

Change the owning group of the file file.txt to the group named hope.

$chgrp hope file.txt

Example2: Give access permissions to a command so that the command can be executed by all users belonging to apache-admins

chgrp apache-admins /etc/init.d/httpd

Example3: Change group ownership all the files located in /var/apache to group:apache

chgrp -R apache /var/apache

Example4:Change group ownership forcefully

chgrp -f apache /var/apache

DIFFERENCE BETWEEN CHOWN AND CHGRP

1) The chown command can be used to change the ownership as well as the group associated with a file, whereas chgrp can change only the group.

2) Many people say that a regular user can only use chgrp to change a file's group, and only to a group the user belongs to. But this is not entirely true: a regular user can use either chown or chgrp to change a file's group to one of their own groups, because chown is located in /bin, so everyone can run it, with some limited access.

 Usages of chgrp command:

1) Used to change group ownership from one group to other group for a file/folder

2) As a security measure if you want to give permissions to a command to some group you can use this command.

18. id

Prints real and effective user and group IDs.

How to use it

By default, the id command is installed on most Linux systems. To use it, just type id on your console. Typing id without any options gives the result below, for the active user.

$ id

Here’s how to read the output :

User pungki has UID number = 1000, GID number = 1000

User pungki is a member of the following groups :

pungki with GID = 1000
adm with GID = 4
cdrom with GID = 24
sudo with GID = 27
dip with GID = 30
plugdev with GID = 46
lpadmin with GID = 108
sambashare with GID = 124

Using id with options

There are some options that can be applied to the id command. Here are some options that may be useful on a day-to-day basis.

Print the user name, UID and all the groups to which the user belongs

To do this, we can use -a option

$ id -a

Output all different group IDs (effective, real and supplementary)

We can use the -G option to fulfill this.

$ id -G

The result will only show the GID numbers. You can compare it with /etc/group file. Here’s a sample of

/etc/group content :

root:x:0:
daemon:x:1:
bin:x:2:
sys:x:3:
adm:x:4:pungki
fax:x:21:
voice:x:22:
cdrom:x:24:pungki
floppy:x:25:
tape:x:26:
sudo:x:27:pungki
audio:x:29:pulse
dip:x:30:pungki
www-data:x:33:
backup:x:34:
operator:x:37:
sasl:x:45:
plugdev:x:46:pungki
ssl-cert:x:107:
lpadmin:x:108:pungki
saned:x:123:
sambashare:x:124:pungki
winbindd_priv:x:125:

Output only the effective group ID

Use -g option to output only the effective group ID

$ id -g

Print specific user information

We can output a specific user's UID and GID information. Just put the user name after the id command.

$ id leni

Above command will print UID and GID of user named leni.

19. diff

The UNIX diff command compares the contents of two text files and outputs a list of differences. This command can also verify that two files contain the same data. The syntax is relatively simple:

$diff [options] file1 file2

Let's say we have two files, file1.txt and file2.txt.

If file1.txt contains the following four lines of text:

I need to buy apples.

I need to run the laundry.

I need to wash the dog.

I need to get the car detailed.

...and file2.txt contains these four lines:

I need to buy apples.

I need to do the laundry.

I need to wash the car.

I need to get the dog detailed.

...then we can use diff to automatically display for us which lines differ between the two files with this command:

$diff file1.txt file2.txt

2,4c2,4

< I need to run the laundry.

< I need to wash the dog.

< I need to get the car detailed.

---

> I need to do the laundry.

> I need to wash the car.

> I need to get the dog detailed.

Output:

In our output above, "2,4c2,4" means: "Lines 2 through 4 in the first file need to be changed in order to match lines 2 through 4 in the second file."

Lines preceded by a < are lines from the first file;

lines preceded by > are lines from the second file.

20. sed (Refer page no.4)

21.cmp

cmp is used to compare two files byte by byte. If a difference is found, it reports the byte and line number where the first difference is found. If no differences are found, by default, cmp returns no output.
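A hedged example, assuming two small text files that differ somewhere on their second line (the exact byte and line numbers reported depend on the files and the cmp version):

$ cmp file1.txt file2.txt
file1.txt file2.txt differ: byte 25, line 2

$ cmp file1.txt file1.txt     # identical input: no output, exit status 0
$ echo $?
0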

22. comm

Compare two sorted files line-by-line.

EXAMPLES

To provide examples for this command, let's consider two sorted files (see the example sketch after this list):

1. Simple Command Usage

2. Suppress first column

3. Suppress second column

4. Suppress third column
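A hedged sketch for the four cases above, assuming two small sorted files fileA and fileB (comm prints three tab-separated columns: lines only in the first file, lines only in the second file, and lines common to both):

$ cat fileA
apple
banana
cherry
$ cat fileB
banana
cherry
date

$ comm fileA fileB        # 1. simple usage: all three columns
apple
		banana
		cherry
	date

$ comm -1 fileA fileB     # 2. suppress column 1 (lines unique to fileA)
$ comm -2 fileA fileB     # 3. suppress column 2 (lines unique to fileB)
$ comm -3 fileA fileB     # 4. suppress column 3 (lines common to both files)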

23. Introduction to pipes

You can connect two commands together so that the output from one program becomes the input of the next program. Two or more commands connected in this way form a pipe. To make a pipe, put a vertical bar (|) on the command line between two commands.
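A few hedged examples of simple pipelines (file names are hypothetical):

$ cat /etc/passwd | wc -l      # count the number of user accounts on the system
$ ls -l | grep "^d"            # list only the sub-directories of the current directory
$ sort file1 | uniq -c         # sort a file, then count repeated lines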

24.Backup Commands:

1. tar

The Linux "tar" stands for tape archive, and is used by a large number of Linux/Unix system administrators to deal with tape drive backups. The tar command is used to rip a collection of files and directories into a highly compressed archive file commonly called a tarball or tar file.

1. Create tar Archive File

c – Creates a new .tar archive file.

v – Verbosely show the .tar file progress.

f – File name type of the archive file.

2. Create tar.gz Archive File

To create a compressed gzip archive file we use the option as z.

3. Untar tar Archive File

To untar or extract a tar file, just issue following command using option x (extract)

4. List Content of tar Archive File

To list the content of tar archive file, just run the following command with option t (list content).
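The tar commands for the four cases above were likely screenshots; a hedged sketch using a hypothetical archive name and directory:

$ tar -cvf tecmint.tar /home/tecmint/          # 1. create a .tar archive file
$ tar -czvf tecmint.tar.gz /home/tecmint/      # 2. create a compressed .tar.gz archive (z = gzip)
$ tar -xvf tecmint.tar                         # 3. untar/extract an archive (x = extract)
$ tar -tvf tecmint.tar                         # 4. list the contents of an archive (t = list)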

2. cpio (copy in, copy out)

cpio is a tool for creating and extracting archives (for example, *.cpio or *.tar files), or copying files from one place to another.

1. Create *.cpio Archive File

You can create a *.cpio archive that contains files and directories using cpio -ov

2. Extract *.cpio Archive File

cpio extract: To extract a given *.cpio file, use cpio -iv as shown below.

Note: i - extract (copy-in), d - make directories, v - verbose, o - create (copy-out), O - write to the specified archive, t - list contents.

3. Create *.cpio Archive with Selected Files

The following example creates a *.cpio archive only with *.c files.

4. Extract *.tar Archive File using cpio command

You can also extract a tar file using cpio command as shown below.

5. View the content of *.tar Archive File

To view the content of *.tar file, do the following.
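The cpio commands for the five cases above were likely screenshots; a hedged sketch with hypothetical archive names (cpio reads the list of files to archive from standard input, which is why ls is piped into it):

$ ls | cpio -ov > archive.cpio       # 1. create a *.cpio archive from the files in the current directory
$ cpio -idv < archive.cpio           # 2. extract a *.cpio archive (d creates directories as needed)
$ ls *.c | cpio -ov > source.cpio    # 3. create an archive containing only the selected *.c files
$ cpio -idv -F backup.tar            # 4. extract a *.tar archive using cpio
$ cpio -it -F backup.tar             # 5. list the contents of a *.tar archive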

3. zip

Common uses for zip files include the need to save space and for copying a large number of files from one place to another. If you have 10 files which are all 100 megabytes in size and you need to transfer them to an ftp site, then depending on your upload speed this could take a considerable amount of time. If you compress all 10 files into a single zip archive and the compression reduces the file size to 50 megabytes per file, then you only have to transfer half a gigabyte of data instead of the full gigabyte required to send an uncompressed file.

The syntax of zip command is

The options of zip command are

1. HOW TO INSTALL ZIP AND UNZIP COMMAND IN LINUX?

2.Zipping individual files

3. Extracting files from zip

$unzip abc.zip

4. Removing file from a zip file

$ zip -d abc.zip file1

5. Update existing zip file

$ zip -f abc.zip
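The install command and the zip-creation examples above were likely screenshots; a hedged sketch (the install line depends on your distribution, and abc.zip and the file names are hypothetical). The general syntax is zip [options] zipfile files_list.

$ sudo apt-get install zip unzip     # 1. install on Debian/Ubuntu systems
$ sudo yum install zip unzip         #    (or on RHEL/CentOS systems)
$ zip abc.zip file1 file2 file3      # 2. zip individual files into abc.zip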

4. unzip (decompressing files using the Linux unzip command)

1. If the unzip command isn't already installed on your system, then run:

$sudo apt-get install unzip

2. If you want to extract to a particular destination folder, you can use:

$unzip file.zip -d destination_folder

3. How To Decompress A Single Zip File Into The Current Folder

$unzip filename

4. Decompressing Multiple Zip Files

$ unzip '*.zip'

5.Show Detailed Information About A Compressed File

$ unzip -v abc.zip

The verbose output contains the following information:

Length in bytes

Method

Size

Compression %

Date and Time Created

CRC

Name

5. mount

On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices like CDs, DVDs, or USB flash drives can be attached to a certain point (that is, the mount point) in the directory tree, and detached again. To attach or detach a file system, you can use the mount or umount command respectively

1. Listing Currently Mounted File Systems
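Running mount with no arguments lists all currently mounted file systems:

$ mount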

2. Listing Currently Mounted ext3 File Systems

$ mount -t ext3

3. Mount a CD-ROM

$ mount -t iso9660 /dev/cdrom /mnt

4. Mount a Floppy Disk

$ mount /dev/fd0 /mnt

6. umount

umount stands for unmount, which unmounts the file system. Use umount to unmount a device / partition by specifying the directory where it has been mounted.

1. Unmount a file system

2. Forcefully unmount a busy device (see the sketch below for the commands)

[or] If the device is still busy, do a lazy unmount:

$ umount -l /mnt
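The commands for the first two cases were likely screenshots; a minimal sketch, assuming the file system is mounted at /mnt:

$ umount /mnt        # 1. unmount the file system mounted at /mnt
$ umount -f /mnt     # 2. forcefully unmount a busy device (e.g. an unreachable NFS mount)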
