Linux commands

Here is a summary (or cheatsheet) of the main commands used in a Linux environment.

File Operations

du and tree -du (Estimate Space Usage With Depth Levels)

du is a popular tool to quickly list files, folders, and subfolders up to a certain depth (using --max-depth=N or -d N), including their sizes. Here, -a can be used to include files as well as folders, and -h converts sizes into a more human-readable format (e.g., M for megabytes, K for kilobytes).

$ du -ah --max-depth=2
7.5M    ./payments/dist
64K     ./payments/lib
74M     ./payments/node_modules
28K     ./payments/src
60K     ./payments/test
82M     ./payments
...
108K    ./transaction_engine/dist
44K     ./transaction_engine/lib
62M     ./transaction_engine/node_modules
16K     ./transaction_engine/src
0       ./transaction_engine/test
62M     ./transaction_engine
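
TIP: du output is not sorted. To find the biggest directories quickly, you can pipe it through sort; this sketch assumes GNU coreutils, where -h sorts human-readable sizes and head keeps only the top entries:

$ du -h --max-depth=1 | sort -hr | head -n 5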

We can also use the tree command with --du and -h to view the same information in a more organized, tree-like structure. Here, the depth can also be specified (using -L N):

$ tree -L 2 --du -h
.
├── [ 36K]  payments
│   ├── [2.4K]  deploy.sh
│   ├── [4.0K]  dist
│   ├── [4.0K]  lib
│   ├── [4.0K]  node_modules
│   ├── [1.5K]  package.json
│   ├── [ 356]  readme.md
│   ├── [4.0K]  src
│   ├── [4.0K]  test
│   └── [ 406]  tsconfig.json
└── [ 30K]  transaction_engine
    ├── [2.4K]  deploy.sh
    ├── [4.0K]  dist
    ├── [4.0K]  lib
    ├── [4.0K]  node_modules
    ├── [ 570]  package.json
    ├── [ 658]  readme.md
    ├── [4.0K]  src
    └── [ 397]  tsconfig.json

TIP: Use a custom alias in a .bashrc file to simplify tree.

$ alias t="tree --du -h -L"
$ t 2
.
├── [ 36K]  payments
│   ├── [2.4K]  deploy.sh
│   ├── [4.0K]  dist
│   ├── [4.0K]  lib
│   ├── [4.0K]  node_modules
│   ├── [1.5K]  package.json
│   ├── [ 356]  readme.md
│   ├── [4.0K]  src
│   ├── [4.0K]  test
│   └── [ 406]  tsconfig.json
└── [ 30K]  transaction_engine
    ├── [2.4K]  deploy.sh
    ├── [4.0K]  dist
    ├── [4.0K]  lib
    ├── [4.0K]  node_modules
    ├── [ 570]  package.json
    ├── [ 658]  readme.md
    ├── [4.0K]  src
    └── [ 397]  tsconfig.json

ncdu (NCurses Disk Usage)

This tool is more interactive and quickly provides the needed information. It is not part of the default installation and has to be installed first:

$ sudo apt-get install ncdu

Once installed, run ncdu followed by the path to the directory you want to analyze:

$ ncdu /path/to/directory
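
Navigation inside ncdu is interactive. A few commonly-used keys (this list is from memory; press ? inside ncdu for the full list):

d - delete the selected file or directory
n - sort by file name
s - sort by file size
q - quit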

split (Split a File Into Pieces)

split lets you split a file into multiple pieces based on different criteria, such as the number of chunks (-n) or the number of lines per chunk (-l). It is really helpful when you need to break large log files into smaller pieces quickly.

// Split by number of equal chunks
$ split -n 2 test test_copy_
$ nl test_copy_aa
     1  Hello
$ nl test_copy_ab
     1  World
// Split by number of lines per chunk
$ split -l 100 logFileWith250Lines log_
$ nl log_aa | tail -n 1
     100  I'm Log Line #100
$ nl log_ab | tail -n 1
     100  I'm Log Line #200
$ nl log_ac | tail -n 1
     50  I'm Log Line #250
$ nl log_aa log_ab | tail -n 1
     200  I'm Log Line #200
$ nl log_aa log_ab log_ac | tail -n 1
     250  I'm Log Line #250
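
TIP: The pieces can be reassembled later with cat (the output name test_restored below is just an example):

$ cat test_copy_aa test_copy_ab > test_restored
$ nl test_restored
     1  Hello
     2  World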

shred (A Secure Alternative For rm)

When you want to delete a file on Linux, the first tool that comes to mind is rm. The issue is that rm only removes the reference to the file that the OS knows about: the file disappears from its location and becomes hidden from users, but its data can still reside somewhere on the disk for a period of time, and an advanced user may be able to recover and access it fairly easily.

In cases where you want deleted files to be removed from your system in an irrecoverable manner, especially when they contain sensitive information, you can use shred, a tool that overwrites the file to hide its contents and can optionally delete it as well (-u removes the file).

// Commonly-used approach (not recommended)
$ rm passwords.txt
// Basic usage - shred
$ shred -u passwords.txt
// Useful options - shred
$ shred -zvu -n 5 passwords.txt
-z: add a final overwrite with zeros to hide shredding
-v: show progress (i.e. verbose)
-u: truncate and remove file after overwriting
-n: overwrite N times instead of the default (3)

Note: shred is a nice and simple tool to delete sensitive files quickly. There are other similar tools as well, for example wipe (for securely erasing files from magnetic storage) and srm (from the secure-delete package, with more advanced options).

> file.txt (Flush Content in a File)

Do you need to write terminal output or logs to the same file repeatedly? Instead of running the rather crude rm log.txt && touch log.txt, you can simply use > log.txt. This command flushes all the content of the file and gives you a fresh, empty copy.

$ cat user_logins.csv
user_A,8.19AM,US North
user_B,8.22AM,UK
user_C,8.32AM,Australia
$ > user_logins.csv
$ cat user_logins.csv
// ... file is empty now ...
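
Note: A bare > works in bash but may behave differently in other shells. Portable alternatives are : > file (the : built-in produces no output) or truncate -s 0 file (GNU coreutils):

$ : > user_logins.csv
$ truncate -s 0 user_logins.csv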

cat (Concatenate Files and Print) and tac (cat in Reverse Order)

cat is widely used to concatenate files and print their content on the standard output. If you want to print the lines in reverse order, you can use tac.

$ cat user_logins.csv
user_A,8.19AM,US North
user_B,8.22AM,UK
user_C,8.32AM,Australia
$ tac user_logins.csv
user_C,8.32AM,Australia
user_B,8.22AM,UK
user_A,8.19AM,US North
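
The concatenation part is just as simple: pass several files and redirect the result (the file names here are illustrative):

$ cat header.csv user_logins.csv > all_logins.csv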

nl (Standard Output With Numbered Lines)

In order to get numbered lines in the output of the cat command, you can use cat -n or simply nl, which is shorter and dedicated to exactly this purpose.

$ cat test.txt
Hello
World
$ cat -n test.txt 
     1  Hello
     2  World
$ nl test.txt
     1  Hello
     2  World
$ nl .bash_history | grep netstat
  1577  netstat -tulnp | grep 1383
  1915  netstat -tulnp | grep 1383
  1916  netstat -tulnp | grep 6379

Note: If you need numbered lines with less, you can simply use less -N <fileName>. To show numbered lines in the vi editor, first open the file (vi <fileName>) and type :set nu. If you already know the line number before opening the file, type vi +12 <fileName> to jump directly to that line (here, line 12) when the file opens.

sort (Sort Content in a File)

If you want to sort lines of a file, this tool does exactly that.

$ cat zip_codes.txt
94801 Richmond
94112 San Francisco
90210 Beverly Hills
94102 San Francisco 
95812 Sacramento
94112 San Francisco

$ sort zip_codes.txt
90210 Beverly Hills
94102 San Francisco
94112 San Francisco
94112 San Francisco
94801 Richmond
95812 Sacramento
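
A few commonly-used options, shown here on the same file (see man sort for the full list):

$ sort -n -k 1 zip_codes.txt
-n: compare numerically instead of lexically
-k 1: sort by the first column (here, the ZIP code)
-r: reverse the order
-u: output each duplicated line only once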

uniq (Omit Repetitions and Display Only Unique Lines)

If you are interested in reducing the output to unique lines or getting only the duplicates (-d), uniq provides a lot of options for that. Note that uniq only compares adjacent lines, which is why the input is sorted first. Here, -c prefixes each line with its occurrence count.

$ sort zip_codes.txt | uniq -c
1 90210 Beverly Hills
1 94102 San Francisco
2 94112 San Francisco
1 94801 Richmond
1 95812 Sacramento

$ sort zip_codes.txt | uniq -c -d
2 94112 San Francisco
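
To hide the duplicates entirely and keep only the lines that occur exactly once, use -u:

$ sort zip_codes.txt | uniq -u
90210 Beverly Hills
94102 San Francisco
94801 Richmond
95812 Sacramento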

head (Display Beginning of a File) and tail (Display End of a File)

Do you want to see a certain number of lines from the beginning/end of a file? head and tail serve that purpose. You can use -n to specify the preferred number of lines and, with tail, -f to display appended data as the file grows dynamically (i.e., keep printing, aka follow).

$ head -n 2 user_logins.csv
user_A,8.19AM,US North
user_B,8.22AM,UK

$ tail -n 2 user_logins.csv
user_B,8.22AM,UK
user_C,8.32AM,Australia
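
With tail, -f is handy for watching a log file as it is being written (the path below is just an example); press CTRL + C to stop following:

$ tail -f /var/log/syslog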

column -t (Format Content Into Columns)

When you have a set of lines whose fields are separated by whitespace or some delimiter (e.g. a comma, pipe, or underscore), you can use column -t to print them onto standard output in aligned columns, just like a table (-t). The default delimiter is whitespace (e.g. mount | column -t), but you can specify any other character using -s (e.g. cat user_logins.csv | column -t -s,).

// with delimiter
$ cat user_logins.csv
user_A,8.19AM,US North
user_B,8.22AM,UK
user_C,8.32AM,Australia
$ cat user_logins.csv | column -t -s,
user_A     8.19AM     US North
user_B     8.22AM     UK
user_C     8.32AM     Australia
// without delimiter (default to whitespace)
$ mount
/dev/sdb on / type ext4 (rw,relatime,discard,errors=remount-ro,data=ordered)
tmpfs on /mnt/wsl type tmpfs (rw,relatime)
tools on /init type 9p (ro,relatime,dirsync,aname=tools;fmask=022,loose,access=client,trans=fd,rfd=6,wfd=6)
...
$ mount | column -t
/dev/sdb     on  /                          type  ext4         (rw,relatime,discard,errors=remount-ro,data=ordered)
tmpfs        on  /mnt/wsl                   type  tmpfs        (rw,relatime)
tools        on  /init                      type  9p           (ro,relatime,dirsync,aname=tools;fmask=022,loose,access=client,trans=fd,rfd=6,wfd=6)
....

less (Display Content of a File)

less is a very widely-used tool to display the content of large files in a scrollable and filterable way. It has so many features for pattern search and advanced navigation. The following are a few of the most frequently-used options you can start with.

$ less -N file.txt
-N - Show line numbers
&pattern - display only lines with pattern
? - search a pattern backward e.g. ?/src/payments/
/ - search a pattern forward e.g. /\/src\/payments\/
n - jump to the next match forward
N - jump to the previous match backward
G - jump to the end of file
g - jump to the start of file
10j - jump 10 lines forward
10k - jump 10 lines backward
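
less can also follow a growing file, similar to tail -f, when opened with +F (press CTRL + C to stop following and browse, then q to quit; the file name below is just an example):

$ less +F app.log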

Other Useful Linux Commands for File Operations

pwd =  print working directory
cd <directory> = change directories and navigate the file system (e.g. cd ~/logs/)
cd = jump to user's home directory 
cd .. = jump to upper directory level 
cd ../../.. = jump 3 directory levels up
cd - = jump back to the previous directory
ls = list files and folders in current directory
ls -alh = -a: include hidden files (e.g. .ssh), -l: long listing with detailed info, -h: sizes in human-readable format (e.g. MB, KB)
ls -S = -S: sort by file size, largest first
ls -t = -t: sort by modification time, newest first
touch <file1> = create an empty file (e.g. touch file.txt)
mv <location1/file1> <location2/> = move file between locations (file1 goes inside location2)
mv <location1/file1> <location2/file2> = move file between locations and rename (file1 goes inside location2 and becomes file2)
mv <location1/file1> . = move file to current directory (file1 goes inside current directory location)
mv <location1/*> <location2/> = move all content inside one location to another location (content inside location1 goes inside location2)
mv <existing_folder1/> <existing_folder2/> = move folder location (existing_folder1 goes inside existing_folder2)
mv <existing_folder1/> <new_folder/> = rename folders (existing_folder1 becomes new_folder)
mv <existing_file1> <new_file> = rename files (existing_file1 becomes new_file)
cp <location1/file1> <location2/> = copy file between locations (a copy of file1 goes inside location2)
mkdir folder1 = create a folder in current directory
mkdir -p folder1/folder2/folder3 = create entire folder path if it doesn't exist
rm file1 = remove a file
rm -rf folder1 = remove an entire folder
find -name file1 = search files inside current folder and its subfolders by file name
find <location1> -name file1 = search files inside some location by file name
find -size +100M = search files inside current folder and its subfolders by file size
find -user root = search files inside current folder and its subfolders by file owner
locate file1 = search files across the entire system
sudo !! = rerun the last command as root
<cmd1> | xargs <cmd2> = pass output from one command as arguments to another (see the combined sketch after this list)
<cmd1> | tee file1 = send output of a command/script both to standard output on the terminal and to a specific file
<cmd1> > file1 = send output of a command only to a specific file
ln -s <location1/file1> link = create a symbolic link to a file from the current directory (link goes inside current directory)
df = display free disk space on mounted filesystems
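
As a small sketch tying a few of the entries above together (the file pattern and report name are illustrative): find selects log files larger than 100M, xargs passes them to du, and tee writes the report to a file while still printing it on the terminal.

$ find . -name "*.log" -size +100M -print0 | xargs -0 du -h | tee large_logs_report.txt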

Keyboard Shortcuts
CTRL + L = clear the currently-typing screen and get a fresh terminal (similar to typing `clear`)
CTRL + U = clear the currently-typing command and get a fresh line
CTRL + R = reverse-search the past commands recorded in bash history

TIP: Use man (provided by the man-db package or a similar tool) to explore all available options for the above tools.

$ sudo apt install man-db
$ man du
$ man tree
$ man tail
$ man sort 
$ man less

Last updated: 5 December 2023