Notes
Table of Contents
- 1. Linux 1
- 1.1. Class
- 1.2. Home
- 1.2.1. Kernel
- 1.2.2. Aliases
- 1.2.3. less commands:
- 1.2.4. man page types
- 1.2.5. uname commands:
- 1.2.6. PATH directories
- 1.2.7. file types
- 1.2.8. find [OPTIONS]… [starting-point…] [expression]
- 1.2.9. sed
- 1.2.10. The vi editor
- 1.2.11. Regular Expressions
- 1.2.12. Hardware resources
- 1.2.13. Device Management
- 1.2.14. Kernel Modules
- 1.2.15. The Booting Process
- 1.2.16. Bootloaders
- 1.2.17. Runlevels
- 1.2.18. Partitioning
- 1.2.19. Mounting Filesystems
- 1.2.20. Mounting Filesystems Automatically On Boot
- 1.2.21. The loop option
- 1.2.22. df
- 1.2.23. Filesystem Issues
- 1.2.24. du
- 1.2.25. tune2fs
- 1.2.26. Fixing the Filesystem
- 1.2.27. Managing Shared Libraries
- 1.2.28. Package management
- 2. Linux 2
- 2.1. Class
- 2.2. Home
- 2.2.1. User and System Account Files
- 2.2.2. Advanced Shell Features
- 2.2.3. Shell Scripts
- 2.2.4. X Window
- 2.2.5. Graphical Desktops
- 2.2.6. Installing the Desktop Environment
- 2.2.7. Localization
- 2.2.8. Remote Desktop Environments
- 2.2.9. XDMCP
- 2.2.10. RDP
- 2.2.11. Accessibility
- 2.2.12. Scheduling Jobs
- 2.2.13. Localization
- 2.2.14. System Time
- 2.2.15. System Logging
- 2.2.16. Email Configuration
- 2.2.17. Printer Management
- 2.2.18. Networking Fundamentals
- 2.2.19. Network Configuration
- 2.2.20. Network Troubleshooting
- 2.2.21. Account Security
- 2.2.22. Host Security
- 2.2.23. Encryption
1. Linux 1
1.1. Class
1.1.1. Course 1
- Kdump
- feature of linux kernel that creates crash dumps in the event of a kernel crash. When triggered, kdump exports a memory image (also known as vmcore) that can be analyzed for the purposes of debugging and determining the cause of a crash.
- LVM
- Logical Volume Management (Better suited for servers and higher capacity partitions)
1.1.2. Course 2
- "./" = the current directory
- "../" = the parent directory (one level up)
- mkdir -p Folder{1..10}/Subfolder{1..5} creates 10 folders and inside each of them it creates 5 subfolders
1.1.3. Course 3
- echo `ls` runs the command and prints its output, not the literal text (command substitution)
- echo 'COMMAND' — single quotes disable special characters and display the text itself
- touch two\ names — the backslash escapes the character right after it (here, the space), so one file named "two names" is created
- if you copy with sudo, the new owner is root
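The quoting rules above can be seen side by side in a scratch directory (the file name is just for the demo):

```shell
# Quoting demo: backticks/$() substitute command output; single quotes stay literal.
tmp=$(mktemp -d) && cd "$tmp"
touch two\ names              # the backslash escapes the space: one file, "two names"

echo "dir contains: $(ls)"    # command substitution runs ls and inserts its output
echo '$(ls)'                  # single quotes print the text itself, no substitution
```

Running it shows the first echo expanding to the directory listing while the second prints the literal string `$(ls)`.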
1.1.4. Course 4
- Filesystem
- /bin = binary files that are executable
- /dev/full = a device that always reports "disk full" (ENOSPC) on writes; useful for testing how programs handle a full disk
- /dev/random = it generates random bits (0 1)
- _/dev/zero_ = produces an endless stream of zero bytes; handy for creating files of a given size (for example with dd)
- /dev/null = everything that get redirected to it will get transformed into nothing
- /etc = it includes all the system configs
- lib,lib64 = libraries
- /mnt = temporary mount point for filesystems and devices
- /media = mount points for removable media (USB sticks, CDs)
- opt = optional software (made by us, third party)
- /proc = processes
- /run = configs for the running services
- /srv = services installed by us
- /sys = system files
- /tmp = temporary files
- /usr = user files
- /var = variable data files (logs, spools, caches etc, critical to system functions)
- /root = home directory for root
- Text editing
- nl = numbers the lines in a file, but doesn’t alter it
- paste = combines 2 text files (paste -d , pastes with a delimiter)
- pipe | = redirects the output of the first command into an input for the second
- cut = extracts info from a text (ex: ll | cut -d. -f1)
- sort = sorts items
- uniq = ignores all the duplicates entries
- od -x file.txt = shows the file contents as hexadecimal characters
- | tr 'a-z' 'A-Z' = translates characters from one set to another (here, lowercase to uppercase)
- wc = counts how many lines, words, and bytes
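The text-editing commands above chain together naturally in pipelines; a small demo on a made-up sample file (users.csv and its contents are invented for the illustration):

```shell
# A small text-processing pipeline on a throwaway CSV file.
tmp=$(mktemp -d) && cd "$tmp"
printf 'bob,admin\nana,user\nbob,admin\ncarl,user\n' > users.csv

cut -d, -f2 users.csv | sort | uniq -c           # count each role (uniq needs sorted input)
cut -d, -f1 users.csv | sort -u | tr 'a-z' 'A-Z' # unique names, upper-cased
wc -l users.csv                                  # how many lines in the file
```

Note the `sort` before `uniq`: uniq only collapses *adjacent* duplicates, so unsorted input would leave repeats behind.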
1.1.5. Course 5
- Regular Expressions
- grep
- grep e..e finds matches for the pattern; each dot matches any single character
- grep -w e..e finds only complete/full words
- grep [abc] matches any of the characters a, b, or c (a comma inside the brackets is treated as a literal comma)
- grep [^abc] negates the list: it matches any character that is not a, b, or c
- grep -i makes the search case-insensitive
- grep 'e*' matches zero or more e characters (so every line matches, since zero occurrences also count)
- grep '^im' anchors the search at the beginning of the line
- grep 'im$' anchors the search at the end of the line
- grep -E (extended) or egrep
- grep -E 'b(e|t)' searches for input that starts with b and continues with e or t
- grep -E 'b(e|t)+' the subpattern is repeated 1 or more times, basically it finds the whole run
- "*" the subpattern is repeated 0 or more times
- "?" the subpattern is repeated 0 times or at most once
- grep -E 'b(e?)' matches b followed by at most one e
- grep -E 'b(e*)' matches b followed by any number of e characters
- {} sets the min and max of how many times the pattern repeats
- grep -E 'a{1,3}' finds a repeated a minimum of 1 time and a maximum of 3 times
- | this means or, find pattern1, if not find pattern2, if not find pattern3
- grep -E ’Nokia|Atos|Juniper’ finds Nokia or Atos or Juniper or all the them
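The extended-regex operators above can be tried out on a throwaway word list (the sample words are invented for the demo):

```shell
# grep -E demo on a small sample file.
tmp=$(mktemp -d) && cd "$tmp"
printf 'bet\nbee\nboat\nNokia\nAtos\n' > words.txt

grep -E 'b(e|t)' words.txt      # b followed by e or t: matches bet, bee
grep -E 'be+t?' words.txt       # one or more e after b, then an optional t
grep -E 'e{2}' words.txt        # exactly two consecutive e characters: bee
grep -E 'Nokia|Atos' words.txt  # alternation: either word matches
```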
1.1.6. Course 6
- Redirections
- | (pipe) sends the output of one command as input to another
- > (stdout) sends the output of a command to a file, overwriting everything in the file
- >> (append) sends the output of a command to a file, but adds it to the end instead of overwriting
- 2> (stderr) sends the errors of a command to a file
- < (stdin) feeds the text from a file into a command for further use
- tee = pipe on steroids: it shows the output of the command in the terminal and sends it to a file at the same time
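All of the redirections above in one runnable sketch (file names are throwaway examples):

```shell
# Redirection demo in a scratch directory.
tmp=$(mktemp -d) && cd "$tmp"

echo "first"  > out.txt             # > overwrites the file
echo "second" >> out.txt            # >> appends instead
ls missing-file 2> err.txt || true  # 2> captures only stderr (|| true: this ls is meant to fail)
sort < out.txt                      # < feeds the file to stdin
echo "both" | tee tee.txt           # tee prints to the terminal AND writes the file
```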
- Managing Processes
- Everything is a process
- A bigger more complex process might need subprocesses
- ps (process status) it lists processes in your current terminal
- ps aux shows all the processes (BSD-style options)
- ps -ef shows the same processes in System V style, with slightly different columns
- PPID (Parent Process Id) the id of the parent process
- pgrep (grep for processes) pgrep -a to show more info
- watch runs a command repeatedly and shows its output in real time (-n X refreshes every X seconds)
- top (a basic built-in task manager; htop is much nicer)
- zombie process: a process that has terminated but whose parent hasn't collected its exit status yet; a process whose parent died is an orphan (it gets adopted by init)
- Foreground Processes
- It prevents the user from using the terminal until the process is complete
- sleep 100 & starts the process in the background and lets you keep using the terminal
- Processes moved by the user between foreground and background are called jobs
- jobs (the command) it shows the running jobs
- fg 2 it will put the job in foreground
- bg 2 it will put the process in background
- ctrl+z it suspends a process
- kill manages the processes (kill -l to see all the options, IMPORTANT: 18, 19, 9)
- killall kills all processes matching a given name
- kill -1 (SIGHUP) the hangup signal, basically sent on logout
- nohup [COMMAND] prevents the process from being stopped when the terminal closes (it ignores SIGHUP)
- the priority is adjusted by the user via NI (Nice Value)
- if the NI is low the priority is high
- if the NI is high the priority is low
- NI takes values between -20 and 19 (-20 gives the highest priority, 19 the lowest)
- in top you can kill a process with k and change the nice value with r (renice)
- uptime shows the uptime and additional info
- renice adjusts the nice value of a running process directly from the terminal
- nice sets the starting nice value when launching a command; in practice you'll end up using renice anyway
1.1.7. Course 7
- Archive commands
- archiving means combining multiple files and folders into a single file
- gzip
- gzip file compresses the file into the .gz format and replaces the original file (by default); in the class example it achieved about a 56% compression rate
- zcat will read compressed text files
- gzip -d or gunzip file will decompress a file
- gzip -l, shows info about the archive
- it’s a lossless compression format
- bzip2
- bzip2 file, compresses a file in the .bz2 format, it replaces the original file by default
- it’s still lossless
- slower than gzip to compress, but it usually achieves better compression
- to see the contents of the file use bzcat
- decompression is the same, bzip2 -d file or bunzip2 file
- xz
- xz file
- xzcat will show the contents of the archived file
- unxz or xz -d will decompress the file
- it’s still lossless
- _tar_
- tar does not compress files by default
- it has 3 options, -c create, -t list, -x extract
- compression options, -z gzip, -j bzip2, -J xz
- tar -czf ARCHIVE.tar.gz DIRECTORY
- -f names the archive file to create or read
- by convention, end the name with .tar.gz, .tar.bz2, or .tar.xz to match the compression used
- Example: tar -czf name.tar.gz file
- tar -xzf file -C ../ — with -C you can change the directory where the files are extracted
- tar -xzf file file/subfolder extracts individual subdirectories from the tar archive; list them first with -t
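A full tar round-trip, start to finish, in a scratch directory (the project layout is invented for the demo):

```shell
# tar round-trip: create, list, extract to another directory.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p project/docs
echo "hello" > project/docs/readme.txt

tar -czf project.tar.gz project        # -c create, -z gzip, -f archive name
tar -tzf project.tar.gz                # -t lists the contents
mkdir extracted
tar -xzf project.tar.gz -C extracted   # -x extract, -C picks the target directory
cat extracted/project/docs/readme.txt
```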
- zip
- zip file
- it compresses with the DEFLATE algorithm (the same family gzip uses)
- unzip to extract
- zip -r File.zip File (recursive, for directories)
- unzip name -d ../ (use -d to change the target directory)
- cpio
- ls | cpio -ov > archive.cpio archives all the files in the directory into one file
- echo name.txt | cpio -o -A -F archive.cpio appends a file to an existing archive
- cpio -idv < path/to/archive.cpio extracts the files
- dd
- it copies bit by bit
- dd if=INPUT_FILE of=OUTPUT_FILE bs=BLOCK_SIZE (block size) count=N
- Example: dd if=/dev/zero of=./emptyfile bs=1M count=710
- File Permissions
for reference, use ls -l (often aliased as ll)
- "-" means file
- d means directory
- the first group of 3 characters holds the permissions for the owner
- the second group of 3 characters holds the permissions for the group
- the third group of 3 characters holds the permissions for other
- r = read, w = write, x = execute (in this order)
- on directories, x is the permission to enter (traverse) them
- chmod changes the permissions: - removes a permission, + adds a permission, = replaces all permissions with the given value
- chmod u+r adds the r permission for the user
- chmod g+r adds the r permission to the group
- chmod o+r adds the r permissions to other
- r = 4 , w = 2 , x = 1
- chmod 740 file user has all permissions, group has read permissions, other has none
- chown changes the owner of the file
- chgrp changed the group of the file
- chown user:group file
- setuid (4000, shown as s/S on the owner's execute bit) causes an executable file to run under the file owner's identity, instead of the user running the command
- chmod 4700 file — the owner's permissions will show an "s"
- a capital S means setuid is set but the owner has no execute permission
- a lowercase s means setuid plus execute permission for the owner
- setgid (2000) is the same for groups: the executable runs under the group owner's identity instead of the group of the user running the command
- the sticky bit (1000, shown as t/T) on a directory means files inside it can only be deleted by the file's owner, the directory's owner, or root (e.g. /tmp)
- umask changes the default permissions of newly created files for the current session (until a reboot or a disconnect from the server)
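The numeric and symbolic chmod forms above, checked in a scratch directory (`stat -c` for the octal view is GNU coreutils syntax):

```shell
# Permission demo: numeric then symbolic chmod, verified with stat.
tmp=$(mktemp -d) && cd "$tmp"
touch report.txt
chmod 740 report.txt      # u=rwx (4+2+1=7), g=r (4), o=none (0)
stat -c '%a' report.txt   # octal view of the current permissions
chmod g+w report.txt      # symbolic form: add write for the group
stat -c '%a' report.txt   # group digit is now 6 (r+w)
```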
1.1.8. Course 8
- Hardware
- to see cpu info do cat /proc/cpuinfo
- you can virtualize a vm in “full” mode or “segmented” mode
- with full you use the whole cpu, with segmented you use a part of it
- with segmented you will make sure that the host has always x amount of cores
- RAM temporarily stores system and program data; every running program lives in RAM
- creating swap:
- create an empty file of the desired size: dd if=/dev/zero of=./swapmem bs=1M count=1024
- format it as swap: mkswap swapmem
- enable it: sudo swapon swapmem
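The swap-creation steps can be sketched end to end; the size here is tiny just for the demo, and the final swapon needs root so it is left commented:

```shell
# Swap-file sketch: dd -> chmod -> mkswap -> (swapon as root).
tmp=$(mktemp -d) && cd "$tmp"
dd if=/dev/zero of=swapmem bs=1M count=8  # 1) empty file of the desired size
chmod 600 swapmem                         # swap files should not be world-readable
mkswap swapmem                            # 2) write the swap signature
# sudo swapon swapmem                     # 3) enable it (requires root)
```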
- Firmware
- Mass Storage Devices: SCSI (Small Computer System Interface), IDE/PATA (older parallel-interface drives), SATA (the common serial interface), USB (yes)
- storage interfaces are either parallel or serial (the ones starting with S are serial)
- usb-devices (far more detailed output than lsusb)
- magistrala = bus
- daemon= service
- kernel modules (basically drivers), use lsmod for a list of them
- The boot process
- the MBR contains the partition table (plus the first-stage bootloader)!!
- The BIOS has 3 main jobs:
- POST(power on self test), ensures hardware is functioning properly
- Enumerate available hardware such as memory, disks, and USB devices
- Find the proper boot drive from the available storage devices and load the MBR
- Bootloader Stage:
- The MBR contains the first stage, whose purpose is to load the second stage
- The second stage loads the Linux kernel into memory and executes it.
- Kernel Stage:
- The kernel initializes the hardware drivers and gets the root (/) filesystem mounted for the next stage
- The kernel typically lives in the /boot partition
- As it boots, it starts the PID 1 process (init)
- The init Stage:
- Final booting Stage
- The First process of the operating system is started
- The init process has three important responsibilities:
- Continue the booting process to get services running, login screens displaying, and consoles listening.
- Start all other system processes.
- Adopt any process that detaches from its parent.
- initramfs:
- the initial root filesystem that Linux typically has access to.
- It’s the starter filesystem
- it’s a cpio archive, contents are unpacked by the kernel and loaded into ram
- After being unpacked, the kernel will launch the init script
- if drivers are loaded from the initramfs, the RAM it occupied can be freed and reused after the real root filesystem is mounted
- for logs you can use dmesg, including the very first boot ones
- journalctl logs for the services and it is more detailed
- Kernel messages and other system-related messages are typically stored in the /var/log/messages file. This file, which is considered the main log file, is alternatively named /var/log/syslog in some distributions.
- On a systemd-based system, the journald daemon is the logging mechanism, and it’s configured by the /etc/systemd/journald.conf
- The main log location on a systemd-based system is the /var/log/journal directory for persistent logging, or /run/log/journal for RAM-based, non-persistent logging.
- On System V systems, the main log file is usually /var/log/messages.
1.1.9. Course 9
1.1.10. Course 10
- LVM is a container of other partitions, you can add new storage to them on the fly
- fdisk
- fdisk -l to list partitions
- fdisk /dev/sdax to enter a partition with fdisk
- press m for help
- a to toggle a bootable flag
- b edit the nested BSD disklabel
- c toggle dos compatibility flag
- d delete partition
- F list free unpartitioned space
- n add a new partition
- p print the partition table
- t change partition type
- v verify partition table
- i print information about a partition
- I to load a disk layout from an sfdisk script file
- O to dump the disk layout to an sfdisk script file
- w write the table
- q quit without saving
- g to create a new empty GPT partition table
- l to list all the partition types with their codes
- sfdisk
- used for scripting
- make table backups for partitions
- sfdisk -d /dev/sdX > backup.dump makes a backup of the partition table
- restore it with sfdisk /dev/sdX < backup.dump
- gdisk
- it’s basically fdisk but for GPT
- LVM
- Logical Volume Management
- supports snapshots
- it groups more physical partitions into one logic volume
- create a physical volume with pvcreate
- vgcreate to create a volume group with one or more physical ones
- lvcreate to create a logical volume
- to create a filesystem use mkfs
- mkswap to create a swap file
- you can create a swapfile with dd
- Mounting Filesystems
- use mount to mount the filesystems
- lsblk lists all the partitions
- use -t to specify the filesystem type
- use -o to specify additional options
- umount to unmount the filesystem
- lsof lists all files that are open in the current filesystem
- fuser shows which processes are using the filesystem
- More on the home side for access codes (probably important)
- fuser -k will kill all processes using it, then you can unmount
- for a more forceful approach, send SIGKILL: fuser -k -9
- fstab automatically mounts the drives at boot (worth finally learning how to write it)
- the device can be set with uuid, label or device name (/dev/sda)
- the fifth field (0 or 1) is the dump field, marking the filesystem for dump backups
- the sixth field is the fsck pass number, used to order filesystem checks at boot (0 = don't check)
- there’s also systemd-mount for some reason
- mount -o loop is used for .iso and .img
- df to see the mounted filesystems
- make sure to check home section here as well
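Since fstab keeps coming up, here is what one entry looks like; the UUID and the /data mount point are made-up placeholders, not values from a real system:

```
# <device>                                  <mount point> <type> <options>  <dump> <pass>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3   /data         ext4   defaults   0      2
```

The fifth field marks the filesystem for dump backups (almost always 0); the sixth orders fsck checks at boot (1 for /, 2 for other filesystems, 0 to skip).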
- Package Management
1.1.11. Course 11
- Adding a hdd to a LVM group
- Commands info
- pvs to list the physical volumes
- pvdisplay to display more info about physical volumes
- vgs or vgdisplay to see more info about volume groups
- lvs or lvdisplay to see info about LVM logical volumes
- Actual steps
- Create physical volumes: pvcreate /dev/sdb
- lvmdiskscan -l to see if your changes applied
- add your new pv to an existing volume group: vgextend *vg name* /dev/sdb
- extend your lv to add more storage: lvm lvextend -l +100%FREE /dev/*vg name*/root (or any other lv)
- to enlarge the filesystem use: resize2fs -p /dev/mapper/*vg name*-root (or any other)
- Good Job!
- Commands info
- Check linux ip configuration on google, it’s good to know
- Virtualization
- A vm is a complete os installed on virtual hardware using a hypervisor
- docker runs containers that share the host kernel (much lighter than a VM; there's no hypervisor or guest OS)
- kubernetes orchestrates containers: it creates, deploys, and scales them automatically
- ansible automates the configuration of hosts and containers (check NetworkChuck)
- cloud-init and kickstart script can be used to configure images
- check out proxmox
/etc/network/interfaces to change ip configs permanently
1.2. Home
1.2.1. Kernel
- The kernel is the control hub for everything in the system (every task goes through it)
- API (Application Programming Interface)
- A process is just one task that is loaded and tracked by the kernel
1.2.2. Aliases
- alias name='command'
- command [options…] [arguments…]
1.2.3. less commands:
| Return (or Enter) | Go down one line |
|---|---|
| Space | Go down one page |
| /term | Search for term |
| n | Find next search item |
| 1G | Go to the beginning of the file |
| G | Go to the end of the file |
| h | Display help |
| q | Quit less |
1.2.4. man page types
- Executable programs or shell commands
- System calls (functions provided by the kernel)
- Library calls (functions within program libraries)
- Special files (usually found in /dev)
- File formats and conventions, e.g. /etc/passwd
- Games
- Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)
- System administration commands (usually only for root)
- Kernel routines [non-standard]
To see which sections exist for a command, use man -f (whatis does the same thing). man -k [keyword] searches man page names and descriptions for the given word (same as apropos).
1.2.5. uname commands:
-a, --all print all information, in the following order, except omit -p and -i if unknown:
-s, --kernel-name print the kernel name
-n, --nodename print the network node hostname
-r, --kernel-release print the kernel release
-v, --kernel-version print the kernel version
-m, --machine print the machine hardware name
-p, --processor print the processor type (non-portable)
-i, --hardware-platform print the hardware platform (non-portable)
-o, --operating-system print the operating system
--help display this help and exit
--version output version information and exit
1.2.6. PATH directories
| /home/sysadmin/bin | A directory for the current user sysadmin to place programs. Typically used by users who create their own scripts. |
|---|---|
| /usr/local/sbin | Normally empty, but may have administrative commands that have been compiled from local sources. |
| /usr/local/bin | Normally empty, but may have commands that have been compiled from local sources. |
| /usr/sbin | Contains the majority of the administrative command files. |
| /usr/bin | Contains the majority of the commands that are available for regular users to execute. |
| /sbin | Contains the essential administrative commands. |
| /bin | Contains the most fundamental commands that are essential for the operating system to function. |
1.2.7. file types
(this refers to the first character in the ls -l output)
| Symbol | File Type | Description |
|---|---|---|
| d | directory | A file used to store other files. |
| l | symbolic link | Points to another file. |
| s | socket | Allows for communication between processes. |
| p | pipe | Allows for communication between processes. |
| c | character file | Used to communicate with hardware. |
| b | block file | Used to communicate with hardware. |
1.2.8. find [OPTIONS]… [starting-point…] [expression]
| Example | Meaning |
|---|---|
| -iname LOSTFILE | Case insensitive search by name. |
| -mtime -3 | Files modified less than three days ago. |
| -mmin -10 | Files modified less than ten minutes ago. |
| -size +1M | Files larger than one megabyte. |
| -user joe | Files owned by the user joe. |
| -nouser | Files not owned by any user. |
| -empty | Files that are empty. |
| -type d | Files that are directory files. |
| -maxdepth 1 | Do not use recursion to enter subdirectories; only search the primary directory. |
1.2.9. sed
- sed 's/PATTERN/REPLACEMENT/' replaces the first matching instance on each line, without modifying the original file
- sed -i 's/PATTERN/REPLACEMENT/' does the same but modifies the original file in place
- sed -i.original 's/PATTERN/REPLACEMENT/' first creates a backup file (here with the .original extension), then edits the original in place (GNU syntax; BSD sed expects -i '.original')
- sed 's/PATTERN/REPLACEMENT/g' makes the changes global, to all matching instances
- sed '/PATTERN/i\TEXT' inserts text before each line matching the pattern
- sed '/PATTERN/a\TEXT' inserts text after each line matching the pattern
- sed '/PATTERN/d' deletes every line that contains the pattern (for example '/a/d' deletes all lines that contain a)
- sed '/PATTERN/c\REPLACEMENT' replaces every line that contains the pattern with the replacement
- sed -e chains commands together (sed -e 'command' -e 'command')
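The sed forms above in a runnable demo (the sample text is invented; the bare `-i` is GNU syntax):

```shell
# sed demo on a throwaway file.
tmp=$(mktemp -d) && cd "$tmp"
printf 'this is a test\nis it done\n' > notes.txt

sed 's/is/was/' notes.txt        # first match per line (even the "is" inside "this")
sed 's/is/was/g' notes.txt       # every match on every line
sed '/done/d' notes.txt          # delete lines containing "done"
sed -i 's/test/demo/' notes.txt  # -i edits the file in place
cat notes.txt
```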
1.2.10. The vi editor
| Motion | Result |
|---|---|
| h | Left one character |
| j | Down one line |
| k | Up one line |
| l | Right one character |
| w | One word forward |
| b | One word back |
| ’^’ | Beginning of the line |
| ’$’ | End of the line |
| d | delete/cut |
| y | yank/copy |
| p | put/paste |
| dd | delete current line |
| 3dd | delete the next 3 lines |
| dw | delete the current word |
| d3w | delete the next three words |
| d4h | delete 4 characters to the left |
| cc | change the current line |
| cw | Change the current word |
| c3w | Change the next three words |
| c5h | Change five characters to the left |
| yy | yank the current line |
| 3yy | yank 3 lines |
| yw | yank the current word |
| y$ | yank the end of the line |
| p | Put after the cursor |
| P | Put before the cursor |
| /word | Search for the word forward |
| ?word | Search for the word backward |
| a | enter insert mode after the cursor |
| A | enter insert mode at the end of the line |
| i | enter insert mode right before the cursor |
| I | enter insert mode at the beginning of the line |
| o | enter insert mode on a blank line after the cursor |
| O | enter insert mode on a blank line before the cursor |
| :e file | open a file |
| :1 | Go to line number 1 |
| :wq/ZZ | write and quit a document |
1.2.11. Regular Expressions
- Basic regex
| Operator | Symbol | Meaning |
|---|---|---|
| Period | . | Matches any one single character. |
| List | [ ], [^ ] | Defines a list or range of literal characters that can match one character. If the first character is the negation operator ^, it matches any character that is not in the list. |
| Asterisk | * | Matches zero or more instances of the previous character. |
| Front anchor | ^ | If ^ is at the beginning of the pattern, the entire pattern must be present at the beginning of the line to match; otherwise it is treated as a literal ^ character. |
| Back anchor | $ | If $ is the last character in the pattern, the pattern must be at the end of the line to match; otherwise it is treated as a literal $ character. |

_Observation_: You can use both ^ and $ at the same time to match an exact line, in the exact position you wrote it (example: "^Hello World$")
- One of the most useful matching capabilities is provided by the period . operator. It will match any character except for the new line character ( example: “r..t” for “root” or any other word that matches the said pattern )
The list [ ] operator works in regular expressions similar to how they work in glob expressions; they match a single character from the list or range of possible characters contained within the brackets
_CAREFUL_: the ^ symbol doesn't have the same function here; if ^ is used inside [ ] (like "[^ ]"), it means /negate/, matching any character except those in the brackets
- In the brackets, every character is taken as a normal character, for example if you use “period” (.) it will be seen as a normal period, not as the matching symbol
The asterisk * operator is used to match zero or more occurrences of the character preceding it. For example, the e* pattern would match zero or more occurrences of the e character (for example “re*d” will match the words with 0 or more “e” between “r” and “d”)
| Pattern | Meaning |
|---|---|
| abc* | Matches the ab string followed by zero or more c characters |
| a* | Matches zero or more occurrences of the a character |
| aa* | Matches one or more occurrences of the a character |
| [A-Z][aeiou]* | Matches a single capital letter followed by zero or more vowel characters |
- Extended Regex
| Operator | Symbol | Meaning |
|---|---|---|
| Grouping | ( ) | Groups characters together to form a subpattern. |
| Asterisk | * | Previous character (or subpattern) is present zero or more times. |
| Plus | + | Previous character (or subpattern) is present at least one or more times. |
| Question mark | ? | Previous character (or subpattern) is present zero or one time (but not more). |
| Curly brace | { } | Specify minimum, maximum, or exact matches of the previous character (or subpattern). |
| Alternation | \| | Logical OR of choices; matches any of the listed alternatives. |

- The grouping ( ) operator creates groupings that can be used for several purposes. At the most basic level, they group together characters that can be targeted by matching operators like *, +, ?, or the curly braces { }.
This grouping is considered a subpattern of the pattern: a smaller pattern within a pattern. The matching operators *, ?, +, and { } that match single characters can also be applied to subpatterns.
_EXAMPLE_: In the example below, parentheses are used to match the M character, followed by the iss subpattern repeated zero or more times:
echo 'Miss Mister Mississippi Missed Mismatched' | grep -E 'M(iss)*' matches *Miss*, *M*ister, *Mississ*ippi, *Miss*ed, *M*ismatched
- e+ = ee*
| Pattern | Meaning |
|---|---|
| xyz+ | Matches the xy string followed by one or more z characters |
| (xyz)+ | Matches one or more copies of the xyz string |

The extended regex question mark ? operator matches the preceding character or grouping zero or one time, making it optional. For example, consider the word color, which can also be spelled with an optional u as colour (depending on the English dialect being used). Use the colou?r pattern to match either spelling.

| Pattern | Meaning |
|---|---|
| xyz? | Matches the xy string followed by zero or one z character |
| x(yz)? | Matches the x character followed by zero or one yz string |

The extended regex curly brace { } operator is used to specify the number of occurrences of the preceding character or subpattern.

| Pattern | Meaning |
|---|---|
| a{0,} | Zero or more a characters |
| a{1,} | One or more a characters |
| a{0,1} | Zero or one a character |
| a{5} | Exactly five a characters |
| a{,5} | Five or fewer a characters |
| a{3,5} | From three to five a characters |

- the { } braces can be used instead of the *, +, ? operators
_Example_:
- * = {0,}
- + = {1,}
- ? = {0,1}
- When used in extended regular expressions, the alternation | operator separates alternative expressions that can match. It acts similarly to a Boolean OR
- abc|xyz = Matches the abc string or the xyz string
- ab(c|d|e) = Matches the ab string followed by a c or d or e character ab[cde]
- Regex Sequences
| Backslash Sequence | Pattern Equivalent | Matches |
|---|---|---|
| \b | | Word boundary operator |
| \B | | Not a word boundary operator |
| \w | [A-Za-z0-9_] | Word character class |
| \W | [^A-Za-z0-9_] | Not a word character class |
| \d | [0-9] | Digit character class |
| \s | | Whitespace character class |
| \S | | Not a whitespace character class |
| \\ | | Literal backslash character |

- \b is used to delimit words without relying on surrounding spaces; it is very useful with sed
_EXAMPLE_: sed 's/is/was/' will replace any "is" instance it finds, whether standalone or inside a word like "this"; sed 's/\bis\b/was/' will replace only the standalone "is" and leave words like "this" unchanged
- “\” can also be used to escape special characters (’re\*’ will match the pattern “re*” and ignore the special character)
- fgrep (or grep -F) treats the whole pattern as a literal string, ignoring all special characters (basically a global "\")
- Grep options
| Option | Meaning |
|---|---|
| -i | Case insensitive |
| -v | Invert search results (logically negates criteria) |
| -l | List the names of files whose content matches |
| -r | Perform a recursive search including subdirectories |
| -w | Match whole words only |
| -q | Quietly operate without producing output |

- the grep -v option inverts the search results, returning all lines that don't contain the specified pattern (the same function as the negate sign)
- grep 'Linux' ./* will search for the word "Linux" in all the files present in the directory
- grep -l 'Linux' ./* will make a list of all the files that contain "Linux"
1.2.12. Hardware resources
- IO Ports - Memory addresses that allow for communication with hardware devices. The current system addresses in use can be viewed by executing the following command: cat /proc/ioports
- IO Memory - A section or location that acts much like the RAM that is presented to the processor via the system bus. These are used to pass and store data as well as for access to devices on the system. IO memory information can be viewed by executing the following command: cat /proc/iomem
- Interrupt Requests (IRQ) - An interrupt is a hardware signal that pauses or stops a running program so that the interrupt handler can switch to running another program, or send and receive data. There are a set of commonly-defined interrupts called IRQ’s that map to common interfaces, such as the system timer, keyboard controller, serial and parallel ports, and floppy controllers. The /proc/irq directory contains configuration information for each IRQ on the system.
- Direct Memory Access (DMA) - A method by which particular hardware items in the system can directly access RAM, without going through the CPU. This speeds up access, as the CPU would otherwise be fully tasked during such access, making it unavailable for other tasks for the duration. DMA information can be viewed by executing the following command: cat /proc/dma
- lspci, lsusb (important commands!)
- usb-devices gives more detailed info on usbs from /proc and /sys
- you can also use the verbose (-v) option on lspci and lsusb to get a lot more useful info
- if you want info about only 1 device, lspci takes -s with the bus address; lsusb takes -d with the vendor:product ID
1.2.13. Device Management
- udev
- udev creates device files (nodes) for connected devices and manages hot-plugging; after a device is disconnected, it removes the files to keep the directory clean
- Configuration files in the /etc/udev/rules.d directory are used to define rules that assign specific ownerships, permissions, and persistent names to these device files. These files allow a user to configure how udev handles the devices it manages.
- sysfs
- The sysfs subsystem is typically mounted as the /sys subdirectory. The /sys directory and sysfs exist because there is a need to provide information about the kernel, its attributes, and contents to users via programs such as ps, top, and other programs that provide information to the regular user through command line output.
- It’s like the /proc directory, which displays system information, but it’s way more tidy because of the way it’s designed (one item per file)
- HAL(Hardware Abstraction Layer)
- a deprecated daemon that kept an inventory of connected hardware; udev has since taken over its role. lshal printed its info, and you'd better pipe it through grep for your own sanity.
- D-Bus
- it's an inter-process communication method used by a lot of desktop environments; this way they don't each need separate channels for talking to one another, which makes communication more stable and reliable
- D-Bus is like a shared highway that processes use to communicate, all in one place (jokes aside, this is called Interprocess Communication, or IPC for short)
- it goes hand in hand with hald, which uses it to send all of its notifications around
Systemd typically uses udev for its device management tasks. The job of udev is to let your computer know of device events, among other tasks. A user will likely not have to deal with udev directly unless there are archaic or odd devices that need to be made available and manual configuration to be done.
Udev can manage any device that shows a link in the /dev directory when attached to the system, which udev is able to do through scripts known most commonly as udev rules. At their simplest, a udev rule is something that performs an action when a device is inserted, such as a thumb drive.
Udevadm is used to view the info needed to identify a device within a udev rule when it’s attached, and then to execute specific actions on that device
To watch what happens when a device is inserted or attached, run the following command: udevadm monitor
To query an already-attached device for the necessary information, execute the command: udevadm info /dev/sda
The two preceding commands and a short study of the udev man pages about rules will inform beginners and system administrators on how to create custom rules for devices.
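As a sketch of what such a rule looks like, the following hypothetical file (the vendor/product IDs and the symlink name are made up for illustration) gives a particular USB disk a persistent name:

```
# /etc/udev/rules.d/99-backupdisk.rules (hypothetical example)
# Match a USB block device by vendor/product ID and add a stable symlink
SUBSYSTEM=="block", ATTRS{idVendor}=="0781", ATTRS{idProduct}=="5567", SYMLINK+="backupdisk"
```

After saving the rule, run udevadm control --reload-rules and re-plug the device; the /dev/backupdisk symlink should then appear alongside the normal /dev/sdX node.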
1.2.14. Kernel Modules
- They are basically drivers for linux
- You can see all the modules installed by using the lsmod command
- the lsmod output is structured in four columns: the module name, its size, how many programs use the module, and the names of the modules/programs that use it
- You can use modinfo to get more info about a module
- To get a list of all available modules, use the modprobe -l command (newer versions of modprobe dropped -l; there you can list the files under /lib/modules/$(uname -r) instead)
- Normally, kernel modules are loaded automatically by the kernel. To load a module manually, execute the modprobe command with the name of the module.
- The modprobe command can also be used to remove modules from memory with the -r option.
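The lsmod columns can be confusing at first, so here is a small sketch that parses a captured sample of lsmod output with awk (the module names and sizes below are made-up example values, not real output from any particular system):

```shell
# A captured sample of lsmod output (values are hypothetical)
cat > /tmp/lsmod_sample.txt <<'EOF'
Module                  Size  Used by
nls_utf8               16384  1
bluetooth             749568  6 btrtl,btintel,btusb
EOF

# Columns: module name, size in bytes, use count, then who uses it
awk 'NR > 1 { printf "%s: %s bytes, use count %s\n", $1, $2, $3 }' /tmp/lsmod_sample.txt
# → nls_utf8: 16384 bytes, use count 1
# → bluetooth: 749568 bytes, use count 6
```

On a real system you would feed `lsmod` straight into the pipeline instead of the sample file.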
1.2.15. The Booting Process
- It’s at the class section, read it.
- It is also possible to boot off the network through the Preboot Execution Environment (PXE). In the PXE system, a compatible motherboard and network card contain enough intelligence to acquire an address from the network and use the Trivial File Transfer Protocol (TFTP) to download a special bootloader from a server.
1.2.16. Bootloaders
- Grub Legacy
- In GRUB Legacy, the first disk detected is referred to as hd0, the second disk as hd1 and so on. Partitions on disks are also numbered starting at zero. Therefore, use hd0,0 to refer to the first partition on the first disk, hd1,0 for the first partition on the second disk, etc.
- you install it with grub-install, then use grub-mkconfig -o /boot/grub/grub.cfg to generate a config file
- Configuration
| Directive | Meaning |
|---|---|
| default= | Specifies the title to attempt to boot by default after the timeout number of seconds has passed. |
| fallback= | Specifies the title to attempt to boot if the default title fails to boot successfully. |
| timeout= | Specifies the number of seconds to wait before automatically attempting to boot the default title. |
| splashimage= | Specifies a background graphic that appears behind the text of the menu. |
| hiddenmenu | Prevents GRUB Legacy from displaying all but the default bootable title until the user presses a key. If the user presses a key, then all titles are displayed. |
| title | Starts a new block of directives that form the directives necessary to boot the system. A title block ends when the next title directive appears or when the end of the file is reached. |
| root | Uses the special hard disk syntax to refer to the location of the /boot directory. |
| kernel | Specifies the kernel image file, followed by all the parameters that are passed to the kernel, such as ro for read-only and root=/path/to/rootfs. |
| initrd | Specifies an initial ramdisk that matches the version and release of the Linux kernel. It provides a minimal filesystem during kernel initialization, prior to mounting the root filesystem. |
| password | Sets a password to access GRUB if used globally; used on a title, it requires a password before booting that title. |
| rootnoverify | Specifies a bootable partition for a non-Linux operating system. |
| chainloader | Specifies a path to another bootloader, or +1 if the bootloader is located in the first sector of the partition specified by the rootnoverify directive. |

You can use an encrypted password with the password --md5 $1$D20Ia1$iN6djlheGF0NQoyerYgpp/ option (this is just an example)
To generate the encrypted password, use the grub-md5-crypt command or execute the grub command and then at the grub prompt, type the md5crypt command
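Tying the directives together, a minimal /boot/grub/menu.lst might look like this (the device names, kernel version, and file paths below are hypothetical):

```
default=0
timeout=5
hiddenmenu

title Linux (example 2.6.32 kernel)
    root (hd0,0)
    kernel /vmlinuz-2.6.32 ro root=/dev/sda2
    initrd /initramfs-2.6.32.img
```

The root directive points at the partition holding /boot (first partition on the first disk), while root= on the kernel line names the actual root filesystem.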
A runlevel is a status that defines which services are currently running on a system. There are multiple runlevels, and they are based on what services will be active when the system is booted. For this next example, we will introduce the single-user runlevel: a state with limited services running, used only to perform administrative tasks. To enter single-user mode, edit the config and add “single” at the end of the kernel line
- Grub 2
- If GRUB 2 needs to be installed or reinstalled, then an administrator would execute: /sbin/grub2-install /dev/sda
- After installing GRUB 2, the configuration file needs to be generated for the first time. In a Fedora-based distribution, an administrator would execute: grub2-mkconfig -o /boot/grub2/grub.cfg
1.2.17. Runlevels
| Runlevel | Purpose | systemd Target |
|---|---|---|
| 0 | Halt or shut off the system | poweroff.target |
| 1 | Single-user mode for administrative tasks | rescue.target |
| 2 | Multi-user mode without configured network interfaces or network services | multi-user.target |
| 3 | Normal startup of the system | multi-user.target |
| 4 | User-definable | multi-user.target |
| 5 | Start the system normally with a graphical display manager | graphical.target |
| 6 | Restart the system | reboot.target |
- Systems using traditional init can specify the default runlevel by modifying the /etc/inittab file entry that looks like the following: ’id:5:initdefault’ (without the ’ symbol): In this example, the default runlevel indicated is for the system to go to runlevel 5, which is typical for a desktop or laptop system that will be running a GUI, and will, most likely, be used by an end user. For most Linux systems, runlevel 5 provides the highest level of functionality, including providing a GUI interface.
- Servers typically don’t offer a GUI interface, so the initdefault entry might look like: ’id:3:initdefault’ (without the ’ symbol):
- systemd doesn’t natively use runlevels, but it has something similar called targets. For example, the graphical.target is similar to the standard runlevel 5, where the GUI is running; the multi-user.target is similar to the standard runlevel 3, where the system is normally running without a GUI.
To set a default target, create a symbolic link from the target definition found in the /lib/systemd directory to the /etc/systemd/system/default.target file. This symbolic link controls which target the system boots into first.
- the runlevel command and the who -r command will tell you the current runlevel you are on; if you are using systemd instead, you can get it with systemctl get-default
- To specify a different runlevel at boot time on a system that uses systemd, append to the kernel parameters an option with the following syntax where DESIRED.TARGET is one of the systemd targets: systemd.unit=DESIRED.TARGET
- The root user can also change runlevels while the operating system is running by using several commands, including the init and telinit commands, which allow the desired runlevel to be specified. There are also several commands that don’t directly specify the runlevel number but are designed to make the system change runlevels.
- The init and telinit Commands
- To directly specify the runlevel to go to, either use init or telinit. The telinit command in some distributions has a -t option, which allows for a time delay in seconds to be specified; otherwise, the init and telinit commands are functionally identical. In fact, on some systems, the telinit command may be simply a link to the init command.
- To use these commands, simply specify the desired runlevel as an argument. For example, to reboot the system, use either the init 6 command or the telinit 6 command. Or, to go to runlevel 5, use either init 5 or telinit 5.
- With the systemd replacement for init, the init command can still be used to modify the runlevel; systemd will translate the desired runlevel to a target. For example, if init 5 is executed, then systemd would attempt to change to the graphical.target state.
- To have systemd natively switch to a target state, with root privileges execute: systemctl isolate DESIRED.TARGET
- The wall command
- There are instances when the notification may not require the imminent shutdown of the system. This is what the wall command is used for. The wall command can be used to display a message or the contents of a file to all users on the system. For example, the following message is being piped to the wall command from the echo command: echo -e “MESSAGE” | wall
- The wall command accepts standard input or the name of a file. To display a file, the wall command either requires the user to have root privileges or the contents to be piped in from another command, such as the cat command. Without either, the wall command will display an error message
- The -n option can be used by the wall command to suppress the leading banner
- Managing system services
- For example, on a Red Hat Enterprise Linux distribution, the script to manage the web server has a path name of /etc/rc.d/init.d/httpd. So, to manually start the web server, you would execute the following command as the root user: /etc/rc.d/init.d/httpd start
To manually stop a running web server, execute: /etc/rc.d/init.d/httpd stop
Instead of using the full path some distros have a script called “service” that does everything for you
| Argument | Function |
|---|---|
| start | If the service is not running, then attempt to start it. |
| stop | If the service is running, then attempt to stop it. |
| restart | Stop and then start the service over. If a major configuration change is made to a service, it may have to be restarted to make the change effective. |
| condrestart | Restart the service on the condition that it is currently running. |
| try-restart | Same as condrestart. |
| reload | Read and load the configuration for the service. Reloading the configuration file of a service is normally a less disruptive way to make configuration changes to a service effective. |
| status | Show whether the service is stopped, or the process ID (PID) if the service is running. Note: it is also possible to use the service --status-all command to see the status of all daemons. |
| fullstatus | For the Apache web server, displays the URL /server-status. |
| graceful | For the Apache web server, gracefully restarts the server. If the server is not running, then it is started. Unlike a normal restart, open connections are not aborted. |
| help | Displays the usage of the script. |
| configtest | Checks the configuration files for correctness. For some services, if the configuration file is modified, then this can be used to verify that the changes have no syntax errors. |

- Runlevel Directories
- With the traditional init process, specific directories are used to manage which services will be automatically started or stopped at different runlevels. In many Linux distributions, these directories all exist within the /etc directory and have the following path names: rc0.d, rc1.d, rc2.d … rc6.d
- The number in the directory name represents the runlevel that it manages; for example, rc0.d is for runlevel 0 and rc1.d is for runlevel 1. To demonstrate, the directories that are used to manage which services will be automatically started or stopped at different runlevels in our VM can be found in the /etc directory
- To have a service started in a runlevel, a symbolic link to the init script in the /etc/rc.d/init.d directory can be created in the appropriate runlevel directory. This link name must start with the letter S, followed by a number from one to ninety-nine, and the name of the init script that it is linked to.
- If you want a service stopped when you enter a runlevel, you just have to change the “S” to a “K”; that will do the trick
- So, what number is supposed to be provided to a specific script for S and K? Look at the script itself for the line that contains chkconfig: grep chkconfig /etc/init.d/httpd will output: chkconfig: - 85 15
- The second-to-last number, 85, is the S number to place on this script; the last number, 15, is the K number. You won’t really have to do this anymore, but it’s good to know nonetheless
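As a small sketch, the S and K link names can be derived from that header line with awk (the httpd name and the 85/15 priorities are just the example values from above):

```shell
# The chkconfig header line from the example httpd init script
line="# chkconfig: - 85 15"

# Second-to-last field = start (S) priority, last field = kill (K) priority
start=$(echo "$line" | awk '{print $(NF-1)}')
kill=$(echo "$line" | awk '{print $NF}')

echo "S${start}httpd"   # link name that starts the service in a runlevel directory
echo "K${kill}httpd"    # link name that stops it
# → S85httpd
# → K15httpd
```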
- The chkconfig Command
- The chkconfig command can be used to view what services will be started for different runlevels. This command can also be used to turn on or turn off a service for specific runlevels. On Linux distributions that are not Red Hat-derived, this tool may not be available.
- To view all the services that are set to start or stop automatically, the administrator can execute the chkconfig --list command; the output has one line per service showing its on/off state in each runlevel
- chkconfig can also be used to enable or disable services for runlevels: chkconfig httpd on
- In the /etc/rc.d/init.d/httpd script, there is a line that contains the following: chkconfig: - 85 15
The - indicates that the service is not enabled in any runlevels automatically when it is first added to chkconfig management. In other words, this service is not set to start automatically unless an administrator uses the chkconfig httpd on command.
- Some scripts have a different chkconfig value; for example, the /etc/rc.d/init.d/atd script has the following line: chkconfig: 345 95 5
- To turn services on or off for a non-default level, the --level option can be used with the chkconfig command. For example, the following two commands would ensure that the atd service was available in runlevels 2 and 4, but not available in runlevels 3 and 5: chkconfig --level 24 atd on and chkconfig --level 35 atd off
- to add or remove a service from chkconfig management, use chkconfig --add SERVICE or chkconfig --del SERVICE
- The /etc/init Directory
- If an administrator wants to change the runlevels of a service, the configuration file for that service can be modified in the /etc/init directory. For example, in an installation of Ubuntu which includes the Apache web server, this directory normally contains the /etc/init/apache2.conf Upstart configuration file. Within the /etc/init/apache2.conf file should be two lines which define the runlevels to start and stop the server: start on runlevel [2345], stop on runlevel [!2345]
- In this case, the service would be started up in runlevels 2 through 5 and would be stopped in runlevels that are not 2 through 5 because the ! character indicates “not these”.
- To disable a service without uninstalling it, an override file can be created in the /etc/init directory. This file should have the same name as the service configuration file, but ending in .override instead of .conf. This is the preferred technique over commenting out the “start on” lines.
- The contents of the .override file should simply be the word manual, which means that the service will ignore any “start on” lines from the configuration file. For example, to override the apache2 configuration file and disable the web server, execute the following command: echo manual | sudo tee /etc/init/apache2.override
- The systemctl Command
- The systemctl command looks in the /usr/lib/systemd directory for information about which symbolic link enables a specific service. This directory is where a service’s files are originally placed when it is installed.
- It is also possible to edit service files in order to modify the service; however, these changes should be made to service files found in the /etc/systemd directory instead.
- To manually control the state of a service, use the systemctl command to start, stop, or check the status of that service.
- To view the status of all services: systemctl -a
- to switch to a different target while the system is running: systemctl isolate DESIRED.TARGET
- The systemctl command can also manage the low or no power states of the system with command lines such as: systemctl hibernate, systemctl suspend, systemctl poweroff, systemctl reboot
- When enabling a service with systemd by executing a command such as the following, a symbolic link is created within the target level that “wants” to have that service running: systemctl enable named.service
- In this example, the previous systemctl command runs the following command: ln -s /usr/lib/systemd/system/named.service /etc/systemd/system/multi-user.target.wants/
- The reason that multi-user.target wants the named.service to be running is based on a line within the named.service file that contains the following: WantedBy=multi-user.target
- For example, if the line for the named.service in the /usr/lib/systemd/system/named.service file is updated to be the following: WantedBy=graphical.target
- Then, after executing the systemctl disable named.service and systemctl enable named.service commands, the link to start the named service is created in the /etc/systemd/system/graphical.target.wants directory and the service will be started when the system is going to the graphical.target instead of the multi-user.target.
- Use systemctl list-dependencies graphical.target to see a list of all the units that a given target pulls in
- If you need the system to boot into single-user mode for troubleshooting or recovery operations, use the systemctl set-default rescue.target command
- To change the system to graphical mode after booting, use the systemctl isolate graphical.target command
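Putting the WantedBy mechanics above together, a minimal service unit might look like this (the service name, description, and binary path are hypothetical):

```
# /etc/systemd/system/example.service (hypothetical)
[Unit]
Description=Example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
# systemctl enable links this unit into multi-user.target.wants/
WantedBy=multi-user.target
```

Running systemctl enable example.service would then create the symlink under /etc/systemd/system/multi-user.target.wants/, exactly as described for named.service above.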
- acpid
- Linux systems use the Advanced Configuration and Power Interface (ACPI) event daemon acpid to notify user-space programs of ACPI events. The ACPI allows the kernel to configure hardware components and manage the system’s power settings, such as battery status monitoring, temperature, and more.
- One example of using acpid for power management would be having the system shut down after the user presses the power button. On modern systems, acpid is normally started as a background process during bootup and opens an event file in the /proc/acpi directory.
- When the kernel sends out an ACPI event, acpid will determine the next steps based on rules defined in configuration files in the /etc/acpi directory. Administrators can create rules scripts in the /etc/acpi directory to control the actions taken by the system.
There are many options available to the acpi command to display various information for power management. The table below summarizes some of the options available to the acpi command:
| Option | Purpose |
|---|---|
| --battery | Displays battery information |
| --ac-adapter | Displays AC adapter information |
| --thermal | Displays thermal information |
| --cooling | Displays cooling device information |
| --show-empty | Displays non-operational devices |
| --fahrenheit | Uses Fahrenheit as the temperature unit instead of the default, Celsius |
| --details | Displays additional details if they are available: battery capacity and temperature trip points |
1.2.18. Partitioning
- Filesystem types
| Type | Name | Advantages | Disadvantages |
|---|---|---|---|
| ext2 | Second Extended Filesystem | Works well with small and SSD filesystems | No journaling; if power cuts, you can lose data |
| ext3 | Third Extended Filesystem | Can convert ext2 with no data loss, and it has journaling | Writes more to the disk because of journaling, so it’s slower; doesn’t support very large filesystems |
| ext4 | Fourth Extended Filesystem | Supports very large disk volumes; can operate without journaling | No big improvement over ext3; no dynamic inode creation |
| xfs | Extents Filesystem | Works very efficiently with large files; compatible with the IRIX OS; default for RHEL | The filesystem can’t be shrunk |
| vfat | File Allocation Table | Supported by almost all OSes | Doesn’t support large disks; it’s Microsoft property |
| iso | ISO 9660 | A standard for optical disc media that is supported by all OSes | Multiple levels and extensions complicate compatibility; not designed for rewritable media |
| udf | Universal Disc Format | Designed to replace ISO 9660 and adopted as the standard for DVDs | Write support is limited to revision 2.01 of the standard |

- Journaling
- A journaling filesystem is very useful for recovering corrupted filesystems and greatly shortens the checks needed after a crash
- you can use the fsck command to recover a corrupted filesystem
- Filesystem Components
- Superblock - The area at the beginning of the filesystem; it contains critical information about the filesystem, like its size, its type, and which data blocks (where file data is stored) are available. With a corrupted superblock the system cannot boot.
- Group Block - The filesystem is divided into smaller sections called groups. The group block holds information about each group, and every group block has a copy of the superblock
- Inode Table - Each file is assigned a unique inode number in the filesystem. This inode number is associated with a table that stores the file’s metadata
- Disk quota
- These are used to limit the space that can be used on a filesystem; very useful on a home directory partition
- Creating Partitions
- fdisk
- fdisk -l to list partitions
- fdisk -u to list the starting and ending sectors of the partitions
- fdisk /dev/sdX to open a disk with fdisk
- press m for help
- a to toggle a bootable flag
- b edit the nested BSD disklabel
- c toggle dos compatibility flag
- d delete partition
- F list free unpartitioned space
- n add a new partition
- p print the partition table
- t change partition type
- v verify partition table
- i print information about a partition
- I to load a script with sfdisk
- O to make a script with sfdisk
- w write the table
- q quit without saving
- g to create a new empty GPT partition table
- l to list all the partition types with their codes
- Actual steps for creating a partition
- The current partition table is displayed with the p command.
- The n command indicates a new partition is being created.
- The user enters p to create a primary partition.
- The partition is assigned as number 3.
- The default value for the first sector is chosen by pressing the Enter key.
- For the size, the user chooses +100M for a one-hundred-megabyte partition
- sfdisk
- used for scripting
- make table backups for partitions
- -d to dump the partition table (a backup you can redirect to a file)
- to restore, feed the dump back in: sfdisk /dev/sdX < backup-file
- -s to list the sizes of the drives
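The dump that sfdisk produces (and later re-reads to restore) is plain text, so it doubles as a scriptable partition description. A hypothetical example for a small two-partition disk:

```
# Output of `sfdisk -d /dev/sdb` redirected to a file (values hypothetical)
label: dos
device: /dev/sdb
unit: sectors

/dev/sdb1 : start=2048, size=204800, type=83, bootable
/dev/sdb2 : start=206848, size=204800, type=82
```

Restoring is the reverse direction: sfdisk /dev/sdb < sdb-table.backup rewrites the table from the saved dump.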
- gdisk
- it’s basically fdisk but for GPT
- ? for help
- v to verify the table
- o to create a new empty partition, and it verifies that you want to delete existing partitions before proceeding
*NOTE* fdisk and gdisk are /destructive/ partitioners, parted and gparted are /non-destructive/ partitioners
- Parted
- parted -h for help
- mklabel to create a disklabel (partition table)
- parted /dev/sdb mkpart primary 0% 50% to create a primary partition that fills 50% of the disk
- LVM
- use pvcreate to convert the added storage into physical volumes
- Use vgcreate to incorporate all of the desired physical volumes into a virtual collection called a volume group. The volume group now will act as a multi-disk equivalent of a physical volume on which partitioning can occur.
Use lvcreate to create the LVM version of disk partitions (called logical volumes) in the volume group created previously. The logical volumes act like partitions in that the user can create filesystems on them, mount them, and in general use them as a traditional partition.
*Example:* You get 3 hard drives and make them into physical volumes: pvcreate /dev/sdb pvcreate /dev/sdc pvcreate /dev/sdd
You can merge them into a volume group like this: vgcreate vol1 /dev/sdb /dev/sdc /dev/sdd
Create a logical volume with: lvcreate -L 200M -n logicalvol1 vol1 (-L gives it a size, -n gives it a name)
- mkfs
- mkfs -t **name of the filesystem**
- -b to specify the block size, it’s usually not needed
- -N to manipulate the inodes
- -m to specify the system reserved space
- exFAT
- used for usb storage, microsoft garbagio
- swap
- you can make a partition in fdisk, then use mkswap to format it as swap space, and then swapon to enable it
- swapon -s will display currently used swap space
- you can make a swap file with dd: dd if=/dev/zero of=/path-to-swapfile/ bs=1M count=100
- mkswap and swapon again
1.2.19. Mounting Filesystems
- Mounting of partitions and checking on existing mounts is accomplished with the mount command. When called with no arguments, the mount command shows the currently mounted devices. This can be performed by regular users, not just the root user.
- If the filesystem cannot be detected, then use the -t option to indicate the type of filesystem: mount -t iso9660 /dev/scd0 /mnt
- -o to give it options like ro (read only) or rw (read write)
- be careful to only mount drives to an empty directory
- mount **device** **mountpoint** to mount a device
- umount **device** to unmount a device
- when you want to unmount a drive it will usually be busy, use lsof and fuser to get past this
- lsof | grep /mnt to see what’s using the /mnt
- The -v option to the fuser command produces slightly more output, including the name of the user who is running the process and a code which indicates the way that a process is using the directory.
- to terminate a process use fuser -k /mnt
- fuser -l to see the other kill options
- fuser Access Codes
- c The process is using the mount point or a subdirectory as its current directory.
- e The process is an executable file that resides in the mount point structure.
- f The process has an open file from the mount point structure.
- F The process has an open file from the mount point structure that it is writing to.
- r The process is using the mount point as the root directory.
- m The process is a mmap’ed file or shared library.
1.2.20. Mounting Filesystems Automatically On Boot
Fstab
*THE ORDER IS VERY IMPORTANT*: |Device id|mountpoint|filesystem|Mount options|Dump Field|Filesystem Check Field|
- UUIDs are used for the device identifier; you can use blkid to see the UUIDs
- you can also use labels as a device identifier; use e2label to assign and see the labels of a device
- for xfs filesystems use xfs_admin -L to set labels
- to use labels use LABEL=“label”
- The dump field is there to see what drives are going to show up to the dump command (a backup utility that isn’t used anymore)
- The filesystem check field is used to determine the order in which fsck will check the filesystems; usually the root filesystem is first with 1 and the others get 2, while swap and network drives get 0 so they won’t be checked
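A hypothetical /etc/fstab illustrating all six fields (the UUID and the label below are made up for the example):

```
# <device>                                 <mountpoint> <type> <options>        <dump> <fsck>
UUID=0f55c5e3-1a2b-4c3d-9e8f-1234567890ab  /            ext4   defaults         1      1
LABEL=home                                 /home        ext4   defaults,noatime 1      2
/dev/sdb1                                  none         swap   sw               0      0
```

Note the fsck field: root gets 1, /home gets 2, and swap gets 0, matching the check order described above.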
- There is also a valuable mount option that will never be specified in the /etc/fstab file: the remount option. This option is useful for changing a mounted filesystem’s options without unmounting the filesystem itself: mount /home -o remount,noatime
- if you modified the option in fstab, for them to take effect use: mount -o remount /mnt
- remounting is different from a umount/mount combo because it doesn’t umount the filesystem, it just changes the mount options
- Mount Options
- rw
- suid allow suid executes
- dev allow device files
- exec allow exec files
- auto automatically mount
- nouser Prevent ordinary users from mounting or unmounting the partition. Using the user option allows non-root users to mount that particular device, which is helpful for removable media.
- async All writes should be asynchronous
- relatime Only update access time on file access if the file has been modified or its metadata changed since last access
Systemd-mount
A systemd mount file looks like this:
[Unit]
Description=Mount unit for core, revision 6673
Before=snapd.service
[Mount]
What=/var/lib/snapd/snaps/core6673.snap
Where=/snap/core/6673
Type=squashfs
Options=nodev,ro,x-gdu.hide
[Install]
WantedBy=multi-user.target
- Breakdown
- Description shows which mount unit is going to be used
- Before orders this mount unit to be started before snapd.service, so the mount is ready when snapd starts
- What the file to be mounted including its path. Note that an absolute path must be used in the name of the mount unit which it controls.
- Where the location where it will be mounted
- Type the filesystem type that the file is stored in. In the example above, this is squashfs, the compressed, read-only filesystem.
- Options defines the specific options to be used. The nodev option is a security feature that prevents block special devices (which could allow harmful code to be run) from being mounted on the filesystem. The ro option stands for read-only and the x-gdu.hide option prevents the snap mount from being visible to System Monitor.
- Wantedby tells systemd that this filesystem is to be used by the multi-user boot target.
Fstab is still preferred, but it’s good to know this method because it allows for more flexibility
1.2.21. The loop option
- The loop option to the mount command is used to mount special filesystems that are stored within a file. These files have the extension of .img or .iso
mount -o loop image.iso /mnt
1.2.22. df
- df reports filesystem usage; it shows useful info like the space that is used and available
- use -h to get a human readable form
- use -T to show the filesystem type
- use -i to show the number of inodes
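For scripting, the -P (POSIX) option keeps every filesystem on a single line, which makes df output safe to parse; a small sketch:

```shell
# Report how full the root filesystem is; -P guarantees one line per filesystem,
# and the fifth column is the use percentage
df -P / | awk 'NR == 2 { print "root filesystem is " $5 " full" }'
```

The same idea works with -i to report inode usage instead of block usage.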
1.2.23. Filesystem Issues
- Always shutdown your computer properly
- you can use init 0 to shut down the system
- you can use init 6 to restart the system
- same with systemd (reboot and poweroff)
1.2.24. du
- du shows the files on the system and the disk blocks that they are using
- it’s useful for seeing what’s taking up the most space
- it’s commonly used like this: du | sort -n | tail -10
- use -h to make it human readable
- use -s to output only a summary of a directory
- --max-depth will limit the number of subdirectory levels that du will descend into
- --exclude will exclude certain directories
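Putting the options above together on a throwaway directory tree (the /tmp/du_demo paths are made up for the demo):

```shell
# Build a small tree so there is something to measure
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/file bs=1024 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/small/file bs=1024 count=4 2>/dev/null

# The ten largest entries, biggest last (the common pipeline from above)
du /tmp/du_demo | sort -n | tail -10

# One-level, human-readable summary per directory
du -h --max-depth=1 /tmp/du_demo
```

The big/ directory shows up near the bottom of the sorted output since du lists sizes in 1 KiB blocks by default.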
1.2.25. tune2fs
- is a command that is used to adjust some parameters on ext partitions
- you should make a backup before using this command
- tune2fs -c0 -i0 /dev/sdb1 to disable automatic filesystem checks
- -c modifies the maximum number of times the filesystem can be mounted before needing a check
- -i modifies the maximum number of days the filesystem can go without a check
- -o modifies the default mounting options of the filesystem
- -l to list the superblock information (basically a fuck ton of filesystem info)
- -J will create a journal file for ext2 allowing it to be mounted as an ext3 or ext4 partition
- -m modifies the space to be reserved for the root user
1.2.26. Fixing the Filesystem
- fsck
- -t to specify the filesystem type (usually not needed)
- never run it if the filesystem is mounted
- -y to automatically answer yes to the prompts
- to force a system check at boot use sudo touch /forcefsck, this will force a fsck on boot on all non-zero filesystems in fstab
- if the filesystem unmounted without a problem fsck will not perform a check and will set it to clean
- -f to force a check
- e2fsck
- is a checker that is called by fsck for checking the ext filesystems
- man e2fsck will give you more information about the command, this might come in handy
- dumpe2fs /dev/sdb1 | grep -i superblock to see the locations of the backup superblocks
- -b to specify the exact location of a backup superblock: e2fsck -b 8193 /dev/sdb1
- the -b option works just fine with fsck as well
- -n will respond to every prompt with no
- lost+found
- contains files that are “lost” after an fsck; these files have corrupted directory references and can’t be put back in a normal directory, so the inode number is used as the filename instead
- Repairing xfs filesystems
- it's different from other filesystems
- xfs relies heavily on the journal in order to function
- xfs_copy will create an exact copy of the filesystem
- to initiate a repair use xfs_repair
- you need to unmount the filesystem before performing the repair
- if the journal gets corrupted, you can zero it out by using the -L option, however this is only used as a last resort
- xfs_fsr reorganizes (defragments) the files
- the frag command of xfs_db can be used to report the state of fragmentation (ex: xfs_db -r -c frag /dev/sdb1); if the fragmentation is above 25% then the reorganization will have a better effect
- to run the xfs_fsr command for a limited period of time use -t (and set the time in seconds)
- xfs_db
- xfs_db is used to perform manual repair
- The command is used to perform debugging operations and possible repairs to the filesystem, but if used incorrectly, it can make it unrecoverable
- to use xfs_db in expert mode use xfs_db -x (this can modify data structures, which is dangerous!)
- -r opens the filesystem read-only, which is safe for inspection
1.2.27. Managing Shared Libraries
- Shared libraries, also known as shared objects or system libraries, are files that include the .so extension as part of their name.
- By placing code that is used by many programs into library files that can be shared, each program file can be smaller, the programs can use a more consistent base of code, and less disk space overall is consumed.
- When a program is executed, the /lib/ld-linux.so dynamic linker will find and load the shared libraries needed by a program, prepare the program to execute, and then run it.
- Older binaries in the a.out format are linked and loaded by the /lib/ld.so program.
- Both programs will search for the libraries in the /lib directory, the /usr/lib directory, the directories listed in the LD_LIBRARY_PATH environment variable, and in the /etc/ld.so.cache cache file.
- /etc/ld.so.conf file
- is used to configure which directories are searched for library files by the ldconfig command during the boot process or when executed by the administrator.
- The ldconfig command creates links and caches the most recent shared libraries that are required by programs installed on the system.
- The /etc/ld.so.conf file contains the following content: include ld.so.conf.d/*.conf
- Instead of using the single file to contain a list of all the directories to search, the /etc/ld.so.conf.d directory contains *.conf files, which specify the library directories.
- programs can add their own library paths in the /etc/ld.so.conf.d directory
- As an example of a package that uses shared libraries, consider the mysql-libs package. This package installs an /etc/ld.so.conf.d/mysql-i386.conf file, which contains the following: /usr/lib/mysql
- Manually Adding Library Files
- If an administrator is compiling software from the source or using software that is not packaged, a .conf file needs to be manually created. For example, the administrator may have downloaded and installed software that was not packaged in an .rpm or a .deb file and then installed it in directories under the /usr/local directory structure, with its library files located under the /usr/local/lib directory structure. In order for these library files to be able to be loaded, create a /etc/ld.so.conf.d/local.conf file with the following content: /usr/local/lib
- After adding or removing files in the /etc/ld.so.conf.d directory, the administrator needs to execute the ldconfig command to update the /etc/ld.so.cache cache file.
- To display the name and path information for all the libraries that have been added to the cache, use the ldconfig command with the -p option
- The output above shows how many libraries are configured in the cache, and also displays the library names and paths where they were found when added to the cache. Due to the very large number of libraries that a typical Linux system has installed, the examples are limited by using the head command.
- To display the list of library directories that are configured as well as their contents, the -v option can be used
- you can look at the man command for more info
- LD_LIBRARY_PATH
- Users without administrative access to the system can also configure directories that will be searched by setting the LD_LIBRARY_PATH environment variable with the list of library directories. For example, if the user jose installed a program in the /home/jose/app directory with the library files for that application in the /home/jose/app/lib directory, then that user could execute: export LD_LIBRARY_PATH=/home/jose/app/lib
- ldd Command
- To verify or view the library files associated with a program, use the ldd command. For example, to view the libraries that are used by the /bin/bash executable, execute the ldd /bin/bash command.
- If there is a problem with a library file not being loaded, then that line of the output may report "not found". For example, if the mysql-libs package library files were not correctly configured, then executing the ldd /usr/bin/mysql command may display the following error output: ldd: /usr/bin/mysql: No such file or directory
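Both commands can be tried on any binary; /bin/ls is dynamically linked against libc on a normal system:

```shell
# ldd prints each shared object the binary needs and where it was found.
ldd /bin/ls

# ldconfig -p only reads the cache, so it needs no root; it may live
# in /sbin on some distributions, hence the PATH addition.
export PATH="$PATH:/sbin:/usr/sbin"
ldconfig -p | head -5
```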
1.2.28. Package management
- RPM
- to query a package that is not installed, use -p [file]
- to query an installed package just use the package name
- to query a package you must always use the -q option with another flag after it
- rpm -qi pkg will show basic info about that package
- to verify the integrity of a package you need to first import the public keys of the distribution (rpm --import /etc/pki/rpm-gpg/* for RHEL) and then use the -K flag (rpm -qpK package)
- to remove a package use rpm -e
- to update a package use -U or -F; -U can be used to both update and install a package
- rpm2cpio
- is a command that's useful when you want to reinstall only a part of a package, not the whole one (this is mostly obsolete these days)
- -i Extract
- -m Retain the original modification times on the files
- -u Unconditionally replace any existing files
- -d Create any parent directories of the files contained within the archive
- Example: rpm2cpio telnet-server-0.17-47.el63.1.i686.rpm | cpio -imud
- yum
- yum provides will show you which package provides a given file
- yum search to search for packages
- yum supports package groups; you can list them with yum grouplist and install them with groupinstall, and if you need info about one there's groupinfo
- you can erase packages with yum erase (or just use remove)
- dpkg
- it's the Debian equivalent of rpm
- -i to install packages
- -r or -P to remove packages, -r removes only the package and -P purges it, removing the configs
- -l to list all the packages installed on the system; the status codes are:
- i for installed
- u for unknown
- r for remove
- h for hold
- ii for fully installed
- un for uninstalled
- dpkg -L [pkg] will list the files that a package contains
- dpkg -S /path/to/file will show which package provided (installed) that file
- dpkg -s will show info about that package
- you can reconfigure a package with dpkg-reconfigure (seems to be useful)
- apt-cache queries the package cache (ex: apt-cache search [keyword])
- apt-get install
- it's used to install and update packages; if you only want to update a package use --only-upgrade
- md5sum
- creates a 128-bit hash using the original file
- md5sum [file] > file.md5 to generate a checksum
- you can also use -c to check that the checksum matches
- it's not really secure though (MD5 is vulnerable to collisions), so it's not used where security matters
- sha256sum
- creates a 256-bit checksum that can be used to verify a file
- sha256sum [file] > file.sha256
- you can also use -c to check that the checksum matches
- sha512sum
- creates a 512-bit checksum that can be used to verify a file
- sha512sum [file] > file.sha512sum
- you can also use -c to check that the checksum matches
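A round trip with sha256sum (the file names here are made up for the demo):

```shell
# Generate a checksum file, then verify the data against it.
echo "important data" > /tmp/data.txt
sha256sum /tmp/data.txt > /tmp/data.sha256

# -c re-hashes the listed file and compares; unchanged data reports OK.
sha256sum -c /tmp/data.sha256
```

The same pattern works for md5sum and sha512sum; if the file is modified after the checksum is generated, -c reports FAILED instead.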
- zypper
- zypper ref to refresh the repositories
- zypper se to search for packages
- zypper in to install a package
- zypper in -f to reinstall and overwrite the package
- zypper lr to list repos
- zypper ar to add a repo (after that you need to do a zypper ref)
- zypper list-updates to list the available updates
- zypper up to update packages
2. Linux 2
2.1. Class
2.1.1. Course 1
2.1.2. Course 2
- id command shows some useful info
- ulimit -a will enumerate all the limits of the user
- vim /etc/security/limits.conf to change the limits of the system
- groupmod -n mano manole will rename the group manole to mano
2.1.3. Course 3
2.1.4. Course 4
- if [ ${AGE} -lt 18 ]; -lt means less than, <
- if [ ${AGE} -gt 18 ]; -gt means greater than, >
- -n means the string is not null (not empty)
- -z means the string is null (zero length)
- to test equality use = for strings or -eq for integers (-e actually tests whether a file exists)
- seq is like Python's range()
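A runnable sketch of the comparison operators (AGE=16 is an arbitrary value for the demo; note the mandatory spaces inside the brackets):

```shell
#!/bin/bash
AGE=16

# -lt is "less than": true here because 16 < 18
if [ ${AGE} -lt 18 ]
then
    echo "under 18"
fi

# -gt is "greater than": false here, so the else branch runs
if [ ${AGE} -gt 18 ]
then
    echo "over 18"
else
    echo "not over 18"
fi
```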
2.1.5. Course 5
- the at command will run a command at a specific time and date
- with batch, at can also start a job only once the system load is low
- at 21:00 and then you can write the command
- you can do @reboot to run a script at reboot with cron
- dnf-automatic is a neat tool that you can use to manage updates, you can even split the downloading and installing part, security options, and it can manage error outputs.
2.2. Home
2.2.1. User and System Account Files
- /etc/passwd file
- all users are stored in the /etc/passwd file. The format is: **loginID:x:UID:GID:comment:homedirectory:loginshell**
| Field | Example | Significance |
|---|---|---|
| Login ID | sysadmin | Login name of the user |
| Password | x | x indicates that the encrypted password is stored in the /etc/shadow file |
| UID | 1001 | User ID assigned by the system (this is a non-zero value unless it's root) |
| GID | 1001 | Primary Group ID |
| GECOS | Sysadmin is the house | Just comments |
| Home Directory | /home/sysadmin | Absolute path to the default home directory of the user |
| Shell | /bin/bash | Absolute path of the user's login shell |

- you can use the chfn command to add some more fancy comments
- access the comments with the finger command (ex: finger sysadmin)
- /etc/shadow file
it contains the encrypted passwords of the users on the system, plus some more options like expiration fields. The format is: **loginID:password:lastchg:min:max:warn:inactive:expire**

| Field | Example | Significance |
|---|---|---|
| Username | sysadmin | Login name of the user |
| Password | (hash) | Encrypted password. An empty field means no password; an "*" or "!" means the account is inaccessible; an "!" in front of a password means the account is locked |
| Last pass change | 16413 | Number of days between Jan 1, 1970 and the last password change |
| Minimum | 0 | Minimum number of days before the current password can be changed by the user; a value of 0 means no minimum |
| Maximum | 99999 | Maximum number of days the password stays valid; a value of 99999 means no maximum password age |
| Warn | 7 | Number of days prior to password expiry that the user is warned |
| Inactive | 30 | Number of days after password expiry before the account becomes inactive |
| Expire | 17000 | Date (in days since Jan 1, 1970) when the account expires |
| Reserved | | Reserved for future use |
- Special Purpose System Accounts
- They are used to manage services
- The /etc/login.defs file contains the min and max for UIDs and GIDs
| Parameter | Type | Meaning |
|---|---|---|
| PASS_MAX_DAYS | number | Maximum number of days a password is valid. A value of 99999 means no maximum |
| PASS_MIN_DAYS | number | Minimum number of days a password is valid. A value of 0 means no minimum |
| PASS_WARN_AGE | number | Number of days before password expiry that a warning message is given |

- These can be checked and changed with the chage command
- User Accounts
- useradd -D for default options (these settings are located in /etc/default/useradd)
The useradd format is: *useradd -s <Shell> -d <Home directory> -m -k <Skeleton directory> -g <Group> username*
| Option | Meaning |
|---|---|
| -s | User's default login shell. |
| -d | The home directory for the new user. |
| -m | Creates the home directory for the user if it does not exist. |
| -k | Copy initialization files from an alternative directory. The default is /etc/skel. |
| -g | Group name or number of the user. |
| -N | Avoids creation of a group with the same name as the user name. An alternative group should be specified with the -g option. |

- gpasswd -A will make the user a group admin for a group (ex: gpasswd -A hiroi hiroi)
- Skel
- It’s a template for newly created users
- It’s usually /etc/skel but it can be changed
- it’s useful for configs since a lot of companies will have standardized things
Format: *useradd -m -k /etc/skel_dev user*
- passwd
- basically there’s only one interesting option, passwd -S
- this will bring info about the user
Format: *sysadmin P 04/24/2019 0 99999 7 -1*
| Field | Example | Meaning |
|---|---|---|
| User Name | sysadmin | Name of the user |
| Pass Status | P | P means usable password, L means locked password, NP means no password |
| Last password change date | 03/01/2020 | Date when the password was last changed |
| Minimum | 0 | Minimum number of days that must pass before the current password can be changed by the user |
| Maximum | 99999 | Maximum number of days remaining for the password to expire |
| Warn | 7 | Number of days prior to password expiry that the user is warned |
| Inactive | -1 | Number of days after password expiry that the user account remains active |

- The root user can enforce a password change at the next login by using: *passwd -e user*
- passwd -l will lock a user (it will put an "!" before the encrypted password in /etc/shadow)
- passwd -u will unlock a user (it will do the opposite )
- to remove a password from an account, use passwd -d
- The chage command
- The chage command is used to update information related to password expiration.
- chage -M [number of days] user, will change the number of days between when a user creates a new password and when the user is required to change the password again.
- chage -l to list the configs for a user
- chage -m will change the minimum days that the user will have to wait before changing the password again
- chage -W to change the warning field
- chage -I sets the number of inactive days allowed after the password expires before the account is locked
- chage -E “date” will set an expiry date
- The usermod command
- usermod -d to change the home directory of a user
- usermod -L to lock an account
- usermod -U to unlock an account
- usermod user -e “date” will change the expiry date of a user
- usermod user -g will change the primary group of a user
- usermod -aG to add groups to a user
- usermod -s to change the default shell
- userdel to delete a user
- userdel -r to delete a user and its home directory
- userdel -f to force the deletion even if the user is logged in
- Groups
- the /etc/group file contains the groups on the system
- the format is: *groupname:password:GID:grouplist*
| Field | Example | Significance |
|---|---|---|
| Group name | marketing | Name of the group |
| Password | x | This field is blank for most groups; an x means that the password, if any, is located in /etc/gshadow |
| GID | 1002 | The Group ID |
| Group List | jack,susan | List of the user IDs who are members of this group |

- a user can have multiple groups, but only one primary one.
- getent is a command that will query files based on the user given (ex: getent group hiroi, will bring the entry from /etc/group for hiroi)
- getent is a good way to get a user's primary group
- Creating a group
- groupadd programmers will add the group programmers
- groupadd programmers -g 1980 to set the GID
- Modifying a group
- groupmod -g to change the gid
- groupmod -n to change the name of the group
- gpasswd -A sysadmin programmers will give administrative rights to the sysadmin user for the programmers group
- gpasswd will set a new password for a group
- gpasswd -r will remove the password for a group
- Deleting a group
- groupdel programmers will delete the group
- you can’t delete a user’s primary group
- if you delete groups, then it might cause access problems with files that were assigned to that group
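getent can be tried against the group database directly; the root group (GID 0) exists on every Linux system:

```shell
# Look up one entry from the group database via the name-service switch.
getent group root

# Extract just the GID field (field 3 of the colon-separated record).
getent group root | cut -d: -f3
```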
2.2.2. Advanced Shell Features
- Variables and Aliases
- A local variable works only in 1 bash instance while an environment one extends to all other commands/programs started by that shell
- the set command will display all variables
- the env command will display only environment variables
- you can use export or declare -x to make an environment variable
- you can use the env command to make temporary environment variables (ex: env TZ=EST date, this will change the timezone just for the command)
- set -o nounset will make any reference to an unset value to output: unbound variable, this is useful for scripts
- you can unset a variable with the unset command
- The PS1 variable changes how the prompt looks; '\w' shows the full current path and '\W' shows only the current directory's name
- $HISTFILE holds the history file location, you can ignore some entries by using the $HISTIGNORE variable
- the alias command lists all the aliases on the system
- you can use unalias to delete an alias
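The local-versus-environment distinction can be seen by asking a child shell what it sees (GREETING is a made-up variable name for the demo):

```shell
# A plain assignment stays local to this shell; the child bash does not
# inherit it and falls back to the default "nothing".
GREETING="hello"
bash -c 'echo "child sees: ${GREETING:-nothing}"'

# Once exported, child processes inherit it.
export GREETING
bash -c 'echo "child sees: ${GREETING:-nothing}"'
```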
- Functions
This is a basic function:

```shell
functionname () {
    command
    command
    command
}
```

This function requires arguments to work; for example, here you would have to type functionname example1 example2 for it to work:

```shell
functionname () {
    command $1
    command
    command
    command $2
}
```
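A runnable illustration of positional parameters (the function name greet is made up for the example):

```shell
# $1 and $2 expand to the first and second arguments given to the function.
greet () {
    echo "Hello $1 and $2"
}

greet Alice Bob    # prints: Hello Alice and Bob
```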
- Lists
- In the context of the Bash shell, a list is a sequence of commands which are separated by one of the following operators
| Operator | Meaning |
|---|---|
| ; | The commands within the list are executed sequentially; the shell executes the first command and waits for it to terminate before executing the next |
| & | Each command within the list is executed asynchronously in a subshell (in the background); the shell does not wait for the commands to terminate |
| && | AND list: the command on the right side of && executes only if the command on the left side executed successfully |
| \|\| | OR list: the command on the right side of \|\| executes only if the command on the left side did NOT execute successfully |

- test -e will verify if a file exists
- Initialization files
| File | Purpose |
|---|---|
| /etc/profile | Can only be modified by the administrator; executed by every user who logs in |
| ~/.bash_profile | Each user has their own .bash_profile in their home directory. Same purpose as /etc/profile, but local to the user |
| ~/.bashrc | Each user has their own .bashrc in their home directory. It runs for every new shell the user starts |
| /etc/bashrc | A .bashrc file, but for all the users on the system |

- if you want to source a config file, you can use either the source command or "."
- ~/.bash_logout is used to execute commands just before exiting the shell
2.2.3. Shell Scripts
- you can do exec > file.txt and all the output of the commands you type will be written to said file. ex: exec > file.txt and then type ls. This will write the names of all the files in the current directory to file.txt
- with backquotes (`command`) you tell the shell to execute a command and substitute its output, instead of treating it as plain text; $(command) is the modern equivalent
- you can pass a file that contains a list of directories to du like this: du -sh $(cat file.txt)
- the read command is used to get input from the user (basically the input command from Python)
- to issue a prompt use read -p “Message here”
The test command is useful in a lot of situations:
| Type | Symbol | Example |
|---|---|---|
| True if length of string is zero | -z | -z string |
| True if length of string is not zero | -n | -n string |
| True if strings are equal | = | string1 = string2 |
| True if strings are not equal | != | string1 != string2 |
| True if integers are equal | -eq | int1 -eq int2 |
| True if integers are not equal | -ne | int1 -ne int2 |
| True if first integer is greater than second integer | -gt | int1 -gt int2 |
| True if first integer is greater than or equal to second integer | -ge | int1 -ge int2 |
| True if first integer is less than second integer | -lt | int1 -lt int2 |
| True if first integer is less than or equal to second integer | -le | int1 -le int2 |
| True if file is a directory | -d | -d file |
| True if file is a plain file | -f | -f file |
| True if file exists | -e | -e file |
| True if file has read permission for current user | -r | -r file |
| True if file has write permission for current user | -w | -w file |
| True if file has execute permission for current user | -x | -x file |
- you can use it like this in an if statement: *if test $var -eq 7*
- or you can do the same with brackets: *if [ $var -eq 7 ]* (the spaces inside the brackets are mandatory)
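Both forms can be verified directly in a shell; [ is itself a command (a synonym for test), which is why those spaces are required:

```shell
# test and [ ] are equivalent; both set the exit status that if examines.
var=7

if test $var -eq 7
then
    echo "test form matched"
fi

if [ $var -eq 7 ]
then
    echo "bracket form matched"
fi
```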
- If Statement
The structure is:
```shell
if COND
then
    TRUE_COMMAND
else
    FALSE_COMMAND
fi
```
- you can also use elif to chain further conditions after an if:

```shell
if COND1
then
    TRUE1_COMMAND(S)
elif COND2
then
    TRUE2_COMMAND(S)
else
    FALSE_COMMAND(S)
fi
```
- Test Return Values
- Every command has a return value when it’s executed, if it’s 0 it means it succeeded and if it’s any other it means it failed
- To see the exit status of a command you can call the variable ’$?’ this is very useful in scripts and error management
- You can discard the error by redirecting them to /dev/null (ex: grep sysadmin /etc/shadow 2> /dev/null)
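A quick demonstration of $? using grep against /etc/passwd (root exists on any Linux system; nosuchuser42 is a made-up name that should not):

```shell
# A successful command sets $? to 0...
grep -q '^root:' /etc/passwd
echo "exit status: $?"

# ...and a failing one sets it to a non-zero value (1 for grep: no match).
grep -q '^nosuchuser42:' /etc/passwd
echo "exit status: $?"
```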
Exercise:

```shell
read -p "Enter a directory: " dir
du -sh $dir >> /tmp/report
```
If a user introduces a directory that doesn’t exist, an error will occur and the program will stop, to combat that we have 3 solutions that we may apply:
*SOLUTION 1*
- Check if the $? variable is 0 or not
```shell
#!/bin/bash
read -p "Enter a directory: " dir
start=$(date)
echo "Document directory usage report" > /tmp/report
du -sh $dir >> /tmp/report 2> /dev/null
if [ $? -eq 0 ]
then
    echo "Start of report: $start" >> /tmp/report
    echo "End of report: $(date)" >> /tmp/report
else
    echo "Error, $dir could not be accessed"
    echo "Error: no report generated. $dir not accessible" >> /tmp/report
fi
```
- The if statement will check the $? variable and if it’s 0 it will continue to execute the program and if not it will inform the user of the problem, you can also go nerd mode and do this case specific but yeah.
*SOLUTION 2*
- Incorporate the du command in the if statement
```shell
#!/bin/bash
read -p "Enter a directory: " dir
start=$(date)
echo "Document directory usage report" > /tmp/report
if du -sh $dir >> /tmp/report 2> /dev/null
then
    echo "Start of report: $start" >> /tmp/report
    echo "End of report: $(date)" >> /tmp/report
else
    echo "Error, $dir could not be accessed"
    echo "Error: no report generated. $dir not accessible" >> /tmp/report
fi
```
*SOLUTION 3*
- In the next example, the value of the variable is checked before attempting to run the du command. This requires three checks:
- Is the value of the variable a directory?
- Does the value of the variable have read permission?
- Does the value of the variable have execute permission?
In order to check all three of these conditions, the -a option (and) is used:
```shell
#!/bin/bash
start=$(date)
echo "Document directory usage report" > /tmp/report
read -p "Enter a directory: " dir

if [ -d $dir -a -r $dir -a -x $dir ]
then
    du -sh $dir >> /tmp/report 2> /dev/null
    echo "Start of report: $start" >> /tmp/report
    echo "End of report: $(date)" >> /tmp/report
else
    echo "Error, $dir could not be accessed"
    echo "Error: no report generated. $dir not accessible" >> /tmp/report
fi
```
- I like this one more because it’s specific and gets things done straight from the beginning
- Also, -o can be substituted for or, and the exclamation point ! character can be substituted for not: [ $USER = "joe" -o $USER = "ted" ]
- Another example: the following returns true if $FILE is not readable: [ ! -r $FILE ]
- Verify User Input
- Dealing with user error is as annoying as you can imagine, so you have to find ways around it and anticipate invalid input
- For example you can use *grep -E “^[0-9]+$”* to check if a variable is only numeric or not, you can do it with letters as well.
Example:
Example (my man types in abcxyz):

```shell
read -p "Enter your ZIP code: " zip
if echo $zip | grep -E '^[0-9]+$' > /dev/null 2> /dev/null
then
    echo "thank you for the proper zip code"
else
    echo "incorrect zip code"
fi
```
- This will tell the user that the zip code can't be abcxyz
- To make the pattern as precise as possible, in this case, because a standard United States ZIP code must be exactly 5 digits long, the following pattern would be more precise: *echo $zip | grep -E '^[0-9]{5}$'*
- Or, for a modern United States ZIP code, which is 5 digits followed by a dash and four more digits: *echo $zip | grep -E '^[0-9]{5}-[0-9]{4}$'*
- While Statements
- The while statement is used to determine if a condition is true or false; if it is true, then a series of actions take place, and the condition is checked again. If the condition is false, then no action takes place, and the program continues.
- The structure is:

```shell
while COND
do
    COMMANDS
done
```
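A minimal runnable example, counting to 3:

```shell
#!/bin/bash
# The condition is re-tested before every pass; when i reaches 4 the loop ends.
i=1
while [ $i -le 3 ]
do
    echo "pass $i"
    i=$((i + 1))
done
```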
- For Statements
- The for statement is extremely valuable when you want to perform an operation on multiple items.
Structure:
```shell
for name in list_of_values
do
    FOR_COMMANDS
done
```
Example:
```shell
#!/bin/bash
echo -e "User\tPassword" > /root/password
echo -e "----\t--------" >> /root/password

for name in `cat /root/users`
do
    useradd $name
    pw=$(tr -dc A-Za-z0-9_ < /dev/urandom | head -c 12 | xargs)
    echo -e "$name:\t$pw" >> /root/password
    echo $pw | passwd --stdin $name
    chage -M 90 -m 5 -W 10 $name
done
```
- tr -dc A-Za-z0-9_ < /dev/urandom | head -c 12 | xargs generates random passwords
- seq Statement
- it generates numbers, that’s pretty much it
- seq 1 20 will count to 20
- seq 0 10 100 will count from 0 to 100 in steps of 10
- seq 12 -3 -12 will create a sequence that starts at 12, and decreases by 3 until it reaches -12
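All three forms side by side:

```shell
seq 3           # one argument: count from 1 up to 3
seq 0 10 30     # first increment last: 0 10 20 30
seq 12 -3 -12   # a negative increment counts downward
```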
2.2.4. X Window
- Switching display managers
- in debian you can use sudo dpkg-reconfigure *package name* to change the default display manager
- in red-hat distros you have to change the /etc/sysconfig/desktop file. To use gdm and gnome it should look like this: DESKTOP=“GNOME” DISPLAYMANAGER=“GNOME”
- To use the kdm display manager and the KDE desktop environment, this file should be set to contain the following entries: DESKTOP=“KDE” DISPLAYMANAGER=“KDE”
- Change the Display Manager Greeting
- with gnome you can use the gconftool-2 command:
- These commands will change the banner to "LPIC students only" on Red Hat systems:

```shell
init 3
su -s /bin/sh gdm
gconftool-2 --direct --config-source=xml:readwrite:$HOME/.gconf \
    --type bool --set /apps/gdm/simple-greeter/banner_message_enable true
gconftool-2 --direct --config-source=xml:readwrite:$HOME/.gconf \
    --type string --set /apps/gdm/simple-greeter/banner_message_text "LPIC students only"
exit
init 5
```
- with kde, you can edit the /etc/kde/kdm/kdmrc file: UseTheme=false GreetString=Message to display in banner
- /etc/X11/xorg.conf File
| Section | Purpose |
|---|---|
| Files | File pathnames for fonts and modules |
| ServerFlags | Server flags are global options |
| Module | Dynamic loading of modules that extend the server |
| Extensions | Extension enabling for the X11 protocol |
| InputDevice | Input device description for keyboards and pointers |
| InputClass | Input class description |
| Device | Video card device description |
| VideoAdaptor | Xv video adaptor description |
| Monitor | Monitor description |
| Modes | Video mode descriptions |
| Screen | Screen configuration |
| ServerLayout | Overall layout combining other sections |
| DRI | Direct Rendering Infrastructure |
| Vendor | Vendor-specific configuration |

- X -configure will generate a new xorg.conf file
- A basic video driver called vesa can be used to get most GPUs working with minimal features:

```
Section "Device"
    Identifier "Card0"
    Driver "vesa"
EndSection
```
- Fonts
- to add a new font drop it in /usr/share/fonts (or ~/.fonts for a single user) and then run fc-cache ~/.fonts
2.2.5. Graphical Desktops
- You can use pstree | grep dm to get more info about the DE that you are using
2.2.6. Installing the Desktop Environment
- On Debian: *sudo apt-get install xserver-xorg gnome-session* or *sudo apt-get install xserver-xorg kde-standard*
- On Red-Hat: *yum groupinstall general-desktop* or *yum groupinstall kde-desktop*
2.2.7. Localization
- to change locales after downloading them, use dpkg-reconfigure locales on Debian (localepurge can remove unneeded ones)
2.2.8. Remote Desktop Environments
- you can direct a graphical program to a given display by setting the DISPLAY variable: *DISPLAY=:0 gnome-calculator*
- if you want to see gui apps from the host on the client you can use *ssh -X remote@192.168.0.7* and any app you open will be forwarded to you
- For an X server to be able to forward graphical programs to another computer, an authorization file must be referred to. Fortunately, this is taken care of during an ssh connection.
- The program which accomplishes this is called xauth, which generates an MIT Magic Cookie. This cookie is typically stored in a hidden file called .Xauthority. Any computer that has a copy of this cookie is allowed to remotely run graphical programs generated by the X server.
- Using a remote X server
- One of the benefits of the X server is the ability to run a GUI-based program on one system and have its output displayed on another system. To properly set up a server and client, two things must be set correctly in order for this functionality to work: the xhost settings and the DISPLAY environment variable.
- By default, the X server will permit connections from clients that are from the same host (known as localhost). In order to allow a connection from a remote machine, the xhost command can be executed to add a host to be permitted. For example, to permit connections on your local system from two systems, one with a resolvable hostname server1 and the other with an IP address of 192.168.20.30, execute the following command: *xhost +server1 +192.168.20.30*
- To restrict access to the X server for server1, remove the host by using xhost as follows: *xhost -server1*
- To disable access control entirely (allow all hosts; insecure), use the command: xhost +
- Connecting to a Remote Display
- The first step in attempting to use an X server remotely will be to permit X connections from a remote host back to the originating X server by adding the host with the xhost command. For example, to connect to a system with a name that can be resolved to a name of centos from the system named server1, execute the following command on server1: *xhost +centos*
- The second step is to make the connection to the remote host with the ssh or telnet command. To connect to the centos system from server1 with the ssh command, execute the following command on server1: *ssh centos*
- The third step would be to set the DISPLAY environment variable. In some cases, the value of the variable may be appropriately set automatically and will not need to be changed. If, however, the DISPLAY variable contains a hostname that does not resolve back to the address of the system where the ssh session originated server1, then it will have to be set. For example, when the following command is executed through the remote connection logged into centos: *echo $DISPLAY*
- If the output of this command was: *localhost.localdomain:0.0*
- This means that the X client should try to use the localhost system’s first display (0.0). Since localhost in this context refers to the host that you have connected to (not the remote system running the X server) this setting will not work correctly. On the other hand, if the output of echo $DISPLAY is as shown below, then GUI-based output will be sent to the remote machine, server1: *server1.test:0.0*
- To set the DISPLAY variable manually, execute the following command: *export DISPLAY="server1.test:0.0"*
- The final step would be to verify that an X client would be able to connect back to the X server where the ssh session originated. Executing almost any graphical application would work at this step. For example, to execute one of the simpler applications that will display an analog clock, issue the command: xclock
- If a graphical clock appears on the remote X system (server1), then it is working; otherwise, if output appears like below, then there is a problem: *Error: Can’t open display: localhost.localdomain:0.0*
- Keep in mind that even with a correctly configured xhost and DISPLAY variable, a remote GUI-based display may not work. Other issues could be interfering:
- Firewall settings on both systems
- Host access restrictions in the /etc/hosts.allow and /etc/hosts.deny files
- Telnet/SSH server settings on the system that is supposed to accept a login
- X server settings on the system that is supposed to accept a remote X connection; for instance, the /etc/ssh/sshd_config file
- In order to configure a Red Hat-derived system with a default installation to accept incoming connections from remote systems to the X server, the /etc/gdm/custom.conf file must have the following under the [security] heading: *DisallowTCP=false*
2.2.9. XDMCP
- XDMCP is a protocol built into Xorg, used to share a remote screen, similar to X11-Forwarding over SSH. Unlike X11-Forwarding, the XDMCP protocol is not encrypted. Therefore, XDMCP is not recommended for use in a production environment.
- To use XDMCP, settings need to be added to the display manager's configuration. With LightDM, for example, XDMCP is enabled in the LightDM configuration file.
- After restarting the display server, XDMCP can be seen running on UDP port 177 using either the netstat or ss command.
- A remote X client can then connect using the server's IP address and port 177.
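For LightDM, the enabling lines look roughly like this (a sketch; this assumes the configuration file /etc/lightdm/lightdm.conf and the key names used by current LightDM releases):

```ini
[XDMCPServer]
; enable the unencrypted XDMCP listener (not for production use)
enabled=true
; 177 is the standard XDMCP UDP port
port=177
```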
2.2.10. RDP
- RDP is a proprietary protocol developed by Microsoft™ for use with its Remote Desktop Connection© software included with Windows©. In order to interface with Windows systems, Linux systems are able to use various clients, including rdesktop and xfreerdp.
- Given the username and IP address of the Windows system that is being connected to, use the following command: rdesktop -u USER 192.168.0.1
- Normally, an RDP client would connect to a Windows server, but a compatible program called xrdp is available to use as a server.
- The configuration file for xrdp is located at /etc/xrdp/xrdp.ini. You may want to change the listening port number to something other than 3389 as RDP is a well-known service and often scanned for by malicious actors.
- Start the xrdp service and be sure the listening port is open in the firewall
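Changing the port mentioned above is a small edit in /etc/xrdp/xrdp.ini (a sketch; 3390 is an arbitrary non-default choice, not a recommendation):

```ini
[Globals]
; listen on a non-default port instead of the well-known 3389
port=3390
```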
2.2.11. Accessibility
- The K Desktop Environment has a software package named kdeaccessibility which includes the following utilities:
- A screen magnifier called KMagnifier (kmag):
- An automatic clicking tool called KMouseTool (kmousetool):
- A text-to-speech utility called KMouth (kmouth):
- Keyboard Accessibility Settings (In GNOME you can access it in: Applications > System Tools > Settings > Universal Access):
- _Repeat Keys_, If the feature is disabled, then the key will not repeat if held down. If enabled, the settings for Delay and Speed control how long a key must be held down before it will begin to repeat, and the speed that the key will repeat
- _Sticky Keys_, Once Sticky Keys are enabled, it allows the user to perform key combinations such as Ctrl+C without having to hold both keys at once. Instead, the user can press the Ctrl key and then press the C key to effectively type Ctrl+C.
- _Slow Keys_, Slow keys are the opposite of Repeat Keys, so both should not be enabled together. With Slow Keys, it will only accept each keypress if a key is held down for a specific period of time.
- _Bounce Keys_, Bounce Keys help prevent a key from being repeated if it is pressed again too quickly after the first time it is pressed. This can be useful for users whose hands may shake and accidentally cause unintended keypresses.
- _Toggle Keys_, The Toggle Keys feature initially provided audio feedback when keyboard modifier keys, like Shift, Alt, and Ctrl, are pressed. It also can provide feedback when other accessibility features are used.
- _Mouse Keys_, The mouse pointer can be moved around the screen by using the keyboard. Enable this feature by going to the Mouse Keys setting in the Universal Access menu
- Visual Theme Settings:
- Are we really discussing fonts and themes?
- Just go into GNOME Tweaks and change the preselected themes and fonts
- Assistive Technologies:
- _Braille Display_, The ability to display text as Braille for blind users is possible with the brltty software package and a refreshable Braille display device. The Orca screen reader also has the capability to enable the output of Braille.
- _Screen Reader_, For the GNOME desktop environment, the Orca screen reader is used as the preferred application.
- _On-Screen Keyboard_, For users who may not be able to type on a normal keyboard, a graphical version of a keyboard can be presented on-screen for the user to use by way of a pointing device or mouse. This is similar to the keyboard that is presented on tablet computers. To enable the on-screen keyboard, click: Applications > System Tools > Settings > Universal Access
- _Text-To-Speech_, Besides the Orca application that reads screen dialogs, there are separate applications which can be used to read aloud the text from a file: Emacspeak, Espeak, Festival
2.2.12. Scheduling Jobs
- Cron
- A crontab entry has the format: *Minute Hour Day-of-month Month Day-of-week command*; each of the five time fields can be written as a "*" to mean any value
The values that can be placed in each of these fields are:

| Field | Values |
|---|---|
| Minute | Range 0 - 59 |
| Hour | Range 0 - 23 |
| Day-of-month | Range 1 - 31 |
| Month | Range 1 - 12 (Jan-Dec) |
| Day-of-week | Range 0 - 7 (0 & 7 = Sunday) |
| Command | The command to be executed |

- The first five fields can contain the following, to represent time values:
- a single value
- Multiple values such as 1,3,5 in the fifth field, which would mean Monday, Wednesday, Friday
- A range of values (such as 1 through 5 (1-5)) in the fifth field, which would mean Monday through Friday
- An asterisk * character means any or all values
*Examples:*
- *30 04 1 1 1 /usr/bin/somecommand* = The above entry will run /usr/bin/somecommand at 4:30 am on January 1st, plus every Monday in January.
- *30 04 * * * /usr/bin/somecommand* = The above entry will run /usr/bin/somecommand at 4:30 am on every day of every month.
- *01,31 04,05 1-15 1,6 * /usr/bin/somecommand* = Comma-separated values can be used to run more than one instance of a particular command within a time period. Dash-separated values can be used to run a command continuously. The above entry will run /usr/bin/somecommand at 01 and 31 past the hours of 4:00 am and 5:00 am on the 1st through the 15th of every January and June.
- *00 08-17 * * 1-5 /usr/bin/somecommand* = The above entry will run /usr/bin/somecommand every hour (on the hour) from 8AM to 5PM, Monday through Friday of every month.
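The field syntax used in the examples above can be sketched as a tiny matcher (an illustration only, not the real cron parser; the function name is invented):

```python
def cron_field_matches(field: str, value: int) -> bool:
    """Return True if a numeric value satisfies one crontab time field.

    Supports the forms described above: '*', single values,
    comma-separated lists, and dash-separated ranges.
    """
    if field == "*":
        return True                      # asterisk means any/all values
    for part in field.split(","):        # e.g. "1,3,5"
        if "-" in part:                  # e.g. "1-5"
            low, high = map(int, part.split("-"))
            if low <= value <= high:
                return True
        elif int(part) == value:
            return True
    return False

# "1,3,5" in the day-of-week field means Monday, Wednesday, Friday
print(cron_field_matches("1,3,5", 3))   # True  (Wednesday)
print(cron_field_matches("1-5", 6))     # False (Saturday)
```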
- Variables in crontabs
- The crontab file can also contain variables; some of the most commonly-found variables are: *PATH=/usr/local/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin* *SHELL=/bin/bash* *MAILTO=databaseadmin*
- The PATH variable is very important in the event that relative pathnames are used when specifying commands. For example, to execute the date command without including the PATH variable, the full path to the date command /bin/date must be specified.
- Crontab Keywords
- Crontab can have some useful keywords:
| Entry | Description | Equivalent to |
|---|---|---|
| @reboot | Run once, at startup | (none) |
| @yearly / @annually | Run once a year | 0 0 1 1 * |
| @monthly | Run once a month | 0 0 1 * * |
| @weekly | Run once a week | 0 0 * * 0 |
| @daily / @midnight | Run once a day | 0 0 * * * |
| @hourly | Run once an hour | 0 * * * * |

- You can simply use *@daily /home/sysadmin/bin/daily-backup*, which is much easier
- Maintaining User crontab Files
- The crontab command is used for maintaining user crontab files. The user crontab files are stored in the /var/spool/cron directory.
- use crontab -e to edit the crontab
- you can change the default editor used for crontab (e.g., EDITOR=vim)
- crontab -l will list your crontab
- crontab -r will wipe your crontab
- crontab -u username operates on the crontab of the specified user (root only)
- Maintaining System crontab Files
- The crond daemon reads the system crontab from the /etc/crontab file.
- The crond daemon “wakes up” every minute, examines all of the crontab files that are stored in memory, and determines if there is a particular command or script requiring execution. If such a command is found, the crond daemon launches that process.
- Controlling Access to cron
- The root user always has the ability to execute the crontab command, but this access can be restricted for other users. The two files used for this purpose are the /etc/cron.allow file and the /etc/cron.deny file.
- The following rules are used to determine how these files are used:
- If both the cron.allow and the cron.deny files do not exist, the default on current Ubuntu and other Debian-based systems is to allow all users to use the crontab command.
- If only the cron.allow file exists, then only the users listed in the file can execute the crontab command.
- If only the cron.deny file exists, then all users listed in this file are denied the ability to execute the crontab command, and all other users are allowed to use the crontab command.
- If both the cron.allow and the cron.deny files exist, the cron.allow file applies and the cron.deny file is ignored. As only one of these files should exist, the presence of both files is typically due to a mistake made by the administrator.
- If you create an /etc/cron.allow file and leave it empty, then no one (except root) can use crontab
- at Command
- Cron is a good tool for scheduling tasks that are required to run at regular intervals. For scheduling one-time tasks, the at and batch commands are more useful.
- The at command executes commands at a specified time or at a time relative to the current time. The command can be used as follows: *at TIME*
The TIME specifications supported by at are quite extensive and easy to use. Some of the supported formats are shown in the table below, assuming the current date/time is 1st Mar 2025, 8:00 a.m.:

| Keyword/Date Format | Significance |
|---|---|
| midnight | 12:00 a.m. 2nd Mar 2025 |
| noon | 12:00 p.m. 1st Mar 2025 |
| tomorrow | 8:00 a.m. 2nd Mar 2025 |
| next week | 8:00 a.m. 8th Mar 2025 |
| 1630 | 4:30 p.m. 1st Mar 2025 |
| 4:30 PM Mar 20 | 4:30 p.m. 20th Mar 2025 |
| now + 2 hours | 10:00 a.m. 1st Mar 2025 |
| now + 7 days | 8:00 a.m. 8th Mar 2025 |

- When the at command is executed, the user is presented with the at> prompt.
- Enter the command that requires execution and then press the Enter key. Another at> prompt will be presented for another command.
- When you are finished entering commands to execute, use Ctrl+d. The End Of Transmission (EOT) message will be displayed, followed by the assigned job number, and the time the job will be executed. To cancel the at job, press Ctrl+c at any time (before scheduling the job with Ctrl+d).
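The relative formats in the TIME table above reduce to plain date arithmetic; a sketch with Python's datetime, using the same reference time of 1st Mar 2025, 8:00 a.m.:

```python
from datetime import datetime, timedelta

now = datetime(2025, 3, 1, 8, 0)            # 1st Mar 2025, 8:00 a.m.

# "now + 2 hours" -> 10:00 a.m. the same day
print(now + timedelta(hours=2))             # 2025-03-01 10:00:00

# "now + 7 days" / "next week" -> 8:00 a.m. on 8th Mar 2025
print(now + timedelta(days=7))              # 2025-03-08 08:00:00

# "tomorrow" keeps the time of day
print(now + timedelta(days=1))              # 2025-03-02 08:00:00
```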
- The at and batch jobs are stored in the /var/spool/at directory.
- atq will list the current active jobs
The output of the command will look something like this:

| Field | Example | Significance |
|---|---|---|
| Number | 2 | Job number allocated to this job |
| Date | Mon Feb 10 | Date when this job will be executed |
| Hour | 09:26:00 | Hour when this job will be executed |
| Queue | a | Name of the queue. Valid queue names are a-z, with 'a' being the default queue |
| User | sysadmin | User name of the user who scheduled this job |

- atrm will remove a job: *atrm 2* will remove job number 2
- to remove a job you must either be its owner or root
- Access to the at and batch commands is controlled by the following two files:
- The /etc/at.allow file
- The /etc/at.deny file
- The format of these files is similar to cron.allow and cron.deny files. Both the files contain a list of user names, one on each line. The rules mentioned for cron access are applicable for at access also:
- If both files do not exist, then all regular users are denied the ability to execute the at and batch commands.
- If only the at.allow file exists, then only the users listed in the file can execute the at and batch commands.
- If only the at.deny file exists, then all users listed in this file are denied the ability to execute the at and batch commands, and all other users are allowed to execute the at and batch commands.
- If both the at.allow and the at.deny files exist, the at.allow file applies and the at.deny file is ignored. As only one of these files should exist, the presence of both files is typically due to a mistake made by the administrator.
- Batch command
- Similar to the at command, the batch command is used to schedule one-time tasks. However, instead of specifying an execution time, batch jobs are executed as soon as the system's load average drops below 0.8 (roughly 80% utilization of a single CPU). The default value of 0.8 can be changed with the -l option of the atd daemon.
- The batch command will prompt for command input. A job number is issued upon the successful completion of input. Alternatively, it can read input from a file by using the -f option, for example to sort the large marketingdata file once the system load average drops below 0.8.
- Systemd Timer Units
- Many modern systems use systemd rather than the much older SysV init for managing system services. Systemd is a service and system manager originally designed by Red Hat. Systems that have systemd as a replacement for the traditional init process provide an alternative to crond for scheduling jobs and managing services, called systemd timers.
- The creation of cron dates back to 1975, which makes it much older than systemd timers. While cron is more widely-available across systems, including non-Linux systems (such as BSD systems), systemd timers have some advantages from what has been learned from years and years of usage of cron.
- For example, when systemd timers are created, they can be set to run in a specific environment (systemd.exec), whereas cron inherits the environment it is being run from. Because cron is not run from a shell that loads the environment through startup files like .bashrc, errors may occur since variables and file paths may differ from the shell environment that is being used to create the job (i.e., a user’s shell). Another advantage is that with systemd timers, dependencies can be configured for jobs, meaning that scheduled jobs can be set to be dependent on other systemd units. Finally, systemd timer jobs are logged via the systemd journal, providing output in a centralized place and format.
- What is a timer? Timers are essentially files that end in .timer, which fall under a category of files called systemd unit files. These unit files contain information about services, sockets, mount points, timers, devices, and other types of systemd units. The .timer files will contain configuration information about the task to be executed by the systemd timer. Below is an example of a timer file created to display a greeting after the system startup:
*[Unit]* *Description=Displays greeting after boot*
*[Timer]* *OnBootSec=10sec* *Unit=greeting.service*
*[Install]* *WantedBy=multi-user.target*
| Section | Description |
|---|---|
| [Unit] | General information about the systemd unit file. The Description option creates a human-readable name for the systemd unit. |
| [Timer] | Contains the options that define when the timer will start and what service to execute. |
| [Install] | Contains information about how the unit will be installed. The WantedBy= option creates a symbolic link within the target level that "wants" to have that service running. |

- Systemd uses two different types of timers: monotonic and realtime. With monotonic timers, the systemd timer allows a job to be executed after an event has occurred. This type can be used to run a job when the system boots (OnBootSec option) or a systemd unit is active (OnActiveSec option). The example greeting.timer file above is a monotonic timer that uses the OnBootSec option to run the timer ten seconds after the system boots
- Realtime timers work like cron and execute a job when a specified time has occurred. To create a realtime timer, the OnCalendar option should be used in the [Timer] section of the .timer file. The format of an OnCalendar time entry is: *DayofWeek Year-Month-Day Hour:Minute:Second*
- The entry below is an example of a timer unit that is executed every day at 9:00 am: OnCalendar=*-*-* 09:00:00
- Similar to the crontab format, the asterisk * character in the example above means any or all values.
- To specify what days of the week to run a timer, use the DayofWeek field. The entry below is an example of a timer unit that is executed every Monday, Wednesday, and Friday at midnight: OnCalendar=Mon,Wed,Fri *-*-* 00:00:00
- To run a timer on a specific day of the month, use the Day field. The following entry will run a timer on the first day of every month at 10 PM: OnCalendar=*-*-01 22:00:00
- The OnCalendar option also uses special expressions such as hourly, daily, monthly, and yearly as time values: OnCalendar=daily
- A .timer unit file should correspond to a systemd service, which is a .service file with the same name. For example, the greeting.timer unit file created above should have a corresponding service file called greeting.service. When the systemd timer is activated, the .service systemd unit is executed.
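A matching greeting.service for the greeting.timer above might look like the following (a minimal sketch; the ExecStart command and message are invented for illustration):

```ini
[Unit]
Description=Greeting service triggered by greeting.timer

[Service]
; oneshot: run the command once and exit
Type=oneshot
ExecStart=/usr/bin/echo "Welcome back!"
```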
- *systemctl list-timers* will list all active timers
- *systemctl list-timers --all* will list all timers, including inactive ones
- Outside of the .timer files, the systemd-run command can be used to run a transient job, one which does not have a .timer file. The systemd-run command can be used to run a command or execute a systemd .service unit. For example, to execute the touch /home/sysadmin/newfile command one hour after running the systemd-run command, use: *systemd-run --on-active="1h" /bin/touch /home/sysadmin/newfile*
- This will run the touch command and create the newfile file in the /home/sysadmin directory one hour after running the systemd-run command. To execute a systemd .service unit, use the --unit option: *systemd-run --on-active="1h" --unit=greeting.service*
- The command above will execute the greeting.service unit one hour after running the systemd-run command.
- After creating a systemd timer via systemd-run, the name of the transient systemd job is returned to you. Executing the systemctl list-timers command will display the name of the systemd transient job listed in the UNIT column.
2.2.13. Localization
- The concept of localization is to make it easy for the administrator or individual users to set and switch their working environment to match conventions specific to a certain language in a certain country (i.e., Canada/English or Canada/French). A user’s locale permits them to interact with system commands, graphical interfaces, and programs naturally without having to translate or convert anything.
- Locale
- The term locale refers to a set of parameters that define the user’s language, country, and any special variant preferences. These parameters include the following:
- Language
- Numeric representation
- Date-and-time representation
- Monetary units and symbols
- Case conversion - for proper case mapping of characters
- String collation - for determining sort order rules for a country
- Character classification - determines the correct set of characters, digits, punctuation, and symbols.
- Localization
- To serve different cultures, a program should be able to determine its locale and act accordingly. Localization is the process of creating or adapting a product to be suitable for a specific group in terms of language, culture, and targeted needs.
- There are two methods of providing locale information:
- Locally-run programs use locale information provided by environment variables.
- Web-based applications use locale information either obtained from the web browser or explicitly requested as a form value.
- Locale Naming Convention
- Locale definition files are used to define the language, territory, and code set information applicable to the user. Locale definition files use the following naming convention: *language[territory][.codeset][@modifiers]*
- The most common character code set used today is UTF-8 (Universal Character Set + Transformation Format - 8-bit) because it contains a universal set of characters plus supports Chinese, Japanese, and Korean double-width characters.
Additional locale definition file examples are shown below:
| Locale | Description |
|---|---|
| en_AU.ISO-8859-1 | Language: English; Territory: Australia; Codeset: ISO-8859-1 |
| hi_IN.UTF-8 | Language: Hindi; Territory: India; Codeset: UTF-8 |
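The naming convention can be illustrated by pulling a locale string apart (a sketch; the regex and the parse_locale name are my own, not a standard API):

```python
import re

# language[_territory][.codeset][@modifier]
LOCALE_RE = re.compile(
    r"^(?P<language>[a-z]+)"
    r"(?:_(?P<territory>[A-Z]+))?"
    r"(?:\.(?P<codeset>[\w-]+))?"
    r"(?:@(?P<modifier>\w+))?$"
)

def parse_locale(name: str) -> dict:
    """Split a locale definition file name into its four parts."""
    match = LOCALE_RE.match(name)
    if match is None:
        raise ValueError(f"not a locale name: {name!r}")
    return match.groupdict()

print(parse_locale("en_AU.ISO-8859-1"))
# {'language': 'en', 'territory': 'AU', 'codeset': 'ISO-8859-1', 'modifier': None}
```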
- Viewing the Current Locale
- The locale command without any arguments gives a summary of each locale category (LC_*).
- If the LC_ALL locale variable is set to the en_US locale, all locale environment variables are set to the en_US locale.
- If information about a specific locale category is needed, the -c (category) option for the locale command can be used: locale -c LC_NAME
- Alternatively, the -k (keyword) option will show information about specific locale keywords:
- Listing All the Available Locales
- To display the list of locales that are available on the current system, based on the locale selected during installation, use the locale command with the -a (all) option. Note that the output may vary between systems as new locales can be installed after the system is deployed.
- Change System’s Default Locale Settings for All Users
- To permanently change the system’s locale, the following steps would need to be performed as root:
- Edit the global locale settings file:
- /etc/default/locale (Debian-based systems)
- /etc/sysconfig/i18n (Red Hat-based systems)
- Change the LANG (language) variable inside the file to the desired value (from the list of available locales): *LANG="en_AU.UTF-8"*
- After rebooting the system, the changes will take effect.
- Verify using the locale command:
- Changing User Locale for Current Login Session
- To change the language and encoding for the current login session, set the LANG environmental variable to equal one of the available locales. Several examples are shown below:
| Locale | LANG variable setting |
|---|---|
| English (US) | LANG=en_US.UTF-8 |
| Russian | LANG=ru_RU.UTF-8 |
| French | LANG=fr_FR.ISO-8859-15 |

- Customizing a User's Locale
- If a user requires a different locale than the system default, place the following line in one of the bash initialization files (~/.bashrc or ~/.profile): *export LANG=en_US.UTF-8*
- Role of LANG=C
- The LANG environment variable controls localization. It is used to select how your computer treats language-specific features. In turn, it affects the behavior of command line tools like the sort, grep, and awk commands.
- Setting the LANG environment variable value to C tells all programs and tools to consider only basic ASCII characters (0-9, A-Z, special characters) and disable UTF-8 multibyte match. It is also used in scripts to predict program output, which may vary based on the current language. In a way, LANG=C disables localization. To set the LANG environment variable to C, execute the following command: *export LANG=C*
- It is possible to use LANG=C temporarily. The following command temporarily overrides the language for one program, the ls command in this example, and displays all output in English: *LANG=C ls /noexist*
- Character Encoding
- Computers do not understand characters in the same way as humans. They process an English letter, a punctuation mark, or a number as a binary stream of data. Character encoding is the process of maintaining the mapping between the character and its internal value.
- Common examples of character encoding systems are: Morse code, the American Standard Code for Information Interchange (ASCII), and Unicode.
- To determine the current character mapping in Linux, execute the following locale command: *locale charmap*
- To display all available charmaps (character maps) on the system, use the locale command with the -m option.
- ASCII
- ASCII (American Standard Code for Information Interchange) is an encoding that is used to represent English language letters, numbers, symbols, and control codes as a 7-bit binary number. The standard ASCII character set includes 128 characters.
- Out of the total 128 characters, 95 are printable characters, and the remaining 33 are non-printable control characters; these characters include:
- Characters
- Numbers 0-9
- Lowercase letters a-z
- Uppercase letters A-Z
- Punctuation symbols
- Control codes
- Blank space
- ASCII’s limited character set proved to be a big constraint in accommodating languages other than English. While it was the most commonly-used character encoding on the web until 2007, it was surpassed by UTF-8, which includes ASCII as a subset.
- To view the ASCII character table, use the ascii command
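The 95/33 split described above can be checked directly from the codepoint ranges (printable ASCII is 32 through 126; the control characters are 0-31 plus 127):

```python
# Printable ASCII runs from 32 (space) through 126 (~);
# the 33 non-printable control characters are 0-31 plus 127 (DEL).
printable = [chr(c) for c in range(32, 127)]
control = [c for c in range(128) if c < 32 or c == 127]

print(len(printable))                # 95
print(len(control))                  # 33
print(printable[0], printable[-1])   # space and '~'
```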
- Unicode
- Unicode is a standard, designed to assign a unique number to every character of every language (including mathematical and other specialized symbols), regardless of the platform and programs being used.
- Unicode has become the main scheme for internal processing and storage of text in modern computing. Unicode enables users to handle all types of scripts and languages. It also simplifies scientific information exchange by offering a wide-ranging set of mathematical and technical symbols.
- The Linux operating system can utilize the UTF-8 encoding scheme. Unicode characters can be included in filenames, and Unicode strings may be used as command-line parameters. Text editors can be used to display and edit files containing characters in Unicode format.
- UTF-8
- UTF-8 (Unicode Transformation Format – 8-bit) is an encoding that can represent every character in the Unicode character set with 1 to 4 bytes. UTF-8 is backward-compatible with ASCII.
- UTF-8 encoding is widely-used for websites, as well as programming languages, operating systems, and software applications.
- In Linux, UTF-8 mode can be activated by setting the LANG environment variable to the appropriate locale.
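The 1-to-4-byte property is easy to demonstrate, and the one-byte case shows the backward compatibility with ASCII (sample characters chosen for illustration):

```python
# Each character encodes to between 1 and 4 bytes in UTF-8.
samples = {
    "A": 1,     # ASCII letter: byte-identical to its ASCII encoding
    "é": 2,     # Latin-1 supplement character
    "€": 3,     # euro sign
    "😀": 4,    # emoji outside the Basic Multilingual Plane
}
for char, expected in samples.items():
    encoded = char.encode("utf-8")
    print(char, len(encoded), encoded)
    assert len(encoded) == expected

# Backward compatibility: pure ASCII text has the same bytes in both encodings
assert "hello".encode("ascii") == "hello".encode("utf-8")
```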
- ISO-8859
- ISO/IEC 8859 is a joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) series of standards for 8-bit character encodings.
- While the bit patterns of the 95 printable ASCII characters are sufficient to exchange information in modern English, most other languages need additional symbols not covered by ASCII. ISO/IEC 8859 solved this problem by utilizing the eighth bit in an 8-bit byte, which allows an additional 96 printable characters to be accommodated.
- Conversion from One Character Encoding to Another
- The iconv command is the standard application programming interface (API) for converting one character encoding to another. The iconv command is suitable to convert characters in a large number of files.
- The iconv -l command gives the list of supported encodings. (Note that you will likely see a large number of encodings listed.)
- The generic syntax for converting encoding of given files from one encoding to another using iconv is as follows: *iconv -f old-encoding [-t new-encoding] file.txt > newfile.txt*
- For example, the following will convert the file test1.txt from the ISO 8859-1 standard code set to the CP 437 standard code set and store the output in the file named test1convert.txt: *iconv -f ISO8859-1 -t CP437 test1.txt > test1convert.txt*
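What iconv does in the example above can be mimicked with Python's codec machinery, which makes the two-step nature of the conversion visible (decode from the old encoding, re-encode into the new one; the sample text is invented):

```python
# Equivalent of: iconv -f ISO8859-1 -t CP437 test1.txt > test1convert.txt
iso_bytes = "café".encode("iso8859-1")   # b'caf\xe9' (é is 0xE9 in ISO 8859-1)
text = iso_bytes.decode("iso8859-1")     # back to an abstract character string
cp437_bytes = text.encode("cp437")       # b'caf\x82' (é is 0x82 in CP 437)

print(iso_bytes, cp437_bytes)
```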
- Time Zone, Date, and Time
- The time zone for a Linux system is set at the time of installation and does not require changes very often. However, in a multi-user scenario, users may require different time zones.
- Linux (and UNIX) computers keep time in Universal Time (UTC). Since UTC remains constant and is not subject to Daylight Saving Time or other changes, it is useful in synchronizing time across computers and zones. Linux systems internally keep time using a UTC-synchronized clock that is converted to the appropriate local time based upon user preferences.
- The TZ environment variable is used to determine the time zone and how to calculate local time.
- Local Time Zone Configuration File
- The /etc/localtime file is the local time zone configuration file that stores the system-wide time zone of the local system used by applications for presentation to the user. This file contains binary data and is not edited directly.
- It is recommended to back up the file before changing the time zone.
- Checking Current System Time Zone
- On a Debian-derived system, the file /etc/timezone shows the current time zone: *cat /etc/timezone* shows: *Etc/UTC*
- For Red Hat-derived systems, the /etc/sysconfig/clock file is used to indicate the current time zone: *cat /etc/sysconfig/clock* shows: *ZONE="America/Los_Angeles"*
- Time Zone Information Directory
- The /usr/share/zoneinfo directory contains all the time zones supported.
- Changing the Time Zone Using the tzselect Method
- The tzselect utility helps determine the time zone value used to set the TZ variable for a particular location by allowing the user to choose the country and location from a set of menus. Once the tzselect utility returns the desired time zone, the user must set the TZ variable by following the instructions provided by tzselect.
- To make the time zone change permanent for a user, append the line provided by tzselect (e.g., *TZ='Australia/Sydney'; export TZ*) to the ~/.profile or ~/.bash_profile in the user's home directory.
- The change will be effective once the user logs out and logs in again. Use the date command to confirm the new time zone setting. Adding the TZ variable to one specific user profile changes the time zone for that user only and does not affect the time zone for other users or the default system time zone.
- Changing the System Time Zone Using the Command Line Method
- To set a new system time zone:
- Find the city that represents the desired time zone (i.e., Australia/Sydney) by either:
- Using the tzselect utility (see the previous section)
- Searching the time zone directory (/usr/share/zoneinfo) by first changing to the appropriate continent or ocean subdirectory (i.e., Australia) then listing the directory contents
- Before changing the time zone, backup the current time zone settings file using the following command: *cp /etc/localtime /etc/localtime.org*
- Create a symbolic link for the machine’s clock to the city in the new time zone: *ln -sf /usr/share/zoneinfo/Australia/Sydney /etc/localtime*
- Verify the new time zone by running the *date* command again:
- Setting System Date and Time
- On systemd systems, the timedatectl command is used as a replacement for the date command to view and configure system time.
- Method 1
- The root user can use the date command to set the system date and time. *date MMDDhhmmYYYY.ss*
- For example, to set the time and date value to Mar 29, 15:26:07 2020, execute the following command: *date 032915262020.07*
- Method 2
- Use the -s or --set option to set the system date and time using a more user-friendly notation: *date -s "Mon Mar 23 17:00:00 UTC 2020"*
2.2.14. System Time
- Understanding the Clock
- System Clock: This is a clock maintained by the kernel and is interrupt-driven. The value of this clock is initialized from the hardware clock at boot time. The system time is calculated as the number of seconds since January 1st 1970 00:00:00. (This reference time is known as epoch time or sometimes UNIX time.) The system clock contains the current time as well as time zone information.
- Hardware Clock: This is a battery-powered clock that keeps time even when the system is shut down. When the system boots, the system clock is set using the value of the hardware clock. When the system is shut down, the hardware clock is set to the value of the system clock. This ensures that both the clocks are synchronized. The hardware clock is also known as the real time clock (RTC) or the CMOS/BIOS clock. The hardware clock stores the following values: year, month, day, hour, minute, and seconds.
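The epoch convention mentioned above can be verified in a couple of lines: timestamp 0 is exactly midnight UTC on January 1st 1970, and a system clock value is just the seconds elapsed since that instant.

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch: 1970-01-01 00:00:00 UTC
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())        # 1970-01-01T00:00:00+00:00

# 86400 seconds (one day) later
one_day_later = datetime.fromtimestamp(86400, tz=timezone.utc)
print(one_day_later.date())     # 1970-01-02
```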
- Maintaining the Hardware Clock
- The hwclock (hardware clock) command is used by the root user to update and query the hardware clock. The command accesses the hardware clock by performing Input/Output (I/O) via the /dev/rtc device file.
- To view the time of the hardware clock, execute the following command as root: *hwclock*
- To set the value of the hardware clock, execute the following command as root: *hwclock --set --date "1/1/2025 18:30:50"*
- The time specified must be the local time (current time zone)!
- Sometimes the values of the system clock and the hardware clock might differ; in this case you have to choose one of the following solutions:
- To set the hardware clock from the current system time, execute either of the following commands: *hwclock -w* or *hwclock --systohc*
- To set the system time from the hardware clock, execute either of the following commands: *hwclock -s* or *hwclock --hctosys*
- To view both hardware and system clocks at the same time, use: *hwclock -r; date*
- To specify the UTC or local time format of the hardware clock, use the --utc or --localtime options: *hwclock --set --date "1/1/2025 18:30:50" --utc* or *hwclock --set --date "1/1/2025 18:30:50" --localtime*
- If neither of the options is specified, then the setting which was used during the last execution of the hwclock command is used. This information is saved in the /etc/adjtime file and referenced during subsequent executions of the hwclock command.
- When the hwclock command is used to update the system clock, it refers to the /etc/localtime file to retrieve time zone details.
- Maintaining the System Clock
- The date command is used to display and set the system date and time. To view the current date and time, simply execute *date* with no arguments.
- The system date can be displayed in different formats to suit a user’s needs. For example, to display only the month, day, and year, execute the following command: *date “+%m/%d/%y”*
| Specifier | Meaning |
|---|---|
| %d | Day of month (e.g., 30) |
| %H | Hour (0-23) |
| %I | Hour (1-12) |
| %m | Month (1-12) |
| %M | Minute (0-59) |
| %S | Seconds (0-60) |
| %T | Time (%H:%M:%S) |
| %u | Day of week (1-7, 1=Monday) |
| %Y | Year |
| %F | Full date; same as %Y-%m-%d |

- In addition to displaying and setting the date, the date command is regularly used in scripts for assigning filenames with timestamps suffixed to them.
- For example you can do this: *mv applog applog_`date +%F`*
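The specifiers above can be tried against a fixed, known timestamp so the output is predictable (GNU date’s *-d* option is assumed; the applog file is hypothetical):

```shell
# Format a fixed timestamp with a few of the specifiers from the table
date -u -d "2020-03-29 15:26:07" "+%m/%d/%y"   # 03/29/20
date -u -d "2020-03-29 15:26:07" "+%F"         # 2020-03-29 (same as %Y-%m-%d)
date -u -d "2020-03-29 15:26:07" "+%T"         # 15:26:07 (same as %H:%M:%S)

# Timestamp-suffixed rename, as in the mv example above
cd "$(mktemp -d)"
touch applog
mv applog "applog_$(date +%F)"
```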
- To change the system date, execute the following command as the root user: *date -s “01/02/2025 3:00:00”*
- Displaying and Setting the Time Zone
- There are two files that manage the time zone, depending on the distribution being used:
- The /etc/localtime File
- On some distributions (e.g., CentOS), the /etc/localtime file is used to configure the time zone of the system.
- The time zone data for different regions is maintained in the /usr/share/zoneinfo directory.
- To set up a particular time zone, a symbolic link is created from the /etc/localtime file to the corresponding file in the /usr/share/zoneinfo directory.
- To set the time zone to the America Tijuana time zone (PST), execute the following ln command as the root user, then verify the change with date: *ln -sf /usr/share/zoneinfo/America/Tijuana /etc/localtime*
- This is also the Arch Linux way btw
- The /etc/timezone File
- On Debian-based systems (Ubuntu, Linux Mint) there is a secondary file /etc/timezone
- This can be changed by using a text editor to edit the /etc/timezone file to include the same time zone, relative to the /usr/share/zoneinfo directory (e.g., America/Tijuana), that was used to link to the /etc/localtime file.
- Users can override the system’s time zone by using the TZ environment variable. For example, if the client application running in Singapore needs to sync with the database instance running on a server in America, the user can change the time zone using the TZ environment variable to ensure that both applications run in the same time zone: *export TZ=America/New_York*
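A quick way to see the TZ override in action is to render the same instant in two time zones; a fixed epoch value keeps the output predictable (GNU date assumed):

```shell
# 86400 seconds after the epoch, rendered in two time zones
TZ=UTC              date -d @86400 "+%F %H:%M"   # 1970-01-02 00:00
TZ=America/New_York date -d @86400 "+%F %H:%M"   # 1970-01-01 19:00 (EST, UTC-5)

# Exporting TZ applies the override to every later command in this shell
export TZ=America/New_York
date
```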
- Network Time Protocol (NTP)
- The Network Time Protocol (NTP) is the most commonly used method for synchronizing the local server’s system time with the time provided by designated local or internet-based time servers. The precision provided by NTP is in the order of tens of millionths of a second, making it a very accurate method to maintain a computer’s system time.
- The reference time used by NTP is UTC. The NTP software will convert this to the appropriate time zone for any given system.
- The NTP package contains the NTP daemon and some additional programs to configure the service and query the time servers. Note that this package may need to be installed in some Linux distributions.
- The ntpd daemon is used to sync the time from an external time source (usually servers from the ntp.org pool)
- The ntpd daemon can also be used as a server that is queried by other systems to sync their time
Key ntpd options:
| Option | Meaning |
|---|---|
| -g | Allow ntpd to be started on a system whose clock has crossed the panic threshold (1000 secs by default) |
| -n | Do not run ntpd as a daemon (i.e., run it as a foreground process) |
| -c filename | Use the specified file for configuration instead of the default file (the /etc/ntp.conf file) |
| -N | Run at the highest possible priority |
| -q | Quit after setting the time (i.e., one-time synchronization) |
- /etc/ntp.conf File
- The /etc/ntp.conf file is the configuration file for setting up the ntpd daemon as either an NTP client or server.
- The NTP package provides a default configuration, which makes the system behave as an NTP client. If the system has access to the internet, this file should be configured automatically to use three NTP servers from the ntp.org domain.
- A sample of the file:
#List of public NTP servers to be queried
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
restrict default ignore
restrict 127.0.0.1
driftfile /var/lib/ntp/ntp.drift
logfile /var/log/ntpser.log
- *server 0.pool.ntp.org iburst*, indicates the server that the ntpd daemon queries in order to sync the time
- The iburst mode indicates that if the server is unreachable, then send a burst of eight requests instead of the usual one. It also serves to speed up the initial synchronization.
- *restrict default ignore*, the first restrict line is used to restrict access to other computers. This means that this computer will not act as an NTP server for other machines. Oddly enough, this also prevents your own system from getting date/time information from the ntpd daemon, which is the purpose of the next line.
- *restrict 127.0.0.1*, the second restrict line indicates that the localhost (127.0.0.1) will be able to monitor the ntpd daemon.
- *driftfile /var/lib/ntp/ntp.drift*, This file contains a value that is an average over time of how much the local time “drifts” from the NTP server time. Over time, this file will be consulted by the ntpd daemon to allow it to adjust the local time without having to contact the NTP servers as frequently.
- *logfile /var/log/ntpser.log*, it indicates the file where the logs are stored
- Running ntpd in server mode
- If this system is to be configured as an NTP server, then add a server line that has the current machine’s IP address and add the following restrict line: *server 127.127.1.0* *restrict default nomodify nopeer noquery*
- use *chkconfig ntpd on* to make the ntpd daemon start at boot time
- Querying NTP
- The ntpq utility is used to query NTP and monitor the performance of the ntpd daemon. It can be executed either in command line mode or interactive mode. To print a summary of the peers of this server, the following command can be used: *ntpq -pn*
- Tracing NTP
- The ntptrace utility is useful for debugging and provides the trace of the chain of NTP servers to the source. It traverses the path starting from the localhost to the time servers from which the time has been derived.
- The important fields in the output are the hostname, the stratum number, and the time difference in seconds between the two hosts in the path traversed.
- Setting Time Using ntpdate
- Ey this is kinda obsolete
- *ntpdate 2.asia.pool.ntp.org* to set the date using an ntp server
- You can do all this with *ntpd -q*
- Setting Time Using timedatectl
- Systems using systemd as their init-system use the timedatectl command to view and control time on the system.
- Running the timedatectl command without any arguments shows the current time and the time settings for the system.
- The most commonly-used arguments for the timedatectl command are listed below:
| Argument | Description |
|---|---|
| set-time | Set the system clock to the specified time. This will also update the RTC time accordingly. The time is specified as: YYYY-MM-DD HH:MM:SS (24-hour clock). |
| set-timezone | Set the system time zone to the specified value. This call will alter the /etc/localtime symlink. |
| list-timezones | List available time zones, one per line. Entries from the list can be set as the system time zone with set-timezone. |
| timesync-status | Show the current status of synchronization to the current Network Time Protocol (NTP) time source, such as the NTP server being used and the polling interval. |
| set-ntp [BOOL] | Controls whether network time synchronization is active and enabled. Passing true or false enables or disables NTP on the system. |

- use *sudo timedatectl set-time "2019-10-18 08:24:00"* to change the date to something specific. This will not work if you have NTP active
- If you dualboot and don’t want your clocks to get fucked, use this command: *sudo timedatectl set-local-rtc 1 --adjust-system-clock*
- Understanding chronyd
- As an alternative to ntpd, chrony lends itself to working well in environments with intermittent network connectivity, such as on a laptop or virtual system that may be created through an automated process. Chrony is a set of programs that are used to ensure that the clock on a system is accurate.
- The daemon portion of chrony is the command chronyd. The daemon synchronizes the system with time retrieved from NTP servers. Along with synchronizing time on the system it is running, chronyd can also operate as an NTP server providing time service to other systems that are allowed network access.
- To control chronyd, you use the chronyc program to interface with chronyd via the command line.
- It can be used in either interactive or non-interactive mode
Common chronyc commands are listed below:
| Command | Description |
|---|---|
| tracking | Displays performance statistics about the system clock |
| sources | Displays the NTP sources being used for chronyd |
| activity | Displays the status of NTP sources |
| settime <TIME> | Allows you to manually set the time used for chronyd. The format can be: hh:mm, hh:mm:ss, or Month Day, YYYY hh:mm:ss |
2.2.15. System Logging
- In the past, the syslogd and klogd daemons from the sysklogd package were the two main components that provided logging facilities for Linux. The syslogd daemon provided applications and programs with logging services, while the klogd daemon provided logging services for the Linux kernel.
- A system logging daemon with additional capabilities called syslog-ng was later released as a replacement for syslogd. The syslog-ng service provided more detailed message sorting and formatting than was available for syslogd.
- Many Linux distributions have replaced the combination of the syslogd and klogd daemons with the more recently developed rsyslogd daemon. The rsyslog service was designed as an alternative to syslog-ng.
- rsyslog uses the basic syslog protocol but expands it to provide additional capabilities such as message filtering, queuing to manage offline output, and additional configuration options. It also includes a timestamp and hostname field and often a program name field to improve the usefulness of logs.
- The rsyslogd daemon configuration settings are stored in the /etc/rsyslog.conf file. This file contains syntax largely backward compatible with the /etc/syslog.conf configuration file for the syslogd service running on legacy Linux systems.
- Log File Location
- By default, most of the log files are stored in the /var/log directory.
- For services or programs that maintain their own logging system instead of sending log messages to the rsyslogd daemon, these services typically store their log messages in a plain ASCII file that is in a subdirectory of the /var/log directory, such as the /var/log/httpd directory for the httpd daemon.
Some standard log files that are usually found in the /var/log directory are listed below:
| File | Purpose |
|---|---|
| /var/log/messages or /var/log/rsyslog | General message and system-related information |
| /var/log/secure or /var/log/auth.log | Authentication log |
| /var/log/maillog | Mail server logs |
| /var/log/kern.log | Kernel logs |
| /var/log/boot.log | System boot log |
| /var/log/cron.log | crond logs |
- rsyslogd Configuration
- For describing what will be logged, the configuration file uses a selector.
- The selector is made up of two parts: a facility and a priority, separated by a period “.” character
- An action is used to describe where to send the log information
- Each line of the configuration file will specify both a selector and an action
- In the example entry below, *authpriv.** is the selector and */var/log/secure* is the action: *authpriv.** */var/log/secure*
- Facility
_authpriv_.* */var/log/secure*
- The facility identifies the part of the system that produced some kind of message.
- For example, messages from the Linux kernel can be selected using the kern facility.
To make up the first part of a selector, the following standard facilities are identified by these keywords:
| Facility | Description |
|---|---|
| auth | Security and authorization-related commands |
| authpriv | Private authorization messages |
| cron | The cron daemon |
| daemon | System daemons |
| ftp | The ftp daemon |
| kern | The kernel |
| lpr | The BSD printer spooling system |
| mail | sendmail and other mail-related software |
| mark | Timestamps generated at regular intervals |
| news | The Usenet news system |
| security | Same as auth |
| rsyslog | rsyslogd internal messages |
| user | User processes |
| uucp | Reserved for UUCP |
| local0 to local7 | Eight flavors of local message |
- Priority
authpriv._*_ */var/log/secure*
- The other part of the selector is the priority, which defines the severity of the message.
Priority is ordered from lowest to highest in this order:
| Priority | Description |
|---|---|
| debug | For debugging only |
| info | Informational messages |
| notice | Things that might merit investigation |
| warning (or warn) | Warning messages |
| err | Other error conditions |
| crit | Critical conditions |
| alert | Urgent situations |
| emerg (or panic) | Panic situations |

- Although the priorities in parentheses are equivalent to their counterparts, they have been deprecated. In other words, even though panic could be used to mean emerg, its use is discouraged because it may not be supported in the future.
- Priorities that are specified mean not only the level specified but anything of higher priority, as well. For example, specifying a priority of err would not only log err level messages, but also crit, alert, and emerg level messages.
- There is also a special priority called none, which means do not log from that facility. Typically none is used in conjunction with wildcard settings to limit the scope of the wildcard (as will be shown in an upcoming example).
- Selector
_authpriv.*_ */var/log/secure*
- The selector is comprised of both the facility and the priority separated by a period “.” character.
- The following table illustrates some common selectors.
- Note that an asterisk * wildcard character can be used to represent either all facilities or all priorities in a selector:
| Selector | Description |
|---|---|
| *.* | All facilities and priorities |
| *.info | All facilities at info priority or higher |
| kern.* | Select all kernel messages |
| mail.warning | Messages from the mail facility at a warning priority or higher |
| cron,lpr.err | Messages from the cron or lpr facility at an err priority or higher |
| cron.err;cron.!alert | Messages from the cron facility at an err priority or higher, but not at alert priority |
| mail.=err | Only err messages from the mail facility |
| *.info;mail.none;lpr.none | Select messages from all facilities except mail and lpr |

- Action
*authpriv.** _/var/log/secure_
- Combining a selector with an action results in a complete line in the /etc/rsyslog.conf file.
- The most common action is to specify the absolute path, the file that will store the information that is selected.
- The following table demonstrates the available actions:
| Action | Description |
|---|---|
| /path/to/file | Specify the full absolute path for the log file |
| -/path/to/file | The - before the path means to not sync after writing each log message (better for system performance for log files that are often written to, such as mail log files on a mail server) |
| \|/path/to/named/pipe | Specify a pipe symbol and a path to a named pipe file created with mkfifo (make first-in, first-out) |
| /dev/tty10 | Specify a terminal or console, such as /dev/console |
| @10.0.0.1 | Specify an @ symbol with the IP address or resolvable hostname of a remote host |
| student,maya,joe | Specify a list of users whose terminals will have the message displayed if the users are currently logged into the system |
| * | Send to the terminal of everyone who is logged in |
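Putting facilities, priorities, and actions together, a hypothetical fragment of the /etc/rsyslog.conf file might look like this (the file paths are illustrative):

```
# Kernel messages at warning priority or higher, to a dedicated file
kern.warning                    /var/log/kern-warn.log
# Everything at info or higher, except mail, without syncing each write
*.info;mail.none                -/var/log/messages
# Only exact err-level mail messages
mail.=err                       /var/log/mail.err
# Emergencies go to every logged-in user and to a remote log host
*.emerg                         *
*.emerg                         @10.0.0.1
```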
- logger Command
- The logger command is used to send messages to the system logging facility.
The following options can be used with logger:
| Option | Purpose |
|---|---|
| -i | Log the process id of the logger process |
| -s | Log the message to standard error and the system log |
| -f file | Use the message found in the specified file |
| -p selector | Send the message as the selector, like mail.info |
| -t tag | Mark the message line in the log with a tag |

- One of the main uses of the logger command is to verify that the entries that have been made in the rsyslog.conf file are working as expected.
- For example, consider the following entry which is designed to isolate mail facility errors into a file named /var/log/mail.err: *mail.=err /var/log/mail.err*
- After restarting rsyslogd, this entry could be tested by using the following logger command: *logger -t TEST -p mail.err 'Testing mail.err entry'*
- Viewing the contents of the /var/log/mail.err file should then show the test entry: *tail -5 /var/log/mail.err*
- Managing Logs with logrotate
- The logrotate tool is used to allow a system administrator to automate the rotation of log files with different settings for different services.
- General settings for logrotate are controlled by the /etc/logrotate.conf file and service-specific settings are controlled with configuration files in the /etc/logrotate.d/ directory.
- The /etc/logrotate.conf file contains directives for the default configuration of the logrotate utility.
The following table summarizes the settings found in the /etc/logrotate.conf configuration file:
| Directive | Purpose |
|---|---|
| weekly/daily/monthly/yearly | Rotates the logs at the specified time interval |
| rotate 4 | Determines how many rotated logs are kept before logrotate deletes older logs |
| compress | Tells logrotate to compress rotated logs |
| missingok | Tells logrotate not to return an error if the log file is not found |

- Files in the /etc/logrotate.d directory are loaded by the *include /etc/logrotate.d* statement in the /etc/logrotate.conf file.
- These files allow the system administrator to have different configurations for the logs of different services.
- If a specific setting is set for the log file in /etc/logrotate.conf or a configuration file in /etc/logrotate.d/, it will override the defaults.
- This is an example of a service having specific settings in the */etc/logrotate.conf* or */etc/logrotate.d* file:
/var/log/apt/term.log {
    rotate 12
    monthly
    compress
    missingok
}

/var/log/history.log {
    rotate 12
    monthly
    compress
    missingok
}
- systemd journal
- On systems using systemd as their init system, rsyslog has been replaced by the systemd-journal
- Instead of keeping the logs in plain text format like rsyslog, systemd-journald keeps them in binary form in /var/log/journal
- In order to read the logs the utility *journalctl* is used
- systemd journal Configuration
- The /etc/systemd/journald.conf file controls the systemd-journal. Its most commonly used directive controls the amount of space used for storing persistent logs in /var/log/journal, if that directory exists.
- Otherwise, the systemd-journal stores logs in volatile memory (RAM) located at /run/log/journal.
- Persistent storage ensures that the data remains available across reboots and is not lost when the system powers down.
- Files stored in volatile memory disappear when a computer is reset.
- By default, the systemd-journal uses up to 10% of the total partition space for persistent journal storage, with a cap of 4GB.
- However, that setting can be controlled with the SystemMaxUse directive in /etc/systemd/journald.conf
The following table summarizes common directives used in the /etc/systemd/journald.conf configuration file:
| Directive | Purpose |
|---|---|
| Storage | Determines how the journal will be stored. The volatile option keeps the journal only in memory. The persistent option stores the log data on the disk. The auto option also stores to the disk, but will not create the log directory if it doesn't already exist. The none option does not store the journal data, but only displays it on the console. |
| Compress | Specifies if the journal logs should be compressed or not. |
| SystemMaxUse | Limits the amount of space the journal can use on the disk. By default, the limit is set to 10% of the total disk space with a cap of 4GB. |
| SystemMaxFileSize | Specifies the maximum size that an individual journal file can be before the file is rotated. |
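As a sketch, an /etc/systemd/journald.conf using these directives might look like the following (the values are illustrative, not recommendations):

```
[Journal]
# Keep the journal on disk across reboots
Storage=persistent
# Compress stored entries
Compress=yes
# Never let the journal grow past 2G on this partition
SystemMaxUse=2G
# Rotate an individual journal file once it reaches 256M
SystemMaxFileSize=256M
```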
- systemd journal Log Management
- To interact with the systemd-journald, the journalctl command is used
- The output from the journalctl command uses a pager by default, so the Arrow Keys and the Page Up/Page Down keys can be used to navigate the output.
- In addition, the output from journalctl can span months and provide more information than is needed; therefore, command flags can be used to help narrow down the output.
The table below provides example flags for the journalctl command:
| Option | Purpose |
|---|---|
| -b | Limits output to only journal data since the last time the system booted. |
| -u <systemd unit> | Limits output to only contain output from the specified systemd unit. An example would be *journalctl -u postfix*. |
| -n <number> | Shows only the last <number> lines. |
| -r | Reverses chronology. Shows logs with the newest first and then each older entry in order. |

- The power of journalctl comes from the ability to use flags at the same time to output the data needed from the logs
- For example, this command will show only entries since the last boot (-b), in reverse chronological order (-r), and only for the systemd-timedated systemd unit (-u systemd-timedated): *journalctl -b -r -u systemd-timedated*
- To manage the log files created by systemd-journald, the journalctl command has flags to clear the log and set rotation due to time or size limits
These flags are summarized in the table below:
| Option | Purpose |
|---|---|
| --rotate | Rotates all of the systemd-journald log files immediately. |
| --vacuum-time=<time> | Removes any systemd-journald log data older than the time specified. Time can be given in minutes (m), hours (h), weeks, or months. |
| --vacuum-size=<size> | Removes the oldest systemd-journald log data until the remaining data takes less than the size listed. |

- To demonstrate, the following command clears all of the systemd-journald log data that is older than 2 weeks: *journalctl --vacuum-time=2weeks*
- If a system is failing the boot sequence, the journal may contain important data. In order to recover the logs, mount the drive of the failing machine on a working one and use the *systemd-nspawn* command to view the logs of the failing machine
- After mounting the failed system’s disk, launch a systemd-nspawn container that allows access to the systemd-journald: *systemd-nspawn --directory /mnt/failedsys --boot -- --unit rescue.target*
- Once the systemd-nspawn container has been spawned, the normal journalctl commands can be used to view the failed system’s systemd-journald logs
- Containers allow system designers to bypass traditional operating systems and access computing resources differently
- The systemd-nspawn service creates a type of container called a namespace container that runs on a partitioned set of kernel resources and operates in a separate environment
- Therefore, the systemd-nspawn command is derived from the action of spawning a namespace.
- systemd-cat
- Since systemd-journald stores data in a binary database, instead of text files, adding data to the logs requires the use of a tool
- The systemd-cat command allows you to add to the systemd-journald data
- The output from a command can be piped into systemd-cat to have the output from the command entered into the logs
- Similar to the logger command, using systemd-cat to send command output to logs can be used to verify that entries that have been made in the /etc/systemd/journald.conf file are working as expected.
- The systemd-cat command can be executed with the following syntax: *systemd-cat [OPTIONS…] [COMMAND] [ARGUMENTS…]*
- When piping a command to systemd-cat, all of the output is added to the systemd-journald, and no output shows up on the screen.
- To allow the output to show on the screen and add the output to the logs, you can use the tee command.
- For example: *ps | tee /dev/tty1 | systemd-cat* will show both the output of the command and pipe it into systemd-journald
- If journaling is disabled on a production server, it could indicate that the system has been compromised. The following output can verify whether journaling has been disabled: *echo “Hello” | systemd-cat*
- If it works, then everything is fine; if you get an error, it means that journaling has been disabled
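The tee pattern from the *ps | tee /dev/tty1 | systemd-cat* example can be sketched without a running journal by letting a plain file stand in for systemd-cat (both file paths here are hypothetical):

```shell
# tee duplicates the stream: one copy to the named file (the "screen"
# in the original example), one copy down the pipe (the journal stand-in)
printf 'Hello\n' | tee /tmp/screen_copy.txt > /tmp/journal_standin.txt
cat /tmp/screen_copy.txt /tmp/journal_standin.txt   # both contain Hello
```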
2.2.16. Email Configuration
- The program or email client used to retrieve, read, and compose email is known as a Mail User Agent (MUA)
- mail Command
- The mail command is a built-in text-based Mail User Agent (MUA) for Linux that does not support attachments
- All the basic end-user operations such as sending, reading, replying, and deleting mail can be performed using the various command options that the mail command provides
- To view the mailbox of the current user, execute the mail command
- A summary of the inbox is displayed, followed by the ? mail prompt where subsequent mail commands can be entered
To view the list of all commands that are available while in the mail utility, type *list* after the mail prompt
| Command | Used to |
|---|---|
| n | Read the next message (same as pressing Enter) |
| h | Display header information for all messages |
| q | Quit mail and preserve unread messages |
| x | Exit mail and preserve all messages (even if deleted) |
| r [message #] | Reply to current (indicated by >) or specified message # |
| p | Print the message again (re-read) |
| d [message list] | Delete current (indicated by >) or specified message(s) |

- To view the message contents, type the message number (from the 2nd column of the list of mail messages) after the mail prompt
- If you choose to reply to an email you will have to fill the *To* and *Subject* fields first and progress by pressing Enter.
- To send the email press *Ctrl + D*
- Startup Options
Some of the key options of the mail command are:
| Option | Meaning |
|---|---|
| -f filename | Read and process the contents of the mailbox or the specified file |
| -n | Do not read /etc/mail.rc at startup time |
- Sending Mail
- To send new mail to a user, type mail on the command line, followed by the recipient’s email address.
- The program will request a Subject:, which can be left blank by pressing the Enter key.
- After providing the Subject:, the cursor is placed on a blank line to type the body of your email.
- Once the message is completed, type Ctrl + D on a blank line.
- mailq Command
- Normally, messages are stored in a mail queue (“post office”) where they are held for a short period of time until emails that arrived first are sent
- This is known as the FIFO (first-in, first-out) method of delivery
- Once the message has been sent to a remote mail server, it is removed from the mail queue.
- If a message is in the mail queue for more than a few minutes, there is likely a problem with the delivery.
- This could be a temporary problem, as in the case when the remote mail server that the message is being delivered to is down
- Or, this could indicate a more serious problem, such as a misconfigured local mail server.
- The mailq command is used to query the mail messages queued for delivery. To view the current list of messages in the queue, execute the following command: *mailq*
The fields in the output are described below:
| Field | Significance |
|---|---|
| Queue ID | Queue file ID, which contains an * or ! character. An * means the message is in the active queue and is waiting to be delivered (or re-sent). An ! means the message is on hold and no further delivery attempts will be made. |
| Size | Message size |
| Arrival Time | Timestamp when the message arrived in the mail queue |
| Sender/Recipients | Email IDs of the sender and the recipients to whom delivery is still pending |

- The behavior of the mailq command is identical to the *sendmail -bp* command for systems that are using the sendmail service.
- To have messages in the mail queue re-sent, use the -q option to the mailq command.
- Aliasing Email Address
- Mail aliasing is a feature that allows alternative names (aliases) to be set up so that when the alias is entered as the recipient name, the message will be sent to the email address or group of email addresses (depending on how the alias is set up) that the alias represents
- The /etc/mail/aliases file defines the aliases.
- An alias can be created to an email address, a user name, a file, a command, or another alias.
- The entries in the file are in key-value format, as follows: *aliasname: name1, name2, name3*
- For example, to deliver all messages that are sent to the local support mailbox to the development team (ted, jaime, olivia, ian, and rita), place the following line in the /etc/mail/aliases file: *support: ted, jaime, olivia, ian, rita*
- This will deliver messages meant for the support mailbox to the mailboxes for the specified users.
- To deliver all messages for a user to a set of alternative mailboxes, place the following line in the /etc/mail/aliases file: *user1: user1@example.com, user1@example2.com*
- To send messages that are destined to the “applog” mailbox to a logging application (/usr/local/bin/trackissues), place the following line in the /etc/mail/aliases file: *applog: |/usr/local/bin/trackissues*
- To automatically discard email destined to system accounts (e.g., bin) or a user account (e.g., boss), redirect them to the /dev/null file in the /etc/mail/aliases file. For example: *bin: /dev/null* *boss: /dev/null*
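Taken together, a hypothetical /etc/mail/aliases combining the cases above could look like this (run *newaliases* after any edit so the aliases.db database is rebuilt):

```
# Group alias: mail for "support" goes to the whole team
support: ted, jaime, olivia, ian, rita
# Pipe alias: mail for "applog" is fed to a logging program
applog: |/usr/local/bin/trackissues
# Discard alias: mail for "bin" is thrown away
bin: /dev/null
```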
- The sendmail program, a popular MTA, does not understand the format of the /etc/mail/aliases file, which is a flat data file.
- The sendmail program reads a binary format of the /etc/mail/aliases file, the */etc/mail/aliases.db* file.
- The *aliases.db* file stores the records in database format along with indexes to facilitate faster lookups.
- The */etc/mail/aliases.db* file is created by the *newaliases* command from the data provided in the /etc/mail/aliases file
- The *newaliases* command must be executed each time the */etc/mail/aliases* file is updated to build the */etc/mail/aliases.db* database.
- Mail Forwarding
- The *~/.forward* file, when placed in a user’s home directory, is used for automatically forwarding mail as it is received.
- When a user receives an email, the MTA program checks the user’s home directory for the *~/.forward* file
- If the file exists, then the message is sent to the address(es) or alias(es) specified in this file.
- A sample *~/.forward* file: *support, psgsupport*
- This will forward the incoming messages to the mailboxes of support and psgsupport
- Once the message is forwarded, a copy will not be retained in the user’s mailbox.
- Program names can also be specified in the *~/.forward* file.
- For example, the following will forward the incoming messages to the mailbox of support and to the vacation command: *support, “|/usr/bin/vacation”*
- The vacation utility is used to send auto-responses to mail. Note that it is not installed on every Linux distribution by default.
- SMTP Mail Protocol and Mail Transfer Agents
- Simple Mail Transfer Protocol (SMTP) is the standard protocol for communication between email servers. Most email systems that send mail over the internet use SMTP to send messages from one server to another;
- The messages are then retrieved with an email client using either POP3 (Post Office Protocol) or IMAP (Internet Message Access Protocol).
- SMTP can transfer mail over the same network or to some other network via a gateway. It uses TCP port 25 for communication
- While configuring an email client, like Thunderbird, it is essential to specify the address of the SMTP server for outgoing mail.
- There are many MTAs available, each with their own strengths and weaknesses.
- Four of the most popular MTAs found on Linux systems are: sendmail, postfix, qmail, and exim
- qmail is legacy
- Sendmail
- The first version of sendmail was released in 1979 and was known as delivermail.
- Sendmail uses DNS (Domain Name System) for translating hostnames into their network addresses. It is designed to transport messages between various types of systems such as Solaris, Linux, and AIX.
- Sendmail has two major components: the sendmail program (referred to as the sendmail binary) and the sendmail configuration file (/etc/mail/sendmail.cf) to allow for complex customization.
- When a message arrives for delivery, it is processed as follows:
- If both the recipient and the sender are on the same machine, then sendmail delivers the message directly.
- If the sender’s and recipient’s machines share a UUCP (Unix to Unix Copy) connection, then sendmail uses the uux program to deliver the message.
- If the recipient’s address is an internet address, then sendmail uses SMTP to deliver the message.
- Since not all messages can be delivered instantaneously, an intermediate storage location is required to hold messages for sending later.
- Sendmail saves such messages in queues, which are files or directories on the file system
- A message will be queued under the following conditions:
- Sendmail can be configured to queue all messages by default to protect against message loss in case the system crashes.
- If a message is intended for multiple recipients and delivery to some of the recipients fails, then the failed messages will be queued and retried again at a later time.
- If the destination machine is unreachable for any reason, then the message will be queued and scheduled for delivery only when the machine becomes available again.
- The header of a message is the most important component from the sendmail program’s perspective.
- The sendmail program will analyze the header for routing information and, based on the rules in the configuration file, process the message.
- The sendmail daemon manages the mail service. The /etc/mail/sendmail.cf file is used to configure the sendmail daemon.
- sendmail Command
- The main function of the sendmail command is to deliver pre-formatted messages
- The sendmail command is an alternative to the simpler mail command.
Some of the key options of the sendmail command are:
| Option | Meaning |
|---|---|
| -B type | Set the message's body type to type; allowed values are 7BIT or 8BITMIME |
| -bd | Run in the background as a daemon |
| -bD | Run as a foreground process |
| -bi | Initialize the alias database |
| -bp | List the mail queue |
| -bv | Verify the address without sending an actual message |
| -C file | Use the specified file as the configuration file |
| -R return | Used when a message bounces. If set to full, then the entire message will be returned. If set to hdrs, then only the header will be returned |
| -t | Read message for recipients. The To:, Cc:, and Bcc: lines will be searched for valid recipient addresses |

- To send mail to the root user on the local system, execute the following command: *sendmail root@localhost*
- After entering the previous command, the cursor is placed on a blank line where the message can be entered via standard input (the keyboard).
- To send the message, enter a . (period) on a new line and press the Enter key
Message contents can be specified in a file and read by sendmail instead of typing manually like the example above. To use this method, create a file (i.e., sendmail.msg1) with the following contents:

*From: sysadmin@localhost*
*To: root@localhost*
*Subject: Test*

*This is a test message!*
- To process this file, execute the following command: *sendmail -t -i < sendmail.msg1*
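The same flow can be sketched as a runnable script, stopping short of the actual send since sendmail may not be installed (the send command is shown as a comment):

```shell
# Build a message file in the header / blank line / body format above
cat > /tmp/sendmail.msg1 <<'EOF'
From: sysadmin@localhost
To: root@localhost
Subject: Test

This is a test message!
EOF

# -t reads recipients from the To:/Cc:/Bcc: headers; -i keeps a lone
# "." on a line in the body from ending the message early.
# On a host with sendmail installed:
#   sendmail -t -i < /tmp/sendmail.msg1

# Verify the recipient header was written as expected
grep '^To:' /tmp/sendmail.msg1   # To: root@localhost
```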
2.2.17. Printer Management
- The /etc/cups directory contains the configuration files for CUPS. The key configuration files for CUPS are as follows:
| File Name | Description |
|---|---|
| cupsd.conf | Server configuration file |
| printers.conf | Configuration file for individual printers |
| classes.conf | Configuration file for printer classes (groups of printers) |
| snmp.conf | Configuration file to regulate remote browse access |
| ppd/ | Directory for printer drivers for the printers configured on the server |
| ssl/ | Directory for SSL encryption keys for remote access |
- The /etc/cups/cupsd.conf file is used for configuring the CUPS server. Some of the commonly used directives in this file are as follows:
| Directive | Meaning |
|---|---|
| Allow | Allow access from the specified hostnames/addresses |
| Listen | Listen to the specified hostname/address |
| AccessLog | Access log file name |
| AuthType | Authentication type; valid values are: None (default value), Basic or Digest |
| DataDir | Directory for the data files |
| DefaultCharSet | Default charset for text |
| DefaultLanguage | Default language to be used for web and text content |
| Deny | Deny access to the specified hostnames/addresses |
| MaxCopies | Maximum number of copies that a user can print per job (default is 9999) |
| Browsing | Enables or disables browsing for locating remote printers (enabled by default) |
| BrowseOrder | Specify the order of access control (Deny,Allow or Allow,Deny) |
| BrowseAllow | Allow incoming printer information packets from the specified hostnames/addresses |
| BrowsePort | Port to listen to for printer information packets |
- The HTTP access control used in the CUPS configuration is adapted from Apache Server. The Allow and Deny keywords make it possible to allow, as well as deny, access by specifying the hostname/address. The Order keyword ensures that conditional access can be given.
For example, to allow access to all hosts in the netdevgroup.com domain while excluding those who are in the uk.netdevgroup.com subdomain, use the following directive:
*Order Allow,Deny*
*Allow netdevgroup.com*
*Deny uk.netdevgroup.com*
- The /etc/cups/printers.conf file is used by the cupsd daemon to store the list of available local printers.
- However, the easiest method to configure CUPS is through the web interface, which can be accessed by using: http://localhost:631
- The /etc/cups/classes.conf file is used by the cupsd daemon to store the list of available local classes. Print classes are a set of printers that have been assigned a single name, so when a print job is sent to a print class, it will be printed by the first printer available in that class.
- By default, CUPS can only be administered by the root user. Users who are members of the group specified in the SystemGroup directive in the /etc/cups/cupsd.conf file can also administer CUPS.
- The command line alternative to the CUPS Web Interface program for adding CUPS printers and classes is the lpadmin command. For example, to add a new local printer, execute the following command: *lpadmin -p testprinter -E -v parallel:/dev/lp*
- This will add a new printer called testprinter on the parallel port. The -E option enables the printer and accepts jobs.
- To make testprinter the default printer, execute the following command: *lpadmin -d testprinter*
- To delete testprinter, execute the following command: *lpadmin -x testprinter*
- CUPS Scheduler
- The scheduler stores job files in the /var/spool/cups directory. Every print job scheduled will have one control file containing IPP message data and one or more data files.
Some of the key options of cupsd are:
| Option | Meaning |
|---|---|
| -c configfile | Use the specified configuration file instead of the default (/etc/cups/cupsd.conf) |
| -f | Run as a foreground process |
| -F | Run as a foreground process but detach from the controlling terminal |
| -t | Verify the syntax of the configuration file |
- CUPS Printing Queues
- The queues can be added in CUPS by either using the lpadmin command or by using the CUPS Web Interface.
- The queues can be of the following types:
- Locally-connected printer.
- Networked IPP (CUPS) – Refers to the queue of another CUPS printer server on the network.
- Networked UNIX LPD – Refers to the queue of an LPD server on the network.
- Networked Windows (SMB) – Refers to the queue of a Windows-based print server on the network.
- Networked Novell – Refers to the queue of a printer connected to the Novell Netware server on the network.
- Networked JetDirect – Refers to the queue of a network-connected Hewlett-Packard printer that prints data received on a TCP/IP port.
- To add a new printer queue using the lpadmin command, execute the following: *lpadmin -p news -h localhost -v /dev/npp0*
- The cupsenable and cupsdisable utilities are used to start and stop printers and classes, respectively.
- The utilities used to accept and reject print jobs to the specified destination are cupsaccept and cupsreject, respectively.
- To start a printer and enable queuing to accept jobs for a printer named salesdept, execute the following commands: *cupsenable salesdept* *cupsaccept salesdept*
- To stop queuing new jobs on a printer named news, execute the following command: *cupsreject news*
- Troubleshooting General Printing Problems
- The first thing to do when an error occurs in the CUPS service is to review the log files in the /var/log/cups directory. The different log files that are created:
- Access Log – The accesslog file contains the list of HTTP resources accessed by the clients or through any web browser. It uses a log format, which is identical to that used by web servers.
- Page Log – The pagelog file contains the accounting data for print jobs. This file will show information such as the printer name, user name, job number, date and time, current page number, and the number of copies.
- Error Log – The errorlog file contains error and warning messages from the scheduler. The data captured in this file depends on the setting of the LogLevel directive in the cupsd.conf file.
- Understanding LPD
- The lpd daemon, which is typically started at boot time, handles the spooling of jobs. When a new job is queued using the lpr command, it will check for an available printer and then send the data to the printer.
- The lpd daemon uses the /etc/printcap file to discover the list of available printers. The format of this file is not user-friendly and makes configuring lpd complex.
- The lpr command (line printer) is used to send print jobs to the printer. If a file name is specified, then it will be sent to the printer; otherwise, the data from standard input will be sent to the printer.
- For example, to print the info.txt file to the default printer, execute the following command: *lpr info.txt*
- To send the info.txt file to a specific printer named floor1, execute the following command: *lpr -P floor1 info.txt*
- To print 3 copies of the info.txt file, execute the following command: *lpr -# 3 info.txt*
- The line printer remove lprm command is used to delete queued print jobs
- To remove a job, either the user name or the job name could be specified. If the job name refers to a job currently being printed, then printing will be stopped and restarted after removing this print job.
- For example, to remove the last job that was submitted, execute the following command: *lprm*
- This will delete the last job submitted by the user. A user is permitted to delete their jobs only. The root user can remove print jobs of other users.
- To remove all print jobs from all queues, use the following command: *lprm -a all*
- The line printer queue lpq command is used to view the printer’s queue status.
- To remove job 3 from the q2 printer's queue, use the following command: *lprm -P q2 3*
- To view the status of the q2 printer, execute the following command: *lpq -P q2*
- To view the status of all printers, execute the following command: *lpq -a*
2.2.18. Networking Fundamentals
This is the OSI Table:
| Layer | Purpose |
|---|---|
| 7. Application | User interface, Application Programming Interface (API) |
| 6. Presentation | Data representation, encryption |
| 5. Session | Controls connections between hosts, maintains ports and sessions |
| 4. Transport | Uses transmission protocols to transmit data (TCP, UDP) |
| 3. Network | Determines path of data, IP |
| 2. Data Link | Physical addressing (MAC), delivery of frames (Protocol Data Units (PDU)) |
| 1. Physical | Transmits raw data between physical media |
- IPv4 Addresses
- The IP address is made up of 4 octets, which are sets of 8-bit values.
- The value of each of these octets can range from decimal values 0 – 255 (or in binary, 00000000-11111111).
*00001010.00001001.00001000.00000001*
(Octet #1 . Octet #2 . Octet #3 . Octet #4)
*Note*
The 8-bit binary format can be translated to a decimal value by using multipliers. Only bits set to 1 contribute; each bit's multiplier is a power of 2 based on its position in the octet, and bits set to 0 contribute nothing. The following shows the multiplier for each bit position:

| Multiplier | 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
|---|---|---|---|---|---|---|---|---|
| Value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |

To demonstrate, adding the values of the set bits (8 + 2) shows that the octet 00001010 has a decimal value of 10:

| Octet | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
|---|---|---|---|---|---|---|---|---|
| Value | 0 | 0 | 0 | 0 | 8 | 0 | 2 | 0 |
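Bash can check the conversion directly with its `base#number` arithmetic syntax:

```shell
# 2#... tells the shell to interpret the digits as binary
echo $((2#00001010))   # 10

# Equivalently, sum the place values of the set bits (bits 3 and 1)
echo $((8 + 2))        # 10
```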
- An IPv4 network addressing scheme has been designed on the basis of the octets. It classifies networks into 5 classes: A, B, C, D, and E.
- Class A – The network is denoted by the first octet, and the remaining three octets are used to create subnets (to be discussed) or identify hosts on the network. The first bit of the first octet is always 0, so the range of values permissible is 00000001 – 01111111, i.e., 1 – 127 in decimal value (the first number of an IP address cannot be 0 by the definition of IP addresses). The network ID cannot have all bits set to either 1s or 0s, which means the total number of class A networks available is only 127. However, the 127 network is a special network referred to as a loopback network, not a real class A network that is used on the internet. An example of a class A address would be 65.16.45.126.
- Class B – The network is denoted by the 1st and 2nd octets, and the remaining 2 octets are used to create subnets or identify hosts. The 1st and 2nd bits of the 1st octet are set to 1 and 0 respectively, so the range of values permissible is 10000000 – 10111111, i.e., decimals 128 - 191. An example of a Class B address would be 165.16.45.126.
- Class C – The network is denoted by the 1st, 2nd, and 3rd octets, and the last octet is used to create subnets or identify hosts. The 1st, 2nd, and 3rd bits of the 1st octet are set to 1, 1, and 0 respectively, so the range of values permissible is 11000000 – 11011111, i.e., decimals 192 – 223. An example of a Class C address would be 205.16.45.126.
- Class D – These addresses are not assigned to network interfaces and are used for multicast operations such as audio-video streaming. The 1st, 2nd, 3rd, and 4th bits of the first octet are set to 1, 1, 1, and 0 respectively, so the range of values permissible is 11100000 - 11101111, i.e., decimals 224 - 239. An example of a Class D address would be 224.0.0.6.
- Class E – These addresses are reserved for future use.
- Understanding Network Masks
- In order for computers to communicate directly on the same network (i.e. not connected to another network via a router or gateway), all of the computers must be on the same subnet. A subnet is either an entire class A, B, or C network or a portion of one of these networks. To take a large class network and create a smaller portion, use a subnet mask.
- The subnet mask is used to differentiate the network and subnet components of the IP address. The subnet mask is not an IP address in itself; it is a numeric pattern used to indicate the portion of the IP address that contains the network identifier. The service provider will allocate a network from Class A, B, or C type, and use subnets to logically partition the network so that multiple sub-networks can be created.
The addresses for Class A, B, and C have default masks as follows:
| Network Class | Subnet Mask |
|---|---|
| Class A | 255.0.0.0 |
| Class B | 255.255.0.0 |
| Class C | 255.255.255.0 |

- For example, consider a standard Class A IP address 10.9.8.1 and its default subnet mask expressed in binary format:
- IP Address: *00001010. 00001001. 00001000. 00000001*
- Subnet Mask: *11111111. 00000000. 00000000. 00000000*
- In the subnet mask, the octets where the mask bits are 1 represent the network ID, whereas the octets where the mask bits are 0 represent the host ID.
*11111111. 00000000. 00000000. 00000000*
(Network ID: octet #1; Host ID: octets #2 through #4)
- In the previous example IP Address 10.9.8.1, the network ID is 10 and the host ID is 9.8.1.
- For an example of a custom subnet, assume that a class C network, 202.16.8.0, has been allocated. Network addresses with custom subnets are typically assigned by your internet service provider (ISP) and are used for systems that need to access the internet directly, such as web or email servers. In this case, you would take two bits from its default subnet mask and replace them with 1s as follows:
- Default Mask: *11111111. 11111111. 11111111. 00000000*
- Subnet Mask: *11111111. 11111111. 11111111. 11000000*
- Using the two bits will give 4 (2^2) subnets, and the remaining 6 bits will give 64 (2^6) host addresses for each subnet. The address ranges will be as follows:
- 202.16.8.0 - 202.16.8.63
- 202.16.8.64 - 202.16.8.127
- 202.16.8.128 - 202.16.8.191
- 202.16.8.192 - 202.16.8.255
- The subnet mask 255.255.255.192 has partitioned the Class C network address into 4 sub-networks, and each of these sub-networks can be assigned to a particular group of machines.
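The subnet and host counts follow directly from the bit split (2 subnet bits, 6 host bits) and can be checked with shell arithmetic:

```shell
echo $((2**2))    # 4 subnets
echo $((2**6))    # 64 addresses per subnet
# 4 subnets of 64 addresses cover the whole 256-address Class C block
echo $((4 * 64))  # 256
```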
- Public and Private IPv4 Addresses
- There are two types of IP addresses used on a network: public and private. The InterNIC (Network Information Center) is the global body responsible for assigning public addresses. They assign class-based network IPs, which are always unique. The public addresses are available with internet routers so that data can be delivered correctly.
- Most organizations use the internet for email and for browsing the web. To accomplish this, only systems such as email servers, web servers, and proxies (which handle intermediary requests from clients seeking resources from other servers) need direct connectivity to the internet so that users outside of the LAN can connect to these servers. On some occasions, additional services, like file-sharing servers, will need to have a public IP address for direct connection to the internet.
- On the other hand, users who work on their own machines and want to connect to the internet do not need a globally unique IP address. Instead, they can be assigned private IP addresses, which are then converted to public IP addresses by the gateway/router.
- There are three blocks of private addresses:
- *10.0.0.0/8* This Class A address allows the range of addresses from 10.0.0.1 to 10.255.255.254. The 24 bits from the host ID are available for subnetting.
- *172.16.0.0/12* This Class B network allows the range of addresses from 172.16.0.1 to 172.31.255.254. The 20 bits from the host ID are available for subnetting.
- *192.168.0.0/16* This Class C network allows the range of addresses from 192.168.0.1 to 192.168.255.254. The 16 bits from the host ID are available for subnetting.
- Comparing IPv4 and IPv6
- The IPv4 addresses are made up of four 8-bit octets for a total of 32 bits. This means the maximum number of possible addresses is 2^32, or 4,294,967,296.
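The 32-bit address count can be verified with shell arithmetic (the IPv6 figure of 2^128 overflows 64-bit shell integers, so only the IPv4 figure is shown):

```shell
# Total number of 32-bit IPv4 addresses
echo $((2**32))   # 4294967296
```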
- The IPv6 addresses are based on 128 bits. Using similar calculations, as shown above, the maximum number of possible addresses is 2^128, which gives a massively large pool of addresses. The IPv6 addresses consist of eight 16-bit segments, which means each segment can have 2^16 possible values. IPv6 addresses are usually expressed in hexadecimal format.
- A brief comparison of IPv4 vs. IPv6:
| Feature | IPv4 | IPv6 |
|---|---|---|
| Address Size | 32-bit | 128-bit |
| Address Format | Decimal dotted quad (192.168.20.8) | Hex notation (4AAE:F200:0342:AA00:0135:4680:7901:ABCD) |
| Number of addresses | 2^32 | 2^128 |
| Broadcasting | Uses broadcasting to send data to all hosts on a subnet | No broadcast addresses; uses multicast scoped addresses as a way to selectively broadcast |

- Default Route
- All devices have routing tables, which contain routes used to calculate the optimal journey of the messages that they are responsible for forwarding through other routers in the same or other networks. When a computer sends packets to another computer, it consults its routing table. If a packet is being sent to a destination on the same subnet, no routing is needed, and the packet is sent directly to the computer. If a packet is being sent to the internet or another network, then the first "hop" is whatever is in the default gateway field (a network setting with the IP address of the network router), and the router (a network device that forwards IP packets) decides the optimal path forward.
- The router for the network will have its own routing table, including its own default route (which router to send packets to when the destination is in another network or subnet). The routing table is a list of other routers that are connected to the current router. If the router receives a packet for a network destination that it has in its routing table (typically, this will be another local network), it simply forwards it. Otherwise, the router will send the packet to its default route (typically, this will be the way to get to the internet).
To view the existing routing table, execute the route command:
| Destination | Gateway | Genmask | Flags | Metric | Ref | Use | Iface |
|---|---|---|---|---|---|---|---|
| default | 192.168.1.1 | 0.0.0.0 | UG | 0 | 0 | 0 | eth1 |
| 192.168.1.0 | * | 255.255.255.0 | U | 0 | 0 | 0 | eth1 |
| 192.168.2.0 | * | 255.255.255.0 | U | 0 | 0 | 0 | eth0 |

- In the output of the kernel routing table, the first column contains the Destination network address. The word default signifies the default route.
- The second column contains the defined Gateway for the specified destination. In the event that an asterisk * is shown, it means that a gateway is not needed to access the destination network.
- The Genmask column shows the netmask for the destination network.
- In the Flags column, a U means the route is up and available, whereas the G means that the specified gateway should be used for this route.
- The Metric column defines the distance to the destination. This is typically listed in the number of hops (the number of routers between source and destination).
- The Ref column is not used by the Linux kernel.
- The Use column is used to define the number of lookups for the route.
- The Iface column is used to define the exit interface for this route.
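As a small sketch, the gateway and interface columns of a route-style line can be pulled out with awk (the sample line below is hypothetical, in the format shown above):

```shell
# A line in the format of the kernel routing table output
line="default 192.168.1.1 0.0.0.0 UG 0 0 0 eth1"
# Field 2 is the Gateway, field 8 the exit Iface
echo "$line" | awk '{print $2, $8}'   # 192.168.1.1 eth1
```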
- The default route in this example has been configured to use the eth1 interface, the second network card on the system. The network device with an IP address of 192.168.1.1 is designated as the default gateway. The default gateway is a router that will pass packets from this network to another network. The address that is specified for the default gateway must be on a network to which your system is connected.
- To understand the previous output, consider the following:
- If the destination address of the network packet is in the 192.168.2.0/24 network, then the packet is broadcast on the network that the eth0 network card is attached to. The source IP address of the packet will be 192.168.2.1 (the IP address assigned to the eth0 network card).
- If the destination address of the network packet is in the 192.168.1.0/24 network, then the packet is broadcast on the network that the eth1 network card is attached to. The source IP address of the packet will be 192.168.1.106 (the IP address assigned to the eth1 network card).
- All other network packets are sent to the router with the IP address of 192.168.1.1 via the eth1 network card.
- *route add default gw 192.168.1.1 eth1* (Now, if any of the routes in the routing table do not match the specified address, then the packet will be forwarded to 192.168.1.1 (the default route).)
- Understanding TCP
- The Transmission Control Protocol (TCP) provides connection-oriented service between two applications exchanging data. The protocol guarantees delivery of data.
- For example, consider accessing a server via a web browser. The user’s computer will resolve the IP address for the web server and connect to the web server via the standard HTTP port 80. After establishing the connection, the client and server processes exchange information about the socket size used to buffer data and the initial sequence number of packets.
- The sequence number mechanism in the header ensures ordered delivery of data. The web server will then service GET requests sent on the HTTP port for web pages. For error control, TCP uses the acknowledgment number in the header. The client sends the acknowledgment number to the server. If the server sends 2000 bytes of data to the client and the client acknowledges only 1000 bytes, then it indicates loss of data. The web server will then retransmit the data.
- Using FTP
- FTP is a protocol that uses TCP for transport and reliable delivery. The ftp command provides the user interface to the standard File Transfer Protocol (FTP). Using the ftp utility, a user can transfer files to and from remote machines. It can be used for UNIX as well as non-UNIX machines. The service is provided by the ftpd daemon, which uses the TCP protocol and listens to the FTP ports specified in the /etc/services file for FTP requests.
- To connect to a particular host via ftp, execute the command: *ftp ftpserverhostname [or IP address]*
- To execute other commands on your local machine while logged in to the FTP server, prefix a command with an ! exclamation point.
- The default file transfer mode for the ftp utility is ASCII, which is used for ordinary text files. To transfer other types of files (i.e. program files, zip files, or tar files, etc.), it is recommended to switch to binary transfer mode. To set the file transfer mode to binary, execute the command: *ftp> bin*
- To download multiple files, use the mget command. If you wanted to download all the \*.tar files in a particular directory from the server to the local machine, execute the following command: *ftp> mget *.tar*
- Using Telnet
- To open a telnet session to the server, execute the following command: *telnet hostname [or IP address]*
- Understanding UDP
- User Datagram Protocol (UDP) provides connectionless service between two applications exchanging data. Unlike TCP, UDP has no error control and does not guarantee the transfer of data.
- UDP sends data without notifying the receiver prior to sending. As a result, it does not offer either ordered or reliable delivery. UDP is like the traditional postal system; you are not notified that a letter will be delivered to your mailbox.
- The header of UDP packets is lightweight as compared to TCP packets since it does not contain sequence or acknowledgment numbers. It uses a simple, optional checksum mechanism for error-checking. UDP is faster than TCP and is used in services such as VoIP, streaming video (Netflix), and DNS (Domain Name Service).
- /etc/services File
- In order to make it easy to distinguish between packets destined for different services, each service is assigned one or more port numbers. If you consider an IP address to be like a street number for an apartment complex, a port number is like a number for a specific residence within the apartment complex.
- The /etc/services file is used for mapping application service names to port numbers. This file exists on each system and can be modified only by the root user. Generally, there is no need to modify this file since most of the services use their own configuration files to determine port numbers; however, this file does provide a good reference for standard port numbers.
- The /etc/services file is queried by programs using the getportbyname() Application Program Interface (API) to determine the port number that they should open a socket connection to. For example, if the finger command is used to do a name lookup for a user on a remote machine, then it executes a getportbyname() API for the finger service and fetches the corresponding port number, which is 79.
- The use of this API is now fairly rare, typically reserved for legacy UNIX services. Most services in modern Linux use separate configuration files to specify the ports that they communicate through. However, the /etc/services file is useful as most default service configuration files will initially have the same port numbers as found by the /etc/services file. In the cases that the numbers are different, it was likely a change made by the local system administrator.
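The name-to-port mapping can be sketched with a small excerpt written in the same format (a hypothetical copy is used here; the real file lives at /etc/services):

```shell
# Excerpt in /etc/services format: "service-name  port/protocol"
printf 'finger\t79/tcp\nhttp\t80/tcp\n' > /tmp/services.demo

# Look up the port for the finger service, as getportbyname() would
awk '$1 == "finger" {print $2}' /tmp/services.demo   # 79/tcp
```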
- Querying DNS Servers
- The host and dig commands are used for DNS (Domain Name System) lookups. A DNS server provides hostname to IP address translation.
- The host command is used to resolve hostnames to IP addresses and IP addresses to hostnames. The utility uses UDP for transport of queries to the servers listed in the /etc/resolv.conf file.
- To find the IP address of a host, execute the following command: *host nadasan.tech*
- To find the DNS servers for a domain, do not specify the host and use the -t option with an argument of ns, and execute the command: *host -t ns nadasan.tech*
- The dig (Domain Information Groper) command is used for troubleshooting the configuration of DNS servers.
- DNS server administrators like the output of the dig command because it is in the same format that the information is entered into a DNS server configuration file.
- The utility performs DNS lookups and displays the responses received from the name servers listed in the /etc/resolv.conf file.
- To view the trace of domain name servers from the servers where the lookup begins, and each name server along the way, execute the following command: *dig +trace example.com*
- For a reverse lookup, using an IP address instead of hostname, execute the following command: *dig -x 192.168.1.2*
Some of the key options of this command are:
| Option | Meaning |
|---|---|
| -f filename | Operate in batch mode by reading a list of lookups to be done from the specified file |
| -p port# | Query the specified port other than the standard DNS port |
| -4 | Use IPv4 query transport |
| -6 | Use IPv6 query transport |
- Understanding ICMP
- The TCP protocol provides an error control mechanism but does not contain information about possible reasons for errors.
- The Internet Control Message Protocol (ICMP) is a diagnostic protocol used to notify about network problems that are causing delivery failures.
- This protocol is considered as a part of the IP protocol, though it is processed differently than normal IP packets. Some of the common types of ICMP messages are:
- Destination Unreachable
- Redirect (i.e., use an alternative router instead of this one)
- Time exceeded (i.e., IP TTL exceeded)
- Source Quench (i.e., host or router is congested)
- Echo Reply/Request (i.e., the ping command)
2.2.19. Network Configuration
- TCP/IP Configuration
- To configure a network port, or interface, on legacy systems, use the ifconfig command
- This command is used for the following functions:
- Assigning static IP address
- Viewing current network configuration
- Setting the netmask
- Setting the broadcast address
- Enable/disable network interfaces
- The ifconfig command can be used without options or arguments to display all interfaces on the network, or with options and an interface name as an argument: *ifconfig [INTERFACE] [OPTIONS]*
- The lo device is referred to as the loopback device. It is a special network device used by the system when sending network-based data to itself.
- To view all the network interfaces on the system, execute the *ifconfig -a* command. Typically, this output won't be any different from the previous ifconfig command unless there are some interfaces that are not currently active.
- To view the details of a specific interface, execute the following command: *ifconfig eth0*
- To assign an IP address to an interface, execute the following command: *ifconfig eth0 192.168.1.3*
- To assign a netmask to an interface, execute the following command: *ifconfig eth0 netmask 255.255.255.192*
- To assign a broadcast address to an interface, execute the following command: *ifconfig eth0 broadcast 192.168.1.63*
- The ifconfig command can be used to enable and disable an interface. To disable (deactivate) a network interface, execute the following command: *ifconfig eth0 down*
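The netmask and broadcast values assigned above are related arithmetically: the broadcast address is the IP address with all host bits set to 1. A minimal sketch of that relationship (the `broadcast` helper function is hypothetical, not part of ifconfig):

```shell
# Hypothetical helper: derive the IPv4 broadcast address from an
# address and netmask, mirroring the ifconfig examples above.
broadcast() {
  local i1 i2 i3 i4 m1 m2 m3 m4
  IFS=. read -r i1 i2 i3 i4 <<< "$1"
  IFS=. read -r m1 m2 m3 m4 <<< "$2"
  # OR each address octet with the inverted mask octet (255 - mask)
  echo "$(( i1 | (255 - m1) )).$(( i2 | (255 - m2) )).$(( i3 | (255 - m3) )).$(( i4 | (255 - m4) ))"
}
broadcast 192.168.1.3 255.255.255.192   # prints 192.168.1.63
```

Note that this reproduces the broadcast address 192.168.1.63 used in the commands above.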
- Setting the Hostname
- The hostname is used to identify the system by applications such as web servers.
- On Debian-derived and modern Red Hat-derived systems, the /etc/hostname file contains this information, while legacy Red Hat-derived systems store this information in the /etc/sysconfig/network file.
- This file is read at boot time to set the hostname.
- The hostname command is used to set and view the system’s host and domain name. It is the system administrator’s responsibility to assign an appropriate hostname.
- It cannot be longer than 64 characters and can contain only alphanumeric characters [a-z] [0-9], the period . character, and the hyphen - character.
- To view the currently assigned hostname of the system, execute the hostname command; use hostname -s for the short name (cut at the first dot) or hostname -f for the fully qualified domain name: *hostname -s*
- To set the hostname of the system, the root user can execute the following command: *hostname example.com*
- Note that setting the hostname using the hostname command results in a change that is only persistent until the next system boot.
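The naming rules above (at most 64 characters; only lowercase letters, digits, periods, and hyphens) can be sketched as a small check. The `valid_hostname` function is a hypothetical illustration, not a system utility:

```shell
# Hypothetical check of a candidate hostname against the rules above:
# max 64 characters, and only a-z, 0-9, '.', and '-' allowed.
valid_hostname() {
  local name=$1
  [ ${#name} -le 64 ] || return 1          # too long
  case $name in
    *[!a-z0-9.-]*) return 1 ;;             # contains a disallowed character
  esac
  return 0
}
valid_hostname example.com && echo valid   # prints valid
```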
The /etc/hosts file is used for mapping hostnames to IP addresses. It is a flat file with one record per line. The format of the file is:
*IP Address    Hostname    Alias*
- A sample /etc/hosts file will look like the following: *127.0.0.1 localhost* *192.168.4.8 apps.sample.com apps* *192.168.4.12 vm1.sample.com vm1*
- The Alias field is used for mapping short names or labels to a host.
- The functionality of the /etc/hosts file has been largely superseded by DNS, but the file is still used in the following situations:
- Bootstrapping: This file is referred to during system startup since the DNS service is not started at this point.
- Isolated Nodes: If a node is not connected to the internet, it is unlikely to use DNS. The /etc/hosts file is useful for such nodes.
- NIS: The records in the hosts file are used as input for the NIS (Network Information Services) database.
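A resolution against a hosts-format file can be sketched with awk. The sample data below is written to a temporary file so the example is self-contained; a real lookup would read /etc/hosts:

```shell
# Sketch: resolve a hostname or alias using a hosts-format file.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1    localhost
192.168.4.8  apps.sample.com apps
192.168.4.12 vm1.sample.com vm1
EOF
lookup() {  # print the IP whose hostname or alias matches $1
  awk -v name="$1" '{ for (i = 2; i <= NF; i++) if ($i == name) print $1 }' "$hosts_file"
}
lookup apps   # prints 192.168.4.8
```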
- Systemd systems use an alternative to the hostname command, the hostnamectl command.
- Similar to the hostname command, the hostnamectl command can also be used to query and set system hostnames, but the hostnamectl command provides additional categories for hostnames: static, pretty, and transient, which are described below:
- Static: A static hostname is limited to [a-z], [0-9], hyphen -, and period . characters (no spaces i.e., localhost or ndg-server). This hostname is stored in the /etc/hostname file. Static hostnames can be set by a user.
- Pretty: Hostname can be in a human-readable format using any valid UTF-8 characters and can include special characters (i.e., Sarah’s Laptop or Joe’s Home PC).
- Transient: The transient hostname is a dynamic hostname usually set by the kernel to localhost by default. A dynamic hostname can be modified if needed. The transient hostname can be modified by DHCP or mDNS at runtime.
- Hostnames can be up to 64 characters, but it is recommended that static and transient hostnames are limited to 7-bit ASCII lowercase characters with no spaces or dots, conforming to strings acceptable as DNS domain names.
- Configuring DNS
- The DNS (Domain Name System) is the mapping table for the internet, allowing any computer or device to access websites, mail servers, etc. by using a name (i.e., google.com, mail.comcast.net) instead of an IP address.
- The DNS implementation is based on a distributed database of network names and IP addresses and query interfaces to retrieve information.
- The /etc/resolv.conf file is the configuration file for DNS resolvers. The information in this file is normally set up by network initialization scripts.
- If DNS servers are like giant phone books of domain names and IP addresses, the /etc/resolv.conf file is used to tell a computer where the phone book is located on the network or internet.
A sample /etc/resolv.conf file looks like the following: *# /etc/resolv.conf* *domain sample.com* *search sample.com* *# central nameserver* *nameserver 191.74.10.12*
*sortlist 191.74.10.0 191.74.40.0*
- The format of the /etc/resolv.conf is: *directive value1, value2…*
The configuration directives used in this file are:
| Directive | Meaning |
|---|---|
| nameserver | IP address of a name server that the resolver will use; a maximum of 3 servers can be listed |
| domain | Domain name to be used locally |
| search | Search list to be used for hostname lookup |
| sortlist | Allows addresses to be sorted; the list is specified by IP addresses and, optionally, a netmask |
| options | Modifies the resolver's internal variables using certain keywords, e.g., attempts:3 sets the retry count for querying the name servers to 3 |
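Extracting the configured name servers from a resolv.conf-style file is a one-line awk filter. The sample below uses a temporary copy of the file shown earlier so the example is self-contained; the real resolver reads /etc/resolv.conf:

```shell
# Sketch: list the nameserver entries from a resolv.conf-style file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
domain sample.com
search sample.com
# central nameserver
nameserver 191.74.10.12
EOF
awk '/^nameserver/ { print $2 }' "$conf"   # prints 191.74.10.12
```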
- Name Service Switch
- The Name Service Switch (NSS) is used by the system administrator to specify which name information source (i.e., local files, LDAP, etc.) to use for different categories (i.e., passwd, shadow, hosts, etc.), and in which order the sources are searched.
- The client applications query the name service database using APIs such as:
- gethostbyname()
- getaddrinfo()
- getnetent()
- The /etc/nsswitch.conf file is used to store the information used for name service switching. It is a text file with columns that contain the following information:
- Database name
- Lookup order of sources
- Actions permitted for the lookup result
- A process that needs to lookup host or network-related information will refer to the configuration for the required database in this file.
- A sample portion of an /etc/nsswitch.conf file will look like the following: *passwd: compat* *group: compat*
*…* *hosts: files dns* *networks: files*
- The first column contains the database name, followed by the services to be queried in the order of their occurrence in the file.
- For example, the current sample of the /etc/nsswitch.conf file demonstrates the hosts database services configured like the following: *hosts: files dns*
- When a hostname lookup is performed, the files entry will make use of the /etc/hosts file to perform the resolving. If the query does not return any results, then the query will be sent to the DNS resolver.
- By changing the order of the name services listed for a particular database, like hosts, the administrator could change whether the local /etc/hosts file is consulted before or after the DNS servers listed in /etc/resolv.conf: *hosts: dns files*
- In the event of the query not returning any results, specific actions can also be mentioned in the /etc/nsswitch.conf file: *hosts: dns [NOTFOUND=return] files*
- In the above example, the DNS resolver will try to resolve the hostname. If a match is not found, then the resolver will immediately return the NOTFOUND status and the /etc/hosts file will not be queried. The /etc/hosts file will only be queried if the DNS resolver service itself is unavailable for some reason.
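The lookup order for a given database can be read straight out of an nsswitch.conf-style file. The sample below is written to a temporary file so the example is self-contained; a real check would read /etc/nsswitch.conf:

```shell
# Sketch: show the source order configured for the hosts database.
nss=$(mktemp)
cat > "$nss" <<'EOF'
passwd: compat
group:  compat
hosts:  files dns
networks: files
EOF
# Print the sources for the "hosts" database; xargs trims whitespace.
awk -F: '$1 == "hosts" { print $2 }' "$nss" | xargs   # prints: files dns
```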
- Configuring Routing Tables
- As discussed in the previous chapter, routing tables are used by the kernel to store information about how to reach a network directly or indirectly. The route command is used to view, as well as update, the IP routing table.
- Any system using the TCP/IP protocol to send network packets will have a routing table. The routing function is managed by the IP layer. The routing table will decide the forwarding IP address for the packet.
- Static routes in the kernel’s routing table can be set using the route command (note, as shown in the previous chapter, the ip command can also display and modify routes).
- For instance, to be able to reach the 192.56.78.0/255.255.255.0 network, a router like 192.168.1.1 could be used, if that machine is connected to both the 192.168.1.0 network and the 192.56.78.0 network (it would likely have two network interfaces).
- To add this route to the eth0 interface, the administrator could execute the following sudo command (or as root): *sudo route add -net 192.56.78.0 netmask 255.255.255.0 gw 192.168.1.1 dev eth0*
- To add a default gateway, execute the following command: *route add default gw 192.168.1.1*
- To verify connectivity through the new route, execute the ping command, first confirming that the gateway itself responds (and then, ideally, an address on the newly accessible network): *ping 192.168.1.1*
- If a setup is required where a particular host is blocked when packets are routed, then execute the following command: *route add -host 192.168.1.62 reject*
- This will make the specified host unreachable.
- To delete a route from the routing table, an administrator can execute a command like the one that added it, except using del instead of add: *route del -net 192.56.78.0 netmask 255.255.255.0 gw 192.168.1.1 dev eth0* *route del default gw 192.168.1.1*
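The legacy route commands above take dotted netmasks, while the replacement ip route commands (mentioned earlier) take CIDR prefix lengths. A hypothetical converter between the two notations:

```shell
# Hypothetical helper: convert a dotted netmask to the CIDR prefix
# length that 'ip route' commands expect (e.g. 255.255.255.0 -> 24).
mask_to_prefix() {
  local bits=0 octet a b c d
  IFS=. read -r a b c d <<< "$1"
  for octet in $a $b $c $d; do
    while [ "$octet" -gt 0 ]; do        # count the set bits per octet
      bits=$(( bits + (octet & 1) ))
      octet=$(( octet >> 1 ))
    done
  done
  echo "$bits"
}
mask_to_prefix 255.255.255.0    # prints 24
# Equivalent iproute2 form of the route added above (not executed here):
# ip route add 192.56.78.0/24 via 192.168.1.1 dev eth0
```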
- Network Interface Configuration
- The term network interface refers to the point of connection between a computer and a network. It can be implemented in either hardware or software.
- The network interface card (NIC) is an example of the hardware interface, while the loopback interface (127.0.0.1) is an example of the software interface.
- The Linux system comes with default drivers for the general network interfaces. If the NIC (Network Interface Card) can be loaded using the default driver, then it will be detected during initialization. If the NIC is not supported by the default driver, then the driver will have to be loaded into the kernel before the card can be used.
- For example, by performing some research, the administrator has determined that the driver or kernel module needed for a network interface is called veth. To manually load this driver, execute the following command with sudo (or as root): *sudo modprobe veth*
- To view information about the driver, the list hardware lshw command can be used: *lshw -c network | grep veth*
- To verify if the driver has been loaded correctly, execute the following command: *lsmod | grep veth*
- If the details of the driver are shown, then the driver has been successfully installed.
- To temporarily assign the IP address 192.168.10.12 to the eth1 device, execute the following command: *ifconfig eth1 192.168.10.12 netmask 255.255.255.0*
- The UP status indicates that the interface has been enabled and the RUNNING status indicates that the configuration of the interface is complete and it is operational. The RX (receive) and TX (transmit) packet counts have increased, which indicate that network traffic is being routed using the eth1 interface.
Some of the fields in the output significant for analyzing network errors are:
| Field | Meaning |
|---|---|
| RX errors | Number of received packets which were damaged |
| RX dropped | Number of packets dropped due to reception errors |
| RX overruns | Number of received packets which experienced data overrun |
| RX frame | Number of received packets which experienced frame errors |
| TX errors | Number of packets which experienced data transmission errors |
| TX dropped | Number of packets dropped due to transmission errors |
| TX overruns | Number of transmitted packets which experienced data overrun |
| TX carrier | Number of transmitted packets which experienced loss of carrier |
| TX collisions | Number of transmitted packets which experienced Ethernet collisions, possibly due to network congestion |
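These counters can be pulled out of captured ifconfig output with grep. The sample output below is illustrative text written to a temporary file so the example is self-contained:

```shell
# Sketch: extract the RX error counter from captured ifconfig output.
out=$(mktemp)
cat > "$out" <<'EOF'
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        RX errors 0  dropped 0  overruns 0  frame 0
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
EOF
grep -o 'RX errors [0-9]*' "$out"   # prints: RX errors 0
```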
- Red Hat Interface Configuration
- On a legacy Red Hat-derived system, the /etc/sysconfig/network file contains host and routing details for all configured network interfaces. A sample file would look like the following: *NETWORKING=yes* *HOSTNAME=gsource1.localdomain* *GATEWAY=192.168.122.1*
- For each network interface, there is a corresponding interface configuration script file /etc/sysconfig/network-scripts/ifcfg-<interface-name>.
- A network interface can have its settings automatically assigned via a DHCP (Dynamic Host Configuration Protocol) server or statically assigned within this file. Any text following a # is considered a comment and is used for documentation.
- Any GATEWAY specified in an interface configuration file would override the GATEWAY specified in the /etc/sysconfig/network file.
- A sample file /etc/sysconfig/network-scripts/ifcfg-eth0 for the eth0 device where the interface is configured automatically via DHCP would look like the following: *DEVICE="eth0" # name of the device* *NMCONTROLLED="no" # device is not NetworkManager managed* *ONBOOT=yes # activate interface automatically* *TYPE=Ethernet # type of interface* *BOOTPROTO=dhcp # use DHCP to configure interface*
- On a Red Hat-derived system, a static configuration of the /etc/sysconfig/network-scripts/ifcfg-eth0 file would look like the following: *DEVICE="eth0" # name of the device* *NMCONTROLLED="no" # device is not NetworkManager managed* *ONBOOT=yes # activate interface automatically* *TYPE=Ethernet # type of interface*
*BOOTPROTO=none # use static configuration* *IPADDR=192.168.0.3 # set the IP address* *NETMASK=255.255.255.0 # set the subnet mask* *GATEWAY=192.168.0.1 # set the default router* *DNS1=192.168.0.254 # set the primary DNS server*
- Debian Interface Configuration
- For modern Debian-derived systems, the /etc/netplan directory contains the YAML files used to configure the interfaces.
- A sample .yaml (YAML Ain't Markup Language) human-readable file for an interface that uses DHCP for address configuration would look like the following (indentation is significant in YAML):
*# This file describes the network interfaces available on your system*
*# For more information, see netplan(5).*
*network:*
*  version: 2*
*  renderer: networkd*
*  ethernets:*
*    ens3:*
*      dhcp4: yes*
- A sample netplan file for using static addresses would look like the following:
*network:*
*  version: 2*
*  renderer: networkd*
*  ethernets:*
*    eth0:*
*      addresses:*
*        - 10.10.10.2/24*
*      gateway4: 10.10.10.1*
*      nameservers:*
*        search: [mydomain, otherdomain]*
*        addresses: [10.10.10.1, 1.1.1.1]*
- NetworkManager
- Originally developed by Red Hat, NetworkManager provides automatic detection and configuration of network interfaces on a Linux system. It works for both wired and wireless interfaces as well as having support for some modems and Virtual Private Network (VPN) connections.
- *nmcli [OPTIONS] OBJECT [COMMAND][ARGUMENTS…]*
- The OPTIONS for the nmcli command can be found by visiting the nmcli man page or by using the nmcli --help command. Commonly used options include the terse -t option, which displays concise output, and the pretty -p option, which makes the output easily readable by printing headers and aligning values.
The OBJECT field can be one of the following:
| Object | Meaning |
|---|---|
| general | Display information about or modify the status of NetworkManager |
| networking | Display information about or modify the network managed by NetworkManager |
| connection | Display information about or modify connections managed by NetworkManager |
| device | Display information about or modify devices managed by NetworkManager |
| radio | Display the status of, and enable or disable, the radio switches |

- The nmcli command can also be used to create a new connection. To add a connection, the following syntax can be used: *nmcli con add {OPTIONS} [IP]/[NETMASK] [GATEWAY]*
For example, to add a connection named eth1, defining the connection as Ethernet and specifying the IP address, network mask, and gateway, the following command can be used: *nmcli con add con-name eth1 ifname eth1 type ethernet ip4 10.0.2.18/24 gw4 10.0.2.2*
| Option | Meaning |
|---|---|
| con-name | Specifies the name of the network connection; in the example above, the con-name eth1 option adds a connection named eth1 |
| ifname | Name of the interface (device) that is used for the connection |
| type | Specifies the connection type; various types of connections exist, such as ethernet, wifi, vlan, bridge, and more |
| ip4 | Specifies an IPv4 address and netmask for the connection |
| gw4 | Specifies the gateway used for the connection |

- The con show command can be used to view the new connection: *nmcli -p con show eth1*
- Wireless Interfaces
- To enable wifi support use: *nmcli radio wifi on*
- To scan for networks, use: *nmcli dev wifi list*
- To connect to an SSID, use: *nmcli dev wifi connect <SSID> password "12345"*
- iproute2 tools
- It is important to mention that many of the tools used in this module, such as the ifconfig command, are being phased out in newer Linux distributions in favor of the iproute2 suite of tools.
The table below shows some of the configuration and troubleshooting utilities and the ip commands that replace them:
| Legacy net-tools | Replacement iproute2 Commands | Usage |
|---|---|---|
| ifconfig | ip address, ip link, ip -s | Configure addresses and links |
| route | ip route | Manage routing tables |
| arp | ip neigh | Display and manage neighbors (hosts that share the same link) |
| iptunnel | ip tunnel | Manage tunnels (shared communication channel between networks) |
| nameif | ifrename, ip link set name | Manage network interface names |
| ipmaddr | ip maddr | Manage multicast (group of hosts on a network) |
| netstat | ip -s, ss, ip route | Display network information |
- systemd-networkd
- A great benefit to using modern Linux systems running systemd is the systemd-networkd system daemon. This background program detects and manages network configurations, automatically configuring devices as they appear, such as when a USB Ethernet connector is plugged in, or a WiFi radio is turned on. The systemd-networkd daemon is also useful for creating virtual devices such as the devices used with containers and other cloud objects.
- The systemd-networkd daemon functions through configuration files, which reside in the /usr/lib/systemd/network/, /run/systemd/network, and /etc/systemd/network directories. Like other systemd configuration files, there are numerous options available for administrators to specify how devices should be configured on startup.
- Name resolution services on systems that use systemd are handled by systemd-resolved. This systemd service tells local applications where to find domain name information on a network. The systemd-resolved service can operate in four different modes, which are:
- Using a systemd DNS stub file located at /run/systemd/resolve/stub-resolv.conf.
- Preserving the legacy resolv.conf file we learned about earlier in this chapter.
- Automatic configuration with a network manager.
- Manual, or local DNS stub mode where alternate DNS servers are provided in the resolved.conf file.
- It is important to understand how resolv.conf and systemd-resolved interact with each other to ensure proper DNS configuration. The systemd-resolved system service creates its own DNS/DNSSEC stub resolver that local applications can use for network name resolution. It also reads the data in /etc/resolv.conf to discover other DNS servers configured on the system. This compatibility function only works directly on the /etc/resolv.conf file, not on symlinks.
- The systemd-resolved service provides a tool called resolvectl, which can be used for resolving domain names, IPv4 and IPv6 addresses, and DNS resource records and services. The syntax below shows how to use the resolvectl command: *resolvectl [OPTIONS…] {COMMAND} [NAME…]*
2.2.20. Network Troubleshooting
- Using netstat
- The netstat command is used by the system administrator to monitor the traffic on the network, and check connections that are not trustworthy.
- While administrators should be familiar with the netstat command, it should be noted that it is a legacy command that is being phased out as new systems come online. The ss command, covered in the next section, should be used in most cases.
- To list all ports, execute the following netstat command: *netstat*
- The output of the previous command lists all ports, including those that are not currently being used. Those ports that are currently being used (active) are marked with the state LISTEN. To view only the listening ports, execute the *netstat -l* command.
- To display a summary of details for each protocol, execute the following command: *netstat -s*
- To view the kernel's routing table, execute the *netstat -r* command:
- To view the details of specific interfaces, use the interface -i option when executing the netstat command: *netstat -i*
- If you include the -c option, the netstat command will display the interface information continuously, refreshing after an interval of one (1) second. This is useful to watch the activity on the interfaces over a period of time.
- The netstat command is also commonly used to display open ports. A port is a unique number that is associated with a service provided by a host. If the port is open, then the service is available for other hosts.
- For example, you can log into a host from another host using the SSH service. The SSH service is assigned port #22. So, if port #22 is open, then the service is available to other hosts.
- To see a list of all currently open ports, use the following command: *netstat -tln*
- In the previous example, -t stands for TCP (recall this protocol from earlier in this course), -l stands for listening (which ports are listening) and -n stands for show numbers, not names.
- Sometimes showing the names can be more useful; this can be achieved by leaving out the -n option.
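The listening ports can be pulled out of `netstat -tln` output with a short awk filter. The sample output below is illustrative text written to a temporary file so the example is self-contained:

```shell
# Sketch: extract the listening port numbers from captured
# 'netstat -tln' output.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN
EOF
# Take the last ':'-separated field of the Local Address column.
awk '$NF == "LISTEN" { n = split($4, a, ":"); print a[n] }' "$sample"
```

For the sample above this prints the port numbers 22 (SSH) and 631 (CUPS printing).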
- Using ss
- The ss command is designed to show socket statistics and supports all the major packet and socket types. Meant to be a replacement for and to be similar in function to the netstat command, it also shows a lot more information and has more features.
- A network socket is a communication endpoint between nodes (devices) on a network. Sockets use a socket address to receive incoming network traffic and forward it to a process on a machine or device. The socket address commonly consists of the IP address of the node that it is “attached” to and a port number.
- The main reason a user would use the ss command is to view what connections are currently established between their local machine and remote machines, statistics about those connections, etc.
- To use the ss command, follow the syntax below: *ss [options] [FILTER]*
- Similar to the netstat command, you can get a great deal of useful information from the ss command just by itself.
The output is very similar to the output of the netstat command with no options. The columns above are:
| Column | Meaning |
|---|---|
| Netid | The socket type and transport protocol |
| State | Connected or Unconnected, depending on protocol |
| Recv-Q | Amount of received data queued up for processing |
| Send-Q | Amount of data queued up to be sent to another host |
| Local Address | The address and port of the local host's portion of the connection |
| Peer Address | The address and port of the remote host's portion of the connection |

- The format of the output of the ss command can change dramatically depending on the options specified. For example, the -s option displays a summary of socket types, statistics about their existence, and the numbers of packets sent and received via each socket type.
- One common use for the ss command is determining which ports an interface is listening on.
- By using the -l option to list only those ports which are listening, and the -t option to show only TCP ports, you can quickly determine which ports are available for TCP communications.
- When troubleshooting UDP dependent services like DNS name resolution or some video streaming applications, you can use the -u option to display just the UDP sockets available. The example below only lists UDP sockets which are currently listening.
- Using ip
- The ifconfig command is becoming obsolete (deprecated) in some Linux distributions and is being replaced with a form of the ip command, specifically ip address.
- The ip command differs from ifconfig in several important ways, chiefly that through its increased functionality and set of options, it can almost be a one-stop-shop for configuration and control of a system’s networking.
- The format for the ip command is as follows: *ip [OPTIONS] OBJECT COMMAND*
- While ifconfig is limited primarily to the modification of networking parameters, and displaying the configuration details of networking components, the ip command branches out to do some of the work of several other legacy commands such as route and arp.
- The ip command can initially appear to be a little more verbose than the ifconfig command, but it’s a matter of phrasing and a result of the philosophy behind the operation of the ip command.
- Another useful option with the ip command is the -statistics or -s option, which shows statistics for the object referenced in the command.
- Using ping
- The count -c option stops after sending n packets as seen in the example below, which sends 5 packets (-c 5) and then stops
Some of the key options of the ping command are:
| Option | Meaning |
|---|---|
| -c count | Stop after sending count ECHO_REQUEST packets |
| -s packetsize | Specifies the number of data bytes to be sent |
| -t ttl | Sets the IP Time to Live |
| -w timeout | Sets the timeout in seconds for ping to exit |

- The ping6 command is similar to the ping command, but it uses ICMPv6 ECHO_REQUEST to verify network connectivity.
- The ping6 command can use either a hostname or an IPv6 address to request a response from remote systems. Similar to the ping command, the ping6 command will continue pinging until Ctrl+C is typed in the terminal.
- Using traceroute
- The traceroute command is used to trace the route of packets to a specified host.
- This utility uses the IP header's TTL field and tries to fetch an ICMP TIME_EXCEEDED response from each router on the path to the host.
- The probing is done by sending probe packets with small TTL values and then checking for the ICMP TIME_EXCEEDED responses.
- Network administrators use this command to test and isolate network problems. Some probe methods may require root privileges.
- To trace the route to a particular host, execute the following command: *traceroute example.com*
Some of the key options of the traceroute command are:
| Option | Meaning |
|---|---|
| -T | Probe using TCP SYN |
| -f first_ttl | Specifies the initial TTL value |
| -m max_ttl | Specifies the maximum number of hops to be probed |
| -w timeout | Sets the timeout in seconds to wait for a response to a probe |

- The traceroute command, commonly used to see how a transmission travels from a local host to a remote system, can also be used for IPv6 connections.
- To use the traceroute6 command, which is the same as traceroute -6 to view the IPv6 path to ipv6.google.com execute the following command: *traceroute6 ipv6.google.com*
- Using tracepath
- The tracepath command is used to trace the path to a network host, discovering MTU (maximum transmission unit) along the path.
- The functionality is similar to traceroute. It sends ICMP and UDP messages of various sizes to find the MTU size on the path.
- Using UDP messages to trace the path can be useful when routers are configured to filter ICMP traffic.
- To trace the path to a host, execute the following command: *tracepath netdevgroup.com*
- Much like the traceroute6 command, the tracepath6 command can also be used to determine what route communications travel between local and remote systems.
- The tracepath and tracepath6 commands use the sockets API to map out paths, which can be useful when routers are configured to filter out ICMP traffic.
- Using ethtool
- The ethtool utility is useful for configuring and troubleshooting network devices such as Ethernet cards and their device drivers.
- *ethtool [OPTION…] devname*
- In the example below, the ethtool command is used with the -i or --driver option to show the first twenty lines of driver information for Ethernet device ens3: *ethtool -i ens3 | head -n 20*
- The ethtool command can also be used to display other useful troubleshooting information, such as the speed of an interface.
- First, an administrator could determine the name of the interface by using the ifconfig or ip address command, then the following command can be executed to determine the speed of that interface: *ethtool ens3 | grep Speed*
- Using ip neighbor
- One of the most useful things to know when troubleshooting networks is what machines are on the same network segment as you.
- The ip neighbor command, part of the iproute2 command suite, is used to add, change, or replace entries in the neighbor tables, also known as ARP (Address Resolution Protocol) cache tables.
- To display the ARP cache on a specific interface, use the ip neighbor show command with the interface name.
- *ip [OPTION…] neighbor command*
- The following command will display the contents of the local ARP cache entries (IP addresses that have been resolved to MAC addresses accessible on the network) for the ens3 network interface: *ip neighbor show dev ens3*
- Using ip link
- The ip link command, introduced as part of the iproute2 tools, replaces the ifconfig command discussed in a previous chapter.
- It is useful for network troubleshooting at the Data-Link (OSI Layer 2) level. The ip link command is used to display and manage network interfaces.
- *ip link { COMMAND | help }*
- The ip link command executed by itself will display all interfaces on the network and their state: *ip link*
- If you are trying to determine the status of previously configured interfaces, the ip link show command is one tool for doing so.
- To display information about a specific pre-configured interface, use the ip link show command followed by the interface name.
- Including the -br (–brief) option only prints basic information formatted in a tabular output that is easier to read.
- Using netcat
- The netcat utility is one of the more useful tools available for troubleshooting network issues.
- It is a cross-platform tool; therefore, it can be used on Windows and Mac computers as well as Linux and has many features for monitoring and debugging network connections.
- Some uses are transferring data, acting as a network proxy, and scanning for open ports.
- *netcat [-options] hostname port[s] [ports] …*
- The netcat command can also be used in the short form, which is nc.
- To demonstrate using the netcat utility to find ports, the following example will scan for open ports on the local interface 192.168.1.2, using the netcat command with the -z option, which tells it to only scan for open ports without sending any data to them, as well as with the -v option for verbose output.
- The command below will scan ports 20 through 25 on the 192.168.1.2 interface: *netcat -z -v 192.168.1.2 20-25*
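The port range in that scan expands to one probe per port. A dry-run sketch that prints the equivalent per-port commands without executing them (the interface address 192.168.1.2 is the example value from above):

```shell
# Dry-run sketch: list the individual port checks that the
# 'netcat -z -v 192.168.1.2 20-25' scan above performs.
cmds=$(for port in $(seq 20 25); do
  echo "netcat -z -v 192.168.1.2 $port"
done)
echo "$cmds"
```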
- The netcat command can also be used to create a communication socket between computers. Given two computers and the knowledge of one of their IP addresses, use the following commands to initiate a TCP session on port 23: *netcat -l 23*
- With knowledge of the IP address of the first computer, type the following on the second computer: *netcat 192.168.1.2 23*
- Now, anything typed on either computer will appear on both. Congratulations, you have just simulated a Telnet session!
- Troubleshooting Network Interfaces
- When troubleshooting a network interface, it is important to know how to verify network connectivity systematically. By using the Open Systems Interconnection (OSI) model as a reference, you will be able to test interface connectivity, network addressing, gateways, routing, DNS, and more.
The following sections will cover tools available for testing connectivity and modifying network interfaces.
Layer            Purpose
7. Application   User interface, Application Programming Interface (API)
6. Presentation  Data representation, encryption
5. Session       Controls connections between hosts, maintains ports and sessions
4. Transport     Uses transmission protocols to transmit data (TCP, UDP)
3. Network       Determines path of data, IP
2. Data Link     Physical addressing (MAC), delivery of frames (Protocol Data Units (PDU))
1. Physical      Transmits raw data between physical media
- Physical Layer
- The first questions that an administrator would need to answer when determining network connectivity are: “Is the device on?”, “Is my network card detected?”, and “Is the network card connected?” An example of this would be a wireless switch on a laptop.
- No amount of Bash commands can turn the wireless switch on, but you can test for it by using the following command: *ip link*
- The highlighted NO-CARRIER message in the output above indicates that the interface is not connected to a network. In some cases, the output may not even contain the interface at all.
- In this case, the eth0 interface does not detect a carrier, but the wlan0 interface is working fine. This is typical of a laptop that has both interfaces available. The ifconfig and ip address commands will also display this information.
- It is possible that the device does not detect a network card or has not loaded a kernel driver for it. To verify that a network card is detected, use the lspci command: *lspci | grep Ethernet*
- In the event that the device does not have a PCI bus (e.g., the Raspberry Pi™), or there is a USB to Ethernet converter installed, the *lsusb* and *lsmod* commands may be useful.
- Wireless devices may not show any network hardware using either *lspci* or *lsusb*. For instance, the Raspberry Pi™ typically has the *rfkill* module inserted in the Linux kernel. The output of this module can be searched to determine the network driver, in this case, cfg80211:
- After verifying that hardware switches are on and your drivers are installed correctly, if your computer is still experiencing OSI layer 1 connectivity issues, and there is no carrier, check your power, wiring, or the other end of the connection.
- Although it is beyond the scope of this course, be aware that the *iwconfig* and *iwlist* commands can be used to determine wireless connectivity.
- Data-Link Layer
- The data-link layer of the OSI model defines the interface to the physical layer. It also monitors and corrects for errors that may occur in the physical layer by using a frame check sequence (FCS).
- The next question that an administrator might ask is, “Does this computer see any devices on the network?” As part of its function between the physical and network layers, the data-link layer keeps a table of IP address to MAC address translations.
- This is called the address resolution protocol (ARP) table. The ip neighbor command displays a list of translations
- From the output above, an administrator can input the MAC addresses into the Wireshark™ organizational unique identifier (OUI) tool to determine the manufacturer of the network card:
- This information can be useful in determining if a particular device is found on the network.
- The *ethtool* command is also useful for determining connectivity at the data-link layer along with link connection speed, duplex, and other details.
- Network Layer
- The network layer of the OSI model performs network routing functions, defines logical addresses, and uses a hierarchical addressing scheme. Various protocols specify packet structure and processing used to carry data from host to host.
- A network administrator may ask such questions as: “Does this device have an IP address?” and “Is the gateway address set on this device?” Use the *ifconfig* or *ip address* command to determine the various addresses assigned to an interface.
- Routing Table Testing
- When checking network connectivity, ensure that your system can get to the assigned gateway. The network gateway, as defined in your network interface configuration, is the “first hop” or the first place your computer will go to when looking for resources beyond the local network.
- Use the ping command to determine connectivity to the gateway IP
- The gateway should be configured to route network traffic out from the local network and on to the next router, which can direct your communication towards its ultimate destination.
- The iproute2 suite of tools makes it possible to test the routing table with just a few commands. First, print the current list of routes available to the system with the *ip route show* command
- By default, the command above will print the main routing table; other routing tables can be displayed by using the table parameter.
*ip route show*
*default via 192.168.141.1 dev wlp2s0 proto dhcp metric 600*
*169.254.0.0/16 dev wlp2s0 scope link metric 1000*
*192.168.141.0/24 dev wlp2s0 proto kernel scope link src 192.168.141.187 metric 600*
- The above example shows the local interface, wlp2s0, as having IP address 192.168.141.187, with 192.168.141.1 as the default gateway. By testing that our computer can route from its own address, 192.168.141.187, to the outside network 169.254.0.0/16, we can verify that the route from our local machine outward is working.
- The addresses to test are specified with the get, to, and from modifiers.
*ip route get to 169.254.0.0/16 from 192.168.141.187*
*169.254.0.0 from 192.168.141.187 dev wlp2s0 uid 1000* *cache*
- Reaching Other Networks
- Once connectivity to the gateway has been confirmed, use the *tracepath* or *traceroute* command to verify that the system can reach beyond the local network, using a well-known IP address.
- Domain Name Service
- Beyond network connectivity, uniform resource locator (URL) addresses need to be resolved to IP addresses using DNS.
- Both the nslookup and host commands can be useful when determining if DNS is working properly.
- Firewalls
- Firewalls are beyond the scope of the curriculum; however, they can interfere with network connectivity. Windows™ clients, in particular, do not respond to pings by default, based on their firewall settings.
- Linux uses iptables to manage network traffic, which we will not cover in this curriculum. There is an easy-to-use tool that can be installed called Uncomplicated Firewall (UFW).
- Once connectivity has been reestablished, to verify the status of the firewall, use the *ufw* command and make changes as needed: *ufw status*
- The gufw command is the graphical equivalent
- Details of the firewall logs can be found in the /var/log/ufw.log file.
- When using a Red Hat-based distribution, firewalld may be blocking access. Verify that it is running using systemctl: *systemctl status firewalld | grep Active*
- Starting and Stopping Network Interfaces
- The *ifup* and *ifdown* legacy commands are used to bring up and bring down a network interface, respectively.
- For example, assume there are two interfaces, eth0 and eth1, configured on a system. If a test run needs to be performed using eth0 in isolation, bring down the eth1 device by executing the following command as the root user: *ifdown eth1*
- To enable the eth1 device, execute the following command as the root user: *ifup eth1*
- It is also necessary to bring down a network device before assigning an IP address to the device. For example, if the IP address of the eth1 device has to be changed, then the steps to be followed are as follows:
- Bring down eth1 using the ifdown command.
- Use the ifconfig command to assign the new IP address to eth1.
- Use the ifconfig command to view the updated IP address.
- Bring up eth1 again using the ifup command.
- The ip command can also be used to turn the interfaces on and off:
- *ip link set enx000ec6a415ca down*
- *ip link show enx000ec6a415ca*
- *ip link set enx000ec6a415ca up*
- Deleting Network Interfaces
- The network interface can be temporarily disabled by using the ifdown command as follows: *ifdown eth1*
To make the change permanent, the configuration file for the corresponding interface should be deleted. For example, if the NIC for the eth1 interface has been removed from the system and installed in another system, then the network configuration on the original machine should reflect this change.
/Red Hat-derived/
On a Red Hat-derived system, the /etc/sysconfig/network-scripts/ifcfg-eth1 file should be moved to another directory or deleted, and the network service should be restarted using the following command: */etc/init.d/network restart*
/Debian-derived/
- On a Debian-derived system, the /etc/network/interfaces file should be updated, and any references to the eth1 interface should be commented out or removed.
- The following is a sample /etc/network/interfaces file including eth1 references:
*auto lo #automatically activates lo*
*iface lo inet loopback #lo with 127.0.0.1 address*
*iface enp4s0eth0 inet dhcp #enp4s0eth0 with DHCP configuration*
*iface eth1 inet dhcp #eth1 with DHCP configuration*
- This is a sample /etc/network/interfaces file with the eth1 references commented out with the hash # character:
*auto lo #automatically activates lo*
*iface lo inet loopback #lo with 127.0.0.1 address*
*iface enp4s0eth0 inet dhcp #enp4s0eth0 with DHCP configuration*
*#iface eth1 inet dhcp #eth1 with DHCP configuration*
- Transport Layer
- The transport layer of the OSI model performs transparent transfer of data between end users. It is responsible for error recovery and flow control and ensures complete data transfer.
- After establishing full network connectivity, a network administrator may want to know if a service on their server is running and if it can be reached.
- To find out if the service is running, we can examine the open ports using the socket statistics ss command: *ss -tl4*
- Remaining Layers
- The session, presentation, and application layers of the OSI model are all handled by software.
- Once all other problems have been eliminated, a network administrator might want to examine various program settings. There could be non-standard port configurations in service configuration files or proxy settings could be incorrect for the system
- To get a good look at the inner workings of network communications, capture some data using *tcpdump* or Wireshark—both are beyond the scope of this curriculum.
2.2.21. Account Security
- Understanding SUID/SGID
- For Linux systems, the file is the most basic and important unit, making file-level permissions critical from the system security perspective. The file system stores the owner UID (User ID) and group GID (Group ID) of the file in the inode, a location in the file system that is used to store data associated with files.
- There are different types of UIDs supported by Linux to facilitate user management:
- *Real User ID:* The ID assigned by the system when a user logs in. All processes which are started by the user account will inherit the user’s real user ID. The real user ID can be displayed with the *id -u -r* command.
- *Effective User ID:* The ID used by the system to determine the level of access the current process has. The setuid permission and su command, both discussed below, change the effective user ID of a program, providing a means for a user to run a program or access a file as another user without having to log off and log in to another user account. The effective user ID is displayed with the id command.
- *Saved User ID:* A placeholder used for switching back and forth between real and effective UIDs. It is used when a program running with elevated privileges needs to do some unprivileged work temporarily; it changes its effective user ID from a privileged value (typically root) to some unprivileged one.
- Normally, when a user accesses or executes a file, the user’s real UID and GID are used to determine the level of access when executing the procedure.
- When an executable file (a program) has the SUID (Set User ID) (also referred to as setuid) permission set, the UID of the file's owner becomes the effective user ID used to determine access and execute the program.
- Likewise, when a file has the SGID (Set Group ID) (also referred to as setgid) permission, the GID of the file's group owner is used as the effective group ID to determine file access and execute the program.
- If a file has the setuid set on an executable file, the output of the ls -l command for the file will display an s in the user execute column.
- If the setuid was not set, a regular user would not be able to use the passwd command to write the new password to the /etc/shadow file since the file owner is root.
- The setuid allows, in this case, a regular user to temporarily become root so the new password can be written to the shadow file.
- Once the passwd command is finished executing, the effective user id changes from root back to the real user id of the user.
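The s indicator can be reproduced safely on a scratch file (the file here is an empty temp file, not a real program):

```shell
# Create a scratch file and add the setuid bit to rwxr-xr-x permissions
f=$(mktemp)
chmod 4755 "$f"

# The user execute column now shows 's' instead of 'x': -rwsr-xr-x
perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"

rm "$f"
```

On most distributions, *ls -l /usr/bin/passwd* shows the same s bit on the genuine setuid binary.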
- Auditing SUID/SGID Files
- Since SUID/SGID files make a system vulnerable, the system administrator needs to keep track of these files. To find files which have the setuid bit set, execute the following command: *find / -type f -perm -4000 2>/dev/null | less*
- In the example above, the find command is used with the -type and -perm options to look for files with the setuid permission (in octal notation) set in the root (/) folder.
- To find files which have the setgid bit set, execute the following command: *find / -type f -perm -2000 2>/dev/null | less*
- If the administrator wants to locate either SUID or SGID files, then a logical OR condition (*-o*) can be added to the *find* command. If the administrator wants to differentiate the SUID or SGID type files, then the *-ls* output option can be added to show a detailed listing of the files; the OR condition should be grouped in escaped parentheses so that *-ls* applies to both branches: *find / \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null | less*
- Redirect the output of this find command to a file to create a baseline to audit changes in the SUID or SGID files. For example, after completing a fresh installation from verified installation media, the administrator could redirect the output of the list of files with setuid or setgid permission to the /root/special.perm (user-created) file by executing the following command: *find / \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null > ~/special.perm*
- In the future, you could run the find command again and compare the results to the /root/special.perm file to see if any new files have either the setuid or setgid bit set.
- *find / \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null > ~/current.perm; diff ~/special.perm ~/current.perm*
- Any output from the diff command would represent either a new file that was added with SUID or SGID permissions set, or an existing file that now has either setuid or setgid permission set that did not originally. Although it could represent an attempt at trying to compromise the system, the file could have also been recently added to the system through the valid installation of a software package.
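The baseline-and-diff audit can be rehearsed in a scratch directory; all of the file names below are invented for the demonstration:

```shell
# Scratch directory standing in for the filesystem root
dir=$(mktemp -d)
touch "$dir/suid_prog" "$dir/sgid_prog" "$dir/plain"
chmod 4755 "$dir/suid_prog"   # setuid
chmod 2755 "$dir/sgid_prog"   # setgid

# Baseline: record every SUID or SGID file (parentheses group the OR test)
find "$dir" -type f \( -perm -4000 -o -perm -2000 \) | sort > "$dir/baseline.txt"

# Later, a new setuid file appears...
touch "$dir/new_suid"
chmod 4755 "$dir/new_suid"
find "$dir" -type f \( -perm -4000 -o -perm -2000 \) | sort > "$dir/current.txt"

# ...and diff exposes it (diff exits non-zero when the files differ)
diff "$dir/baseline.txt" "$dir/current.txt" || true
```

Sorting the output makes the two listings comparable even if find traverses the tree in a different order on each run.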
- Configuring sudo
- The superuser do sudo utility allows a user to execute a single program or command as the root user or another user without knowing their password or remaining logged in as that user, thus improving security. This utility is commonly used to execute programs that require root privileges.
- The /etc/sudoers file is used to configure sudo and define security policy for its use.
- For example, you can enable specific commands for specific users and groups from specific computers and specify if a password is required.
- By default, the sudo command remembers the password for 15 minutes, allowing you to execute multiple commands with sudo in quick succession. After 15 minutes, the user is prompted for the password again.
- This time limit can be changed by the administrator as per the security policy by changing a setting in the /etc/sudoers file.
- The /etc/sudoers file should be edited using the visudo command as root or by using sudo and not a standard text editor. visudo is a special editor that validates the syntax of the file before saving the changes.
- In addition to customized settings, /etc/sudoers contains two types of entries:
Aliases: The configuration allows four types of aliases:
- UserAlias
- HostAlias
- RunasAlias
- CmndAlias
The entries in the file will look like the following:
*UserAlias OPERATORS = user1, user2, user3* *HostAlias DBNET = 172.16.0.0/255.255.224.0* *RunasAlias OP = root, operator* *CmndAlias EDITORS = /usr/bin/vim, /usr/bin/nano*
Alias Type  Definition
User        Specifies groups of users. Usernames, system groups (prefixed by a percent % sign), and netgroups (prefixed by a plus + sign) can be specified.
Host        Specifies a list of hostnames, IP addresses, networks, and netgroups (prefixed with a plus + sign).
Runas       Similar to user aliases but accepts UIDs instead of usernames. Better for matching multiple user names and groups having different names but the same UID.
Command     Specifies a list of commands and directories. Specifying a directory will include all files within that directory but no subdirectories.
- Specifications define which users can execute which programs; these look like the following:
*OPERATORS ALL=ALL*
*testuser1 DBNET=(ALL) ALL*
*testuser2 ALL= EDITORS*
- The sudo settings for the examples above are described below:
- The 1st line indicates that users who are part of the OPERATORS group can execute any command. The first ALL indicates which machines the rule applies to, and the second ALL indicates which commands can be executed.
- The 2nd line indicates that testuser1 can run any command as any user on any host that is in the DBNET network.
- The 3rd line indicates that testuser2 can run the vim and nano editors as either the root user or any other user on the system.
Some of the key options of the sudo command are:
Option       Meaning
-b           Execute the command in the background
-u username  Execute the command as the specified user instead of as the root user
-k           Invalidate the user's cached credentials
-v           Update the user's cached credentials
-n           Do not prompt the user for their password
- Understanding su
- The substitute user su command is used to execute a shell with a different user identity.
- This command is typically used by a regular user to execute a command which otherwise needs root privileges or when the root user wants to execute a command as a regular user. For a regular user to use this command, the password for the other account must be entered.
Some of the key options of this command are:
Option      Meaning
-l, -       Start the new user's login shell and execute the initialization (.rc) files, providing an environment (i.e., variables, aliases, home directory, etc.) similar to what the user would expect had the user logged in directly.
-c command  Pass a single command to the shell. As a result, after the su command has completed, the user will revert back to their original shell.
-m          Do not reset the values of environment variables.
- If someone with knowledge of the root password needs to execute several commands with root privileges, they would use the su - command to switch identities to the root user and acquire the root account environment settings (the - option tells the shell to read the user's initialization files), providing the root user's password when prompted.
- Without specifying a username, the su - command defaults to the root user.
- To switch identities to the user1 account while remaining in the environment of the previous account, omit the -. This tells the shell to switch UIDs but don’t read the new user’s initialization files.
- Setting User Passwords
- Usernames are used to identify users on the system. The /etc/passwd file stores user account information, while the /etc/shadow file stores the encrypted password and information about aging that password (the time before they are forced to change it). To keep system passwords secure, only the root user can read and edit the /etc/shadow file.
- When a user logs in and enters their password, the system applies an encryption algorithm to what they entered and attempts to match it to the encrypted password in the /etc/shadow file. If the passwords match, the user is allowed access to the system.
- After a user account is created using the useradd command and the user’s password is set with the passwd command, the system administrator can enforce changing the password upon the user’s first login attempt by executing the chage (change password expiry) command: *chage -d 0 testuser1*
- Changing the password can also be enforced using the -e option with the passwd command: *passwd -e testuser1*
- The system administrator can lock a user’s password, preventing them from accessing their account by executing the passwd -l command or by executing the usermod command with the -L option. When the user logs on, the Authentication Failure message is displayed
- The account can be unlocked by assigning a password with the passwd command, executing the passwd -u command, or by executing the usermod command with the -U option
- Aging User Passwords
- Password aging is a security feature that allows a system administrator to require users to change their password after a specified period.
- When a new user account is created, the values in the /etc/login.defs and the /etc/default/useradd files will determine the aging constraints for that user’s password.
- Updating these files will not affect any existing users, but only new users created after the updates are made.
- Assessing Network Security
- The nmap (network mapper) command is an open source tool used by system administrators for auditing networks, security scanning, and finding open ports on host machines
- It is capable of scanning a host or the entire subnet to find open TCP and UDP ports. This tool is also used by attackers to find vulnerable ports.
- To avoid suspicion of using the tool to find a way to attack the systems on your network, it is recommended that you obtain authorization before using the nmap command.
- If the nmap command is executed without any options, then it will scan for open TCP ports and report the open ports along with the service running on them. For example, execute the following command: *nmap example.com*
This output mentions the PORT number, the protocol used, the STATE of the port, and the SERVICE using the port. The state of the port can be:
State       Meaning
open        An application on the target host is listening for incoming packets on this port
closed      No applications are listening on this port
filtered    The nmap command cannot identify if the port is open or closed because a network-level firewall or similar filter is not allowing probes to this port
unfiltered  The nmap command can probe this port but does not have adequate information to conclude if it is open or closed
- To scan for both TCP and UDP ports that may be open, the nmap command can be executed with both the -sT and -sU options: *nmap -sT -sU example.com*
- To check which hosts are available on a network, the nmap command provides an option that will effectively ping all the hosts on the given network. For example: *nmap -sP 192.168.1.3/24*
- In order to get an accurate assessment of the open ports of a system, it should be scanned with the nmap command from a different machine. Also, consider executing the nmap command from a system that resides on a different network to check router settings.
- The network status netstat command provides a wealth of information about network connections, interface statistics, routing tables, and many other details of the network configuration that have been discussed in previous chapters. It is designed to be run on the local system; it does not scan ports on remote systems.
- In assessing network security, the most useful options of the netstat command are those which show the network ports that are currently open and the state of the connections.
The following table shows the options of the netstat command, which are useful for viewing network ports:
Option  Meaning
-l      Display sockets that are listening
-a      Display all sockets
-n      Don't resolve host, port, or user names
-e      Display extended information
-p      Display the PID and program name of the owning process
-t      Display TCP sockets
-u      Display UDP sockets
- To view all open ports and the processes which opened them, execute the following command: *netstat -lunpt*
- Processes Accessing Files
- To secure a system, it is essential to know which processes are running on the system and their interaction with other hosts on the network.
- The list open files *lsof* command is a useful utility for viewing files opened by active processes.
- The files included are: regular files, directories, block special files, character special files, libraries, and network sockets.
- To view all open files on the system, execute the following command (this will return a large amount of output): *lsof | more*
- To list all files opened by a particular user, execute the following command: *lsof -u sysadmin*
- To list all programs listening to ports, execute the following command: *lsof -i*
- To list all TCP connections, execute the following command: *lsof -i TCP*
- If the application wants to use a particular port (for example, port 25) and the error message that is shown is port already in use, check which process is currently using this port by executing the following command: *lsof -i TCP:25*
- To check all connections to and from a particular host, execute the following command: *lsof -i @127.0.0.1*
- The file user *fuser* command can also be used to display information about open files and sockets being accessed by processes. The fuser command is touted as easy to use compared to the lsof command; however, both commands have their advantages depending on what information is desired.
- For example, the lsof command is appropriate if a user needs to list all or multiple files being accessed, whereas the fuser command can be used to list processes accessing a specific file. The fuser command also has the ability to kill processes accessing a file using the -k option.
- To use the fuser command, specify a file or directory as an argument. Used without options, the fuser command will display only the PIDs of processes accessing the file: *fuser /*
- The output will contain many space-separated numbers; each number is a PID.
- To display additional information such as process names, use the verbose -v option. The command below will display all (-a) processes accessing the / directory: *fuser -av /*
- The access types under the ACCESS column include r for root directory and c for current directory.
Access codes that might be reported by the fuser command are:
Access Code  Meaning
c            The process is using the mount point or a subdirectory as its current directory.
e            The process is an executable file that resides in the mount point structure.
f            The process has an open file from the mount point structure.
F            The process has an open file from the mount point structure that it is writing to.
r            The process is using the mount point as the root directory.
m            The process is a memory-mapped (mmap) file or shared library.
- Setting User Limits
- The user limit ulimit command is used to control resources that can be assigned by a user’s login shell and child processes spawned from the shell
- The system administrator may need to regulate the use of shared resources to prevent one process from using too much of a resource, preventing another process or user from having sufficient access to that resource.
- The ulimit command is used to set limits at the user level and these limits are applicable to all processes running for that user.
- There can be two types of limits; hard limits and soft limits.
- Hard limits are set by the root user, while soft limits can be set by either the root user or by a regular user for their own account. The main constraint is that soft limits cannot exceed hard limits.
- To view the currently set limits for a user’s account, you must be logged in as the user and execute the following command: *ulimit -a*
- The first line of the output indicates that the core file (a potentially large image of a process’s memory at the time it was terminated and used for debugging) cannot be created by this user.
- Suppose the user is part of the development team; creation of core files of unlimited size for testing purposes can be enabled, using the -c option: *ulimit -c unlimited*
- The ulimit command options are provided in parenthesis as part of the ulimit -a output
- For example, (-u) indicates the -u option is used to set the maximum number of concurrent processes the user can have.
- This is useful in preventing a hacker or any user from creating a fork bomb (rapidly creating new processes until all system resources are exhausted), which is a form of Denial of Service (DoS) attack.
- Based on the ulimit -a output above, the user can currently execute 1048576 processes
- To reduce this limit, execute the following command: *ulimit -u 512*
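The effect is easy to observe in a child shell, which inherits and then lowers its own soft limit without touching the parent. This sketch uses the open-files limit (-n) rather than -u, since lowering the process limit in a shared session can break process creation:

```shell
# Lower the soft open-files limit in a child shell and read it back;
# the parent shell's limit is unchanged.
bash -c 'ulimit -S -n 256; ulimit -S -n'
```

Any user can lower a soft limit (and raise it again up to the hard limit); only root can raise a hard limit.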
- While it is possible for the administrator to interactively set limits with the ulimit command, the /etc/security/limits.conf file can be used to set permanent limits for users that are enforced when the user logs in.
- Also, files that have a file name that matches the glob *.conf may be placed in the /etc/security/limits.d directory; these files can contain entries to make limits persistent.
- The syntax of entries for these configuration files follows the pattern: *<domain> <type> <item> <value>*
- The domain represents the users to which the limits apply. The following can be used as a domain:
- A username
- A group name prefixed with the @ symbol
- An asterisk * to apply to every user
- A UID range as minuid:maxuid, i.e. 500:1000
- A GID range as @mingid:maxgid, i.e., @1000:1500
- The type is the kind of limit that is being set. The limit type can be:
- hard
- soft
- - (a single dash, which sets both the hard and soft limits to the same value)
- The item is the resource that is being restricted. The output of the ulimit -a command displays all the resources that can be restricted with limits.
- The value displayed is the hard (actual) limit. The following is a list of the items that can be restricted with limits:
- core: limits the core file size (KB)
- data: max data size (KB)
- fsize: maximum filesize (KB)
- memlock: max locked-in-memory address space (KB)
- nofile: max number of open files
- rss: max resident set size (KB)
- stack: max stack size (KB)
- cpu: max CPU time (MIN)
- nproc: max number of processes
- as: address space limit (KB)
- maxlogins: max number of logins for this user
- maxsyslogins: max number of logins on the system
- priority: the priority to run user process with
- locks: max number of file locks the user can hold
- sigpending: max number of pending signals
- msgqueue: max memory used by POSIX message queues (bytes)
- nice: max nice priority allowed to raise to, values: [-20, 19]
- rtprio: max realtime priority
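Putting the syntax together, a hypothetical /etc/security/limits.conf fragment (every domain name below is invented for illustration) might look like this:

```
# <domain>   <type>  <item>      <value>

# One user's process cap: a soft value they can adjust, under a hard ceiling
developer1   soft    nproc       512
developer1   hard    nproc       1024

# A group (note the @ prefix): cap open files
@dbadmins    hard    nofile      8192

# Everyone: the dash sets both hard and soft core file size to 0
*            -       core        0

# A UID range: limit concurrent logins
500:1000     soft    maxlogins   4
```

These entries are enforced by PAM when the user logs in, so a change does not affect sessions that are already open.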
- Viewing Current Users
- Once users are added, and their account parameters are configured, a system administrator can monitor when the users are logged into the system.
- The who command displays a list of users who are currently logged into the system, where they are logged in from, and when they logged in.
- Through the use of options, this command is also able to display information such as the current runlevel (a functional state of the computer) and the time that the system was booted.
- If the terminal name starts with tty (teletype), then this is an indication of a local login, as this is a regular command line terminal. If the terminal name starts with pts (pseudo terminal slave), then this indicates the user is using a remotely-connected terminal, possibly through SSH or Telnet, or running a process that acts as a terminal.
- After the date and time, some location information may appear. If the location information contains a hostname, domain name, or IP address, then the user has logged in remotely.
- If there is a colon and a number (for example, :0), then the user has performed a local graphical login. If the last column contains ::1 (the IPv6 loopback address), then the user has connected to the local host over SSH, possibly streaming graphical applications with ssh -X. If no location information is shown in the last column, then the user logged in via a local command line process.
- There may be instances where more information about users, and what they are doing on the system, is needed.
- The w command provides a more detailed list about the users currently on the system than the who command.
- It also provides a summary of the system status.
- The first line of output from the w command shows the current time, how long the system has been running, the total number of users currently logged on, and the load on the system averaged over the last 1, 5, and 15 minute time periods.
- Load average is a measure of CPU demand: a value of 1 means one CPU core was fully (100%) utilized during that period, so on a four-core system a load average of 4.0 represents full utilization.
The following describes the rest of the output of the w command:
Column   Example       Description
USER     root          The name of the user who is logged in.
TTY      tty2          The terminal window the user is working in.
FROM     example.com   Where the user logged in from.
LOGIN@   10:00         When the user logged in.
IDLE     43:44         How long the user has been idle since the last command was executed.
JCPU     0.01s         The total CPU time (s = seconds) used by all processes (programs) run since login.
PCPU     0.01s         The total CPU time used by the current process.
WHAT     -bash         The current process that the user is running.
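A quick sketch of the commands themselves (output will vary by system, and may be empty inside a container with no real login sessions):

```shell
# List users currently logged in: name, terminal, login time, origin
who

# Include the boot time (-b) and current runlevel (-r) records
who -b -r

# w adds per-user detail plus a summary line showing the current time,
# uptime, user count, and 1/5/15-minute load averages (procps package)
if command -v w >/dev/null; then w; fi
```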
- Viewing Login History
- The *last* command reads the entire login history from the /var/log/wtmp file and displays all logins and reboot records by default.
- An interesting detail of the reboot records is that it displays the version of the Linux kernel that was booted instead of the login location.
- The /var/log/wtmp file keeps a log of all users who have logged in and out of the system.
- The last command is slightly different from the who and w commands.
- By default, it also shows the username, terminal, and login location, not just of the current login sessions, but previous sessions as well.
- Unlike the who and w commands, it displays the date and time the user logged into the system.
- If the user has logged off the system, then it will display the total time the user spent logged in; otherwise, it will display still logged in or still running.
2.2.22. Host Security
- Understanding xinetd
- A Linux system will have several services running at any point in time, many of which are network services.
- To avoid running every service all of the time, a master process listens for incoming TCP connections and then starts the corresponding service on demand. In a nutshell, the master process acts as an intermediary between network services and ports.
- This process is known as the internet super-server or the inetd daemon. The inetd daemon was the original super-server; it has now been replaced by the extended internet super-server xinetd on many Linux distributions.
- The xinetd service is found on many Linux distributions. It provides all the services offered by inetd plus additional security features and access control based on criteria such as system utilization, time of day, and client machines.
- The services managed by xinetd can be either single-threaded or multi-threaded.
- A single-threaded service managed by xinetd does not spawn new processes; it will process incoming requests in a sequential manner. A multi-threaded service spawns a new process to handle each new incoming request and is, therefore, able to service requests in parallel.
- To start, stop, or restart the xinetd daemon, execute the following command: */etc/init.d/xinetd [start, stop, restart]* (Or use systemd like a sane person)
- The xinetd daemon is configured by the /etc/xinetd.conf file or by creating separate files for each service in the /etc/xinetd.d directory.
The key fields of the configuration file are as follows:
Option          Meaning
service         Name of the service as specified in the /etc/services file
flags           Sets attributes for the connection
socket_type     Sets the network socket type
wait            Specifies whether the service is single-threaded or multi-threaded
user            Specifies the user
group           Specifies the group
server          Absolute path of the server program
only_from       Hostname or IP address allowed access to the server
no_access       Hostname or IP address not allowed access to the server
access_times    Specifies the time range to access a service in HH:MM-HH:MM format
log_on_failure  Specifies the logging parameters
disable         Specifies if the service is disabled or not

- For a complete list of options, refer to the xinetd.conf man page.
- For example, a snippet of the /etc/xinetd.d/telnet file is below:

  service telnet
  {
      disable         = no
      flags           = REUSE
      socket_type     = stream
      user            = root
      wait            = no
      server          = /usr/sbin/in.telnetd
      only_from       = 127.0.0.1 192.168.12.0/24
      no_access       = 192.168.12.11
      log_on_failure  += USERID
  }
To break down the contents of this example file:
Line                                   Meaning
disable = no                           The telnet service is enabled (the double negative of disable = no means the service is, in fact, enabled).
flags = REUSE                          The port can be reused by telnet.
socket_type = stream                   A TCP connection.
user = root                            The root user starts the telnet processes.
wait = no                              The service is multi-threaded.
server = /usr/sbin/in.telnetd          The absolute path of the telnet executable file.
only_from = 127.0.0.1 192.168.12.0/24  Together, these two lines indicate that localhost and all hosts on the
no_access = 192.168.12.11              192.168.12.0/24 subnet except 192.168.12.11 are allowed telnet access.
log_on_failure += USERID               The USERID parameter is logged along with the standard parameters defined in /etc/xinetd.conf.

- If this file is changed, then the xinetd daemon should be restarted by an administrator.
The key options for the xinetd daemon are as follows:
Option            Meaning
-f configfile     Use the specified configuration file rather than the default /etc/xinetd.conf
-dontfork         Run xinetd as a foreground process
-limit proclimit  Sets the threshold for the maximum number of processes that can be initiated by xinetd
-filelog logfile  Append log messages to the specified file

- An important aspect of xinetd is that it can be used to activate sockets so that they are ready to receive requests from network services. The sockets are created by the xinetd daemon when needed, or on-demand, once it receives a request for access.
- For a quick review of the relationship between sockets and ports, consider the following:
- A network socket is a communication endpoint between nodes (devices) on a network.
- Sockets use a socket address to receive incoming network traffic and forward that to a process on a machine or device.
- The socket address commonly consists of the IP address of the node it is attached to and a port number.
- systemd.socket units
- For systems running systemd, there is a systemd unit type that can be used to specify the details of an Inter-process Communication (IPC), network socket, or file system based FIFO.
- These .socket systemd units allow a socket to be activated.
- The systemd socket units are associated with a systemd service; however, not all systemd services need socket activation.
Below is an example systemd socket unit file:

  [Unit]
  Description=OpenBSD Secure Shell server socket
  Before=ssh.service
  Conflicts=ssh.service
  ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

  [Socket]
  ListenStream=22
  Accept=yes

  [Install]
  WantedBy=sockets.target
The following table breaks down the fields and settings in the example .socket systemd unit file above:
Option               Meaning
Description          Description of the systemd socket unit.
Before               The unit needs to start before the specified unit.
Conflicts            The unit will conflict with the specified unit if it tries to start while the specified unit is running.
ConditionPathExists  The unit will not run if the specified path does not exist; add ! to the path to have the unit not run if the path does exist.
ListenStream         Specifies the network port the systemd unit should listen on.
Accept               Specifies whether the socket should accept network connections.
WantedBy             Specifies any systemd components that are needed to operate properly.
MaxConnections       Specifies the maximum number of connections to the systemd service unit.

- The systemctl command can be used with the list-sockets command to display the systemd.socket units on a system: *systemctl list-sockets --all*
- To display the systemd.socket unit configuration files, use the systemctl list-unit-files command with the --type=socket option: *systemctl list-unit-files --type=socket*
- Note that although there is much discussion about systemd socket units being a replacement for xinetd, this is currently not the case.
- If it is the case that either xinetd or systemd sockets running on your system does not meet all the necessary requirements for your network configuration, a hybrid solution using both is possible.
- Configuring TCP Wrappers
- TCP Wrappers are a host-based access control system that extend the abilities of xinetd to provide an additional layer of security by defining which hosts can access wrapped network services.
- A wrapper is a network service which is accessed via a proxy or front end service.
- TCP Wrappers should be used in conjunction with a firewall and other security enhancements.
- Some of the common Linux applications that are compiled with tcpwrappers include xinetd, sendmail, and sshd (the Secure Shell daemon).
- The tcpwrappers package is used to provide access control to network services.
- For a regular network service to use TCP wrappers, it must be compiled using the /usr/lib/libwrap.a library
- One method used to determine if a service is a TCP wrapped service is to use the strings command to display the plain text of a binary executable and search for the term hosts_access.
- If the term hosts_access exists, the service is TCP wrapped.
- A second method to verify if a program is compiled with tcpwrappers is to execute the list dynamic dependencies (ldd) command: *ldd programname | grep libwrap*
- If the output of the ldd command contains libwrap, then the service supports TCP wrappers.
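The ldd check can be wrapped in a small helper; a sketch (here /bin/ls is used only as a handy, always-present test subject; on a real system you would point it at a network daemon such as /usr/sbin/sshd):

```shell
# Report whether a binary is dynamically linked against libwrap
check_wrapped() {
    if ldd "$1" 2>/dev/null | grep -q libwrap; then
        echo "$1: TCP wrapped"
    else
        echo "$1: not TCP wrapped (or not a dynamic executable)"
    fi
}

check_wrapped /bin/ls
```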
- The tcpwrapper library uses two files, the /etc/hosts.allow and /etc/hosts.deny files, to control access.
- These files contain rules that match service and hosts (or network) to either grant or deny access to the specified service.
- The hosts.allow file has precedence over the hosts.deny file as the rules in the hosts.allow file are parsed first.
- So, if a host is granted access to a service in the hosts.allow file and denied access in the hosts.deny file, access will be granted to the service.
- When a connection request is sent to a TCP wrapped service, hosts.allow and hosts.deny files are referenced to check if the host should be allowed to connect.
- If the connection is allowed, then tcpwrapper simply hands over control to the service for further processing.
- Otherwise, if the connection is denied, then processing will halt.
- The tcpwrapper daemon uses the syslogd daemon for logging information to /var/log/messages.
- Note that if no rule matches from either file, access is granted. That means in order to deny access to a service, you must make a rule in the hosts.deny file.
- A typical rule will look like the last line in the example hosts.allow file below:

  # /etc/hosts.allow: list of hosts that are allowed to access the system.
  # See the manual pages hosts_access(5) and hosts_options(5).
  #
  # Example: ALL: LOCAL @some_netgroup
  #          ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
  #
  # If you're going to protect the portmapper use the name "rpcbind" for the
  # daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
  #
  sendmail: ALL
  sshd: .netdevgroup.com
- This last rule in the hosts.allow file specifies that connection requests for the SSH daemon originating from the netdevgroup.com domain should be allowed.
- If the same rule is specified in the hosts.deny file, the connection requests will be denied based on the same criteria (unless a rule in the hosts.allow file permitted access).
- IP addresses and networks can also be specified in rules. For example, to allow a specific host and a couple of networks access to the SSH daemon, add the following rule to /etc/hosts.allow: *sshd: 192.168.1.10,192.168.0.0/24,192.168.2.0/255.255.255.0*
- If a system has multiple IP addresses, the service specification can specify the address to which the service is bound following the @ symbol.
- The following rule would apply to any inbound connection on the 192.168.0.254 network interface: *sshd@192.168.0.254: 192.168.0.0/24*
- The ALL keyword is used to allow/deny access to all hosts. For example, consider the following entry in hosts.deny: *sshd,vsftpd: ALL*
- This prevents sshd and ftp connections from any host (except when there are rules in the hosts.allow file that permit these connections).
- *Warning:* The use of the ALL keyword may result in blocking more access than intended.
- The ALL keyword may also be used to represent the services and hosts. For example, an excellent way to configure the hosts.deny file is to have the following entry as the last rule of the file: *ALL: ALL*
- With this rule, all connections would be blocked except for those specifically permitted in the hosts.allow file.
The following table lists the additional keywords that may be used to specify the host:
Keyword   Meaning
LOCAL     A host whose name does not contain a dot (period) character; these are hosts that are local to your network.
UNKNOWN   Matches any host whose name or address is unknown. Use carefully, as name resolution issues may cause names to be unknown.
KNOWN     Matches any host whose name and address can be resolved.
PARANOID  Matches any host whose name does not match its resolved address.
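The rule format can be sketched with throwaway copies of the two files; real changes go in /etc/hosts.allow and /etc/hosts.deny and require root, so temporary files are used here:

```shell
# Work on temporary copies so the live system is untouched
allow=$(mktemp)
deny=$(mktemp)

# hosts.allow: permit sshd from one host and one network
printf 'sshd: 192.168.1.10, 192.168.0.0/24\n' > "$allow"

# hosts.deny: catch-all rule denying everything not allowed above
printf 'ALL: ALL\n' > "$deny"

cat "$allow" "$deny"
```

Because hosts.allow is parsed first, the sshd rule wins for matching clients; every other wrapped connection falls through to ALL: ALL and is denied.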
- Denying Access to Users
- The /etc/nologin file is used to prevent all users except root from logging on to the system.
- For example, the system administrator might want to undertake a maintenance activity such as installing a patch or upgrading the version of the database server. Many users, despite being instructed not to log in to the system, may still attempt to do so.
- The existence of the /etc/nologin file is used in such cases to prevent other users from using the system.
- The system administrator can create the file by using a text editor or by executing a command similar to the following: *echo 'System down for maintenance until 2pm' > /etc/nologin*
- If a user other than root tries to log in to the system, the contents of the /etc/nologin file are displayed on the user’s terminal and login is denied.
- Although not required, the system administrator should put an appropriate message in this file so that users understand why access to the system is being denied.
- All users should have read permissions to this file because the login command refers to this file if it exists.
- The /etc/nologin file should be deleted by the system administrator once the system is ready to be accessed by all users.
- The /etc/nologin file is used to temporarily prevent all users from starting new sessions on the system except root. The /etc/passwd and /etc/shadow files are used to selectively control which users can access the system.
- Understanding Init
- The init (initialization) process is the heart of the operating system; it is the first process started by the kernel; hence, it is given the PID (Process ID) of 1.
- The init process reads the /etc/inittab file that defines the system’s initialization process, including which services and programs start during bootup and the default runlevel of the system.
Runlevels are functional states of the operating system that define what features and services are available. The standard runlevels for Debian and Red Hat-based systems vary slightly, as indicated below:
Runlevel  Debian-based Systems                           Red Hat-based Systems
0         Halt the system                                Halt the system
1 (or S)  Single-user text mode (typically used for      Single-user text mode
          maintenance; similar to Windows Safe Mode)
2         DEFAULT: graphical multi-user mode plus        Not used (user-definable)
          networking
3         Same as 2, but not used                        DEFAULT: multi-user mode with a console
                                                         (text-based) login, plus networking
4         Same as 2, but not used                        Not used (user-definable)
5         Same as 2, but not used                        Graphical multi-user mode (with an
                                                         X-based login screen)
6         Reboot the system                              Reboot the system

- To determine the current system runlevel, use the runlevel command: *runlevel*
- The first character is the previous runlevel; N indicates the system has not switched runlevels since booting.
- The second character (3 here) is the current runlevel; unknown would indicate the system is unable to determine the current runlevel.
- Since the /etc/inittab file is used when the system transitions from one runlevel to another in order to perform maintenance tasks or to reboot, it contains the procedure for entering a new runlevel.
- The system administrator can customize runlevels according to different parameters like network connectivity or X server operations.
- But the standard /etc/inittab file packaged with Linux is suitable for use without any changes in the majority of cases.
- The default runlevel for the system is defined by the following line in the /etc/inittab file: *id:3:initdefault:*
- On Debian-based systems, the /etc/inittab file may not exist as modern Debian-based systems running systemd use targets rather than runlevels.
- The target translates as a runlevel for compatibility. If a system has reached the multi-user.target, then that will be translated to runlevel 3; if a system has reached the graphical.target, then that is translated to runlevel 5.
- To verify the default target, execute the following command: *systemctl get-default*
- The /etc/inittab file is part of the traditional UNIX System V initialization (SysVinit) process and is now used mainly by Linux systems to set the default runlevel.
- System and network services are now managed by other services such as systemd and upstart, which have superseded init.
- As both of these replacements feature some backward compatibility, they also can execute runlevel initialization scripts in the /etc/init.d directory described in the next section.
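The compatibility mapping between systemd targets and SysV runlevels can be summarized as a small lookup; this is a sketch of the translation described above (on a live systemd machine, compare the output of *systemctl get-default* with *runlevel*):

```shell
# Translate a systemd target name to its equivalent SysV runlevel
target_to_runlevel() {
    case "$1" in
        poweroff.target)   echo 0 ;;
        rescue.target)     echo 1 ;;
        multi-user.target) echo 3 ;;
        graphical.target)  echo 5 ;;
        reboot.target)     echo 6 ;;
        *)                 echo unknown ;;
    esac
}

target_to_runlevel multi-user.target   # prints 3
target_to_runlevel graphical.target    # prints 5
```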
- Init Scripts
- The /etc/init.d directory contains two types of scripts:
- Scripts which are called directly by the init process
- Scripts which are called indirectly by the init process via the rc script, which is used while switching runlevels
- The scripts that are specific to each runlevel are present in the /etc/init.d directory.
- Symbolic links to these files are created in the /etc/rc0.d through /etc/rc6.d directories (on some distributions, /etc/rc.d/rc0.d through /etc/rc.d/rc6.d).
Each of these scripts understands the following parameters:
Option   Meaning
start    Start the service.
stop     Stop the service.
restart  Stop and start the service again if it is running already; if the service is not running, start it.
status   Display the current status of the service.
reload   Reload the service's configuration file without restarting the service.

- For example, to restart the networking service, execute the following command: */etc/init.d/networking restart*
- Generally, scripts for services such as networking, FTP, SSH, and Apache are kept in the /etc/init.d directory.
- The programs can be linked with different runlevels by creating symbolic links in the corresponding folders
- For example, the networking scripts are used at runlevel 3, so symbolic links to them are placed in the /etc/rc3.d directory.
- An administrator can switch from one runlevel to another using the init command without having to reboot the system. For example, to switch from runlevel 2 to 3, the process would be:
- The root user initiates the runlevel switch by executing init 3.
- The init process refers to the /etc/inittab file.
- The /etc/init.d/rc script will be run with parameter 3.
- The rc (run control) script will stop the appropriate services of the previous runlevel 2 and start the services of the new runlevel 3.
2.2.23. Encryption
- Understanding OpenSSH
- The SSH protocol uses public key cryptography for authenticating the remote host and providing an encrypted channel.
- The implementation consists of three main components:
- SSH-TRANS: The transport layer protocol manages server authentication, privacy and data integrity. Generally, this layer runs over a TCP connection but can be used with any reliable connection stream.
- SSH-USERAUTH: The user authentication protocol runs on the transport layer and authenticates the client’s credentials to the server.
- SSH-CONNECT: The connection protocol runs above the user authentication protocol to multiplex the single encrypted channel into multiple logical channels.
- The OpenSSH suite is an open source package developed by the OpenBSD team, which can be downloaded from http://www.openssh.com.
- It provides programs such as sshd, ssh (secure sh command; replaces telnet), scp (secure cp command; replaces rcp) and sftp (secure ftp command; replaces ftp) for secure communication.
- Configuring OpenSSH Client
- The SSH client configuration file, /etc/ssh/ssh_config, is used to configure the options for client programs such as ssh, sftp, and scp; it contains key-value pairs, one per line.
The important keywords and their meanings are explained below:
Key                     Description
Host                    Applies the declarations and options that follow in the configuration file to hosts matching one of the patterns given after the Host keyword
ForwardAgent            Specifies whether the connection authentication agent should be forwarded to the remote machine
ForwardX11Trusted       Specifies whether X11 sessions should be automatically redirected to the remote machine
RSAAuthentication       Specifies whether RSA authentication is to be used
PasswordAuthentication  Set to yes to use password-based authentication; no otherwise
BatchMode               Disables the username and password prompt on connection; generally used when invoking ssh from scripts to provide a non-interactive mode of operation
CheckHostIP             Specifies whether the IP address of the host should be checked for DNS spoofing
StrictHostKeyChecking   Specifies whether new hosts should be automatically added by ssh to the ~/.ssh/known_hosts file
IdentityFile            Specifies an alternate RSA authentication identity file to use
Port                    Specifies the port number on which ssh connects to the remote host (the default value is 22)
Cipher                  Specifies the cipher method to be used for encryption

- This is an example ssh_config file:

  Host *
  #   ForwardAgent no
  #   ForwardX11Trusted yes
  #   RSAAuthentication yes
  #   PasswordAuthentication yes
  #   BatchMode no
  #   CheckHostIP yes
  #   StrictHostKeyChecking ask
  #   IdentityFile ~/.ssh/id_rsa
  #   Port 22
  #   Cipher 3des
- The /etc/ssh/ssh_config file is the default for all users of the ssh client programs on the system. However, an ssh configuration file in a user's home directory, ~/.ssh/config, takes precedence over the system-wide configuration file.
Some of the other client configuration files are listed below:
File                    Purpose
~/.ssh/known_hosts      List of servers, along with their host keys, accessed by the user
~/.ssh/authorized_keys  List of authorized public keys, verified by the server to authenticate the user when the user attempts to log in
~/.ssh/id_rsa           RSA private key of the user
~/.ssh/id_rsa.pub       RSA public key of the user
~/.ssh/id_dsa           DSA private key of the user
~/.ssh/id_dsa.pub       DSA public key of the user
~/.ssh/id_ecdsa         ECDSA private key of the user
~/.ssh/id_ecdsa.pub     ECDSA public key of the user
~/.ssh/id_ed25519       Ed25519 private key of the user
~/.ssh/id_ed25519.pub   Ed25519 public key of the user
- Configuring SSHD
The /etc/ssh/sshd_config file is used to configure the SSH daemon. This file also contains key-value pairs, one per line. A snippet of the /etc/ssh/sshd_config file is below:

  # Package generated configuration file
  # See the sshd_config(5) manpage for details

  # What ports, IPs and protocols we listen for
  Port 22
  # Use these options to restrict which interfaces/protocols sshd will bind to
  #ListenAddress ::
  #ListenAddress 192.168.1.1
  Protocol 2
  # HostKeys for protocol version 2
  HostKey /etc/ssh/ssh_host_rsa_key
  HostKey /etc/ssh/ssh_host_dsa_key
  HostKey /etc/ssh/ssh_host_ecdsa_key
  HostKey /etc/ssh/ssh_host_ed25519_key

  # Lifetime and size of ephemeral version 1 server key
  KeyRegenerationInterval 3600
  ServerKeyBits 1024

  # Authentication:
  LoginGraceTime 120
  PermitRootLogin without-password
  RSAAuthentication yes

  # To enable empty passwords, change to yes (NOT RECOMMENDED)
  PermitEmptyPasswords no

  # Change to no to disable tunneled clear text passwords
  #PasswordAuthentication yes

  X11Forwarding yes
  AllowUsers admin usertest user1
  AllowGroups admin dbas
The keywords and their meanings are explained in the following table:
Keyword                  Meaning
Port                     Specifies the port which sshd listens to for incoming connections; the default port is 22
ListenAddress            Specifies the IP address on which the sshd server socket will bind
HostKey                  Specifies where the private host key is stored
KeyRegenerationInterval  Specifies the time interval in seconds for the server to automatically regenerate its key
ServerKeyBits            Specifies the number of bits to be used by sshd for RSA key generation
LoginGraceTime           Specifies the time interval in seconds to wait for the user's response before disconnecting the server
PermitRootLogin          Specifies if root login over SSH is permitted or not
RSAAuthentication        Specifies if RSA authentication can be used
PermitEmptyPasswords     Specifies if user logins to the server with empty passwords are allowed
PasswordAuthentication   Specifies if password-based authentication may be used
X11Forwarding            Specifies whether X11 forwarding is turned on or off; if a GUI has been installed on the server, then this option can be enabled
AllowUsers               Specifies users who will be allowed access
AllowGroups              Specifies groups who will be allowed access

Some of the other configuration files used by sshd are listed below:

File                               Purpose
/etc/ssh/ssh_host_rsa_key          RSA private key used by sshd
/etc/ssh/ssh_host_rsa_key.pub      RSA public key used by sshd
/etc/ssh/ssh_host_dsa_key          DSA private key used by sshd
/etc/ssh/ssh_host_dsa_key.pub      DSA public key used by sshd
/etc/ssh/ssh_host_ecdsa_key        ECDSA private key used by sshd
/etc/ssh/ssh_host_ecdsa_key.pub    ECDSA public key used by sshd
/etc/ssh/ssh_host_ed25519_key      ED25519 private key used by sshd
/etc/ssh/ssh_host_ed25519_key.pub  ED25519 public key used by sshd
- SSH Authentication and Keys
- SSH supports several different authentication methods:
- Public key authentication
- Host-based authentication
- Password authentication
- The public key authentication method is the most commonly-used SSH authentication method. It is implemented both on the server as well as the client side. To use this, a public-private key pair must be generated using a key-generation utility. RSA is the most commonly-used public key generation algorithm.
- If the user forgets the passphrase, there is no option to recover it and the key will have to be replaced with a new one. The passphrase of the private key can be changed using the *ssh-keygen -p* command. Similar to a password reset, this option prompts the user for the old passphrase once and the new passphrase twice.
- The system administrator can select either RSA or DSA keys while configuring the SSH public key based authentication. DSA (Digital Signature Algorithm) is a US government standard defined for digital signatures while RSA is named after its creators, Ron Rivest, Adi Shamir and Leonard Adleman.
- DSA is faster while signing than RSA, but RSA is faster for verification. So the net time taken by both algorithms is comparable. If a 1024 bit encryption key is set up with DSA, the signature it generates will be smaller in size compared to the signature generated by RSA.
- The *ssh-keygen* command is used to generate and manage keys used by SSH; it uses the RSA algorithm by default. This program will prompt the user for the location to store the key (~/.ssh is the default) and the passphrase.
- By default, the keys are generated and stored in the user's ~/.ssh directory (for example, /home/sysadmin/.ssh). The private key is stored in the id_rsa file and the public key in the id_rsa.pub file.
- The public key needs to be copied to the server that you want to securely login to and only has to be done once.
- The ssh-copy-id command is used to copy the public key to the server. In the example below, the public key is copied to the user1 account on the server named netdevgroup1: *ssh-copy-id user1@netdevgroup1*
- This command will add the contents of the client's ~/.ssh/id_rsa.pub file to the ~/.ssh/authorized_keys file of user1 on the server netdevgroup1.
- When the user connects via an ssh session to the server, the user's public key from the ~/.ssh/authorized_keys file will be used to encrypt data such that it can be decrypted only with the private key held by the user and no-one else.
- The next time the client logs on to the server, his public key is matched with the list of public keys on the server. If a match is found, then a signature is generated by the client using the private key. The signature is verified by the server using the public key, which is linked with the private key and the user is authenticated.
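The client-side half of this workflow can be sketched with a throwaway key pair; the directory below is temporary so the real ~/.ssh is untouched, and user1/netdevgroup1 are the example names from the text:

```shell
# Generate an RSA key pair in a temporary directory; -N '' sets an
# empty passphrase for the sketch (use a real passphrase in practice)
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keydir/id_rsa"

# The private key (id_rsa) and public key (id_rsa.pub)
ls "$keydir"

# Show the public key that ssh-copy-id would append to the server's
# ~/.ssh/authorized_keys file:
#   ssh-copy-id -i "$keydir/id_rsa.pub" user1@netdevgroup1
cat "$keydir/id_rsa.pub"
```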
Some of the key options of the ssh-keygen command are:
Option       Meaning
-b num_bits  Specifies the number of bits for the key; the range for RSA keys is 768-2048 bits (the default is 2048 bits), while DSA keys are exactly 1024 bits
-F hostname  Find the occurrences of the specified hostname in the known_hosts file
-R hostname  Deletes all keys for the specified hostname from the known_hosts file
-f filename  Specifies the file name for the key

- Once SSH key configuration has been verified on the client, password-based authentication can be completely disabled if the system administrator wants to have a password-less policy for maintaining security. The *passwd -l* command can be used to lock a user's account, but key-based authentication will continue to work. This makes the system more secure, as that user may only log in from authorized client machines.
- SSH Host Based Authentication
- The host-based authentication model allows a host to authenticate on behalf of all or some users on that host.
- For example, if a team is working remotely at two locations, London and Tokyo, then the system administrator may want to configure SSH such that instead of maintaining key pairs for all 25 users in the London team accessing the Tokyo server, they will setup host-based authentication on the Tokyo server instead.
- The /etc/ssh/ssh_known_hosts file on the server must hold the public keys of all the hosts that need to be authenticated.
- An entry in this file implies that the host is trusted by the server, which knows its public key. The file contains three fields for each record: hostname or IP address, key type, and the public key itself.
- A sample record will look like:

  122.110.17.32 ssh-rsa ABFFB3NzaC1yc2EABFFDAQABAAABAQC6XtOSGVEY9PUnMXS6vzvJigeQQtGYwdX2v2zAAsqwYRlaNN/ddV76btf4PL812r91WYGTgcXT0r0bfSGJ9dmJQ8dPenMAKyviR2BLV1SaIqxqUSjdkXFrlHkC7alILoKrwhMvNWb+Jaa3ecuYffKThNadFTHftyntdaVkYxwW7Hr1MknksfZKMPsJjW+Mp3aZVV2wVnQkOgkSsVY8y2pT7h7KuTa66IdqkwO2ZTEXL2D1X1wIEqGqAJ2VFPQayzclqaGbCzFUYyFsCT1WUL+BzRnehI9L9IVlP3katLSokoBzbxHeu0eb92VXngnrQJ1C0dA+5O4vp2KxFGEMuwdV
- This line in the server's /etc/ssh/ssh_known_hosts file indicates the server trusts the host specified by IP address 122.110.17.32.
- Also, the /etc/ssh/sshd_config file must be updated to enable host-based authentication by setting the HostbasedAuthentication option to yes.
- This will enable host-based authentication for all users of the host. To limit it to a subset of users, add matching criteria to /etc/ssh/sshd_config:

  Match Group dbadmin
      HostbasedAuthentication yes
- SSH Client Utilities
- The openssh and openssh-clients packages must be installed on the client machine to connect to an OpenSSH server.
- The ssh command is the remote login client packaged by OpenSSH.
- The slogin executable is actually a symbolic link, which references /usr/bin/ssh
- It is a replacement for programs such as rsh and telnet for providing secure remote terminals. To login using ssh, execute the command: *ssh user1@netdevgroup1.com*
- OR
- *ssh -l user1 netdevgroup1.com*
- This will add the server to the client’s list of known hosts.
- To execute only the single ls command without logging on, execute the command: *ssh -l user1 netdevgroup1.com “ls -l /usr/games”*
- To pass configuration options to ssh, execute the command: *ssh -o “Compression=yes” -l user1 netdevgroup1.com*
Some of the key options of the ssh command are as follows:
| Option | Meaning |
|---|---|
| -F configfile | Specifies the configuration file to be used; the default file is /etc/ssh/ssh_config |
| -i identityfile | Specifies the file to read private key information from; the default files for SSH protocol version 2 are ~/.ssh/id_rsa and ~/.ssh/id_dsa |
| -p portnum | Specifies the port to connect to on the remote server |
| -e escapechar | Sets the escape character for the session; the default is ~ |

- For secure copying of remote files over an encrypted channel, the scp command is used. For example, to copy all the files from the local archives directory to the user’s directory on the server, execute the command: scp /archives/* user1@pluto.netdevgroup.com:/archives/
- Understanding SSH Agent
- If the user’s private key is protected by a passphrase, then the passphrase needs to be entered by the user while invoking any program such as scp or ssh. This can be inconvenient if the user is creating multiple sessions or wants to use scp within scripts for copying some file from the user’s system to the server.
- The SSH agent is an application, which is used to cache the decrypted private key and provide it to SSH client programs when required. This effectively means the passphrase has to be entered only once by the user. Generally, the agent runs after the user logs in and maintains the cached information for the duration of the session.
- There are several SSH agents available for Linux. The OpenSSH package includes the ssh-agent program.
- The ssh-agent program runs as a daemon process and can be verified by executing the command: *ps -x | grep ssh-agent*
- If the ssh-agent program is not running with the current shell, then it can be started by a user executing the following command (note the backquotes ` surrounding ssh-agent) : *eval `ssh-agent`*
- OR
- *eval $(ssh-agent)*
- The ssh-add command is used to add private keys to the agent’s repository.
- The agent will be running on the user’s terminal or desktop and authentication data is not shared with any other system over the network. The connection to the agent is forwarded to SSH remote logins and when any process needs to access the private key, the agent will service the request and return the result.
- The agent thus keeps the private key protected and also makes it convenient for the user to use SSH programs without entering the passphrase repeatedly.
- The SSH agent’s implementation creates a socket (a communication endpoint that allows a process to communicate with another on the same or a remote host) for every user, accessible via the SSH_AUTH_SOCK environment variable. Also, the agent’s PID is stored in the SSH_AGENT_PID environment variable.
- To automatically run the ssh-agent for all users, add the entry to start the agent to the /etc/profile file. Alternatively if the users start X sessions, then it can be started with each session as follows: *ssh-agent startx*
- To kill the agent’s instance, execute the command: *ssh-agent -k*
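A minimal sketch of the agent’s lifecycle in a single shell session (assuming the OpenSSH client tools are installed):

```shell
# Start an agent and import its environment variables into this shell.
eval "$(ssh-agent -s)"

# The agent's socket and PID are now available to SSH client programs.
echo "socket: $SSH_AUTH_SOCK  pid: $SSH_AGENT_PID"

# List cached identities (exits non-zero while the agent is empty).
ssh-add -l || true

# Kill this agent instance when done (uses SSH_AGENT_PID).
ssh-agent -k
```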
- ssh-add Utility
- The ssh-add utility is a helper program and is used to add RSA or DSA identities to the SSH agent’s repository.
- If no file is specified, the ssh-add utility will add the keys from the ~/.ssh/id_rsa and ~/.ssh/id_dsa files by default. If the identity file requires a passphrase, the user will be prompted to enter it.
- The ssh-add program works only if the ssh-agent process is running.
- The identity files should be readable only to the user, if they can be read by other users then it indicates possible incorrect configuration or some unauthorized access. The ssh-add program will not add such identity files.
- To add identity files, execute the command: *ssh-add*
- To view the fingerprints of the identities stored in the agent, execute the command: *ssh-add -l*
Some of the most useful options of the ssh-add command are as follows:
| Option | Meaning |
|---|---|
| -d idfile | Deletes the identity specified by the file from the agent |
| -D | Deletes all identities stored by the agent |
| -x | Locks the ssh-agent with a password; this restricts addition, deletion and listing of identity entries |
| -X | Unlocks the ssh-agent |
- SSH Tunneling
- When a client connects to a host via programs such as telnet, ftp or ssh, the socket created for communication on each side uses the IP address and the port number of the service.
- By default, TCP/IP is not a secure connection stream and is open to network attacks. SSH encapsulates the TCP/IP connections in a secure layer and thus creates a tunnel for communication.
- The data passing through the tunnel is encrypted as well as verified for integrity. As per the requirements of the users, multiple tunnels can be created. This feature is called SSH Tunneling or SSH Port Forwarding.
- To use this feature, the AllowTcpForwarding option in the SSH daemon’s configuration file must be set to yes.
- The port forwarding implementation maps the local port of the user with the remote port on the server and forwards all the network traffic bound for the local port to the remote port.
- For example, if the system administrator wants to protect the network traffic of users accessing sensitive Oracle data, they can set up and use SSH tunneling.
- The host where the Oracle server is running should have the SSH server setup and the clients who are accessing the Oracle server instance should have the SSH client installed. The following steps should be followed:
- Add a data source in the /etc/odbc.ini file: *[ORACLESSH]* *Driver = ORACLE* *Database = //localhost:9102/mydb* *User = testdbuser* *Password = testdbpassword*
- Start the SSH daemon on the Oracle server.
- To set up port forwarding on the client machine, execute the command: *ssh -L 9102:testdbhost:1521 testdbhost*
- If the client accesses the database now, then all network traffic from port 9102 will be forwarded to port 1521 on the Oracle server.
- Consider another example of using the Oracle WebLogic admin console, which is by default accessible only on port 8586 on the server.
- This port is restricted for all other hosts other than localhost.
- Suppose due to an urgent issue, the WebLogic administrator has to access this console on a holiday. The SSH port forwarding feature is useful in such scenarios.
- To set up port forwarding, execute the command: *ssh -L 8586:localhost:8586 testuser@weblogicserver1*
- Now if the user opens an instance of the web browser and accesses the WebLogic console on port 8586, it will be accessible.
- The system administrator can select any port as long as it is not a privileged port and currently not in use by any other service.
- These two examples are based on local port forwarding. SSH also allows remote port forwarding, which is used for connecting the SSH server to another host where the connection is initiated by the server.
- For example, a team member has an Apache server setup and running on a laptop in their home office.
- If the development team at the office urgently needs to access it for making a prototype, then SSH remote port forwarding can be used to make this Apache instance accessible to all the team members.
- To give access to the Apache service on port 8000 to users at work, execute the following command on the home office laptop: *ssh devuser@dev.netdevgroup1.com -R 8000:192.168.1.12:8000*
- As long as the SSH tunnel exists, users will be able to access the Apache instance running on the user’s laptop.
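The two tunneling modes used above can be summarized side by side (hostnames and ports are illustrative; these command sketches only work against real, reachable SSH servers):

```shell
# Local forwarding (-L): clients connect to a local port, and traffic
# is tunneled out to a host/port reachable from the SSH server.
#   ssh -L <local_port>:<target_host>:<target_port> user@sshserver

# Remote forwarding (-R): clients connect to a port on the SSH server,
# and traffic is tunneled back to a host/port reachable from the client.
#   ssh -R <remote_port>:<target_host>:<target_port> user@sshserver
```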
- SSH is also capable of forwarding graphical applications over the network. To enable X11 forwarding, the /etc/ssh/sshd_config file must contain the option: *X11Forwarding yes*
- To start an SSH session with X11 support, execute the command: *ssh -X pluto.netdevgroup1.com*
- To verify if X11 forwarding is working correctly, execute the command: *echo $DISPLAY*
- If localhost:10.0 is displayed, the configuration is correct.
- Now the user can execute any X-based graphical application on the server as if it were a local application. For example, try opening the Firefox browser from the SSH prompt.
- Understanding GnuPG Keys
- GnuPG (GPG) is the open source implementation of the PGP (Pretty Good Privacy) standard, which is based on public-private key encryption. Linux uses these keys to verify the signatures of packages.
- For example, if a package that has been downloaded from the Internet is corrupted, then an error message will be shown to the user indicating that the package has a bad GPG signature.
- GPG encrypts and signs data and provides utilities for managing keys and accessing public key directories.
- Linux systems can install GnuPG via the gnupg package. Windows systems can download GnuPG from http://www.gnupg.org.
- Using GPG
- The gpg command with the --gen-key option is used to create GPG keys. To generate a new key, execute the command: *gpg --gen-key*
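For scripts, GnuPG 2.x can also generate keys unattended with --quick-generate-key; the throwaway GNUPGHOME, user ID, and empty passphrase below are illustrative only:

```shell
# Use a temporary keyring so the user's real keyring is untouched.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Generate a 2048-bit RSA key pair without any interactive prompts.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo User <demo@example.com>' rsa2048 default never

# Confirm the key is now in the keyring.
gpg --list-keys
```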
- The gpg command operates in an interactive mode and the user will be prompted to provide options.
- The RSA and DSA methods are the same as those used in SSH encryption while Elgamal is another algorithm.
- The user will be prompted for other options as follows:
- The key size must be specified, RSA keys can be 1024-4096 bits long.
- The key validity must be specified in terms of number of days, weeks, months or years. The value 0 indicates that the key will never expire.
- The user name, email ID and comment must be specified. This is for linking the key with a user.
- A passphrase for protecting the key must be entered twice.
- The key generation process might take some time; it depends on how much random data (entropy) the system has available.
- The more activity on the system (running processes, disk and keyboard I/O), the more quickly GPG can gather the randomness it needs to generate the keys.
- When the process is complete, an asymmetric public and private key pair is created.
- Whatever data is encrypted by one key, can be decrypted by the other
- In practice, the user will publish their public key, or give it to others with whom they want to communicate.
- Then, others can encrypt data with the public key so that only the holder of the private key can decrypt it; conversely, the user can sign data with the private key so that anyone with the public key can verify the signature.
- The output from the process will include some information about the keys: *pub 2048R/950B76C6 201-10-29* *Key fingerprint = 50D6 24A7 C121 51EB 397B 1C92 A6EA 5A3D 97E3 667A* *uid Sysadmin (Linux Student) <sysadmin@example.com>* *sub 2048R/4BA1698A 2015-03-16*
- The “950B76C6” key is the public key identifier, which can be used to refer to this key.
- The --armor option ensures that the file contains only ASCII characters instead of binary.
- For example, to export this public key to a file, you can execute: *gpg --armor --output pubkeyfile --export 950B76C6*
- You can also export public keys by referring to the name that was entered for the key, like: *gpg --armor --output pubkeyfile --export 'Linux Student'*
- To import the public key of the other user with whom the client wants to communicate, execute the command: *gpg --import pubkeyfile*
- Users can also upload their keys to public key servers, which host public keys from users across the globe. To upload a key to a key server, execute the command: *gpg --keyserver serverURL --send-keys 950B76C6*
- If you want to download public keys from a key server, then you can either search or directly download keys. To search for a key, you could execute: *gpg --search-keys sysadmin@example.com*
- If a match for sysadmin@example.com is found, then the user will be prompted to download it.
- If the public key identifier is known for a key, then the key can be downloaded with a command like: *gpg --recv-keys 950B76C6*
- To send something to a user securely, you can encrypt the data with that user’s public key, and then they will be able to decrypt it with their private key.
- For example, to send the file data.txt to the user sysadmin@example.com after you have received their public key, execute: *gpg --encrypt --recipient sysadmin@example.com data.txt*
- After executing the command above, an encrypted data.txt.gpg file will be created, which could be sent as an attachment to that recipient.
- When a user receives a file encrypted with their public key, they can use gpg to decrypt it.
- In fact, by default gpg will act as if the --decrypt option is given, if no options are used.
- When a file is decrypted, a file without the extra .gpg is created. For example, decrypting the data.txt.gpg file will create a data.txt file
- You can use your key to create digital signatures for others as well.
- The significance of a signature is that it authenticates your identity and links it with the signed item.
- For example, if you digitally sign a software package, then it means that the package has been verified and authenticated by you and is trustworthy.
- To sign a file with the user’s private key, execute the command: *gpg -a --output pkg.sig --detach-sig pkg*
- To verify the signature, the receiver can execute the command: *gpg --verify pkg.sig*
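The sign/verify cycle can likewise be sketched with a throwaway keyring (GnuPG 2.x; the user ID and paths are illustrative only):

```shell
# Temporary keyring plus an unattended, unprotected demo key.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo User <demo@example.com>' rsa2048 default never

# Create an ASCII-armored detached signature for the file.
echo 'package contents' > /tmp/pkg
gpg -a --batch --yes --output /tmp/pkg.sig --detach-sig /tmp/pkg

# gpg locates /tmp/pkg automatically from the signature's file name;
# exit status 0 means the signature is good.
gpg --verify /tmp/pkg.sig
```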
- The default configuration file used by gpg is ~/.gnupg/gpg.conf and is read at initialization.
- The gpg.conf file is automatically created the first time a key is generated with the gpg --gen-key command.
- gpg-agent
- To help make the use of GPG easier and more convenient, the gpg-agent daemon can cache the passphrase for the gpg keyfile
- This allows the passphrase to be used once and then cached for the determined amount of time.
- The configuration for gpg-agent is stored in the ~/.gnupg/gpg-agent.conf file: *default-cache-ttl 600* *max-cache-ttl 7200*
The following table describes the output of the example above:
| Option | Meaning |
|---|---|
| default-cache-ttl | Determines the number of seconds to cache the passphrase; the timer is reset each time the cache is accessed |
| max-cache-ttl | Determines the maximum time a passphrase is cached; after it has expired, the passphrase will be asked for again when using gpg |

- The gpg-agent can be started in daemon mode by using the --daemon option.
- This will allow gpg-agent to be started in the background and allow connections by gpg authentication requests.
- If the needed gpg directories and files are missing, they are created when gpg-agent is started the first time.