- a. Useful commands
- b. vi the only editor you need
- c. Command Prompt and IO redirection
- d. bash
- e. Filesystem
- f. Boot up, runlevels, system processes and shutdown
- g. Users & Groups
- h. File types
- i. Networking
- j. Automating tasks with cron
- k. rpm
- l. LVM
- m. RAID
- n. SELinux
History of Linux
Finnish student Linus Torvalds wanted UNIX power but couldn't afford anything but an i386 PC. Back in the dark ages of computing, the earliest i386 didn't have math processing on the CPU - the floating-point unit was an extra chip. Linus (he pronounces his name with a short "i", like tin, not a long "i", like line) released the first version of his kernel onto USENET in 1991. Since the license on the code let anyone look at, modify, and reuse it, it gathered attention from some very talented people who added more functionality and more hardware support and sent their work back to Linus via USENET. It's reasonably fair to say that the Linux kernel combined with the GNU tools exploded onto the computing scene worldwide. GNU had a good compiler (gcc) and a host of other very important tools (most of the ones listed below are GNU tools or children of GNU tools), but they didn't have a kernel.
It was a matchup better than peanut butter and chocolate.
Early system hackers had to hand-bootstrap their kernel to a running state, then hand-compile their toolchain into a working system. Soon people began to collect all of the available GNU/Linux code from USENET, put it on CDs, and sell them. At the time, CD burners didn't exist and CD drives still cost well over $100, but a subscription to the Walnut Creek Linux collection was only $25 per issue (4 times a year), and it came with first 3, then 4, then 6 CDs of GNU/Linux software.
Out of this primordial bit-fest, distributions arose. Slackware is one of the oldest still around. Originally released as floppy disk images requiring at least 7 disks to get a working kernel and networking stack, it was a blessing to have. At that time, Microsoft didn't have a networking stack at all, and its systems were only usable on a network (in a very loose sense) with third-party tools providing all the missing pieces.
Other distributions arose and thrived, most notably Debian and RedHat, and CD burners mostly ended Walnut Creek subscriptions. Various groups in academia embraced Linux as a way to stretch thin budgets further, using it to run UNIX applications on commodity PC hardware. Astronomy and physics groups, math departments, and many engineering disciplines rely on Linux systems for the vast bulk of their computational horsepower.
Currently, RedHat is the top player among distributions for business needs. They support a community version called Fedora, which is more cutting-edge than their commercially supported RHEL product. A spinoff of RedHat was Yellowdog, which was basically RedHat for the PowerPC platform; Yellowdog enjoyed moderate success among scientific communities using PowerPC hardware for number-crunching systems. Debian is still available for more platforms than anything else (eleven: alpha, amd64, arm, armel, hppa, i386, ia64, mips, mipsel, powerpc, and sparc). A "more user friendly" derivative of Debian called Ubuntu is the current darling of new Linux users, despite Fedora having a better track record of successful first-time installs and a wider array of supported hardware. Slackware is still in production and releases a new version once or twice per year. The most complete list of Linux distros is at distrowatch.
Current estimates put Linux on more desktops than Mac OSX with Ubuntu alone, and the Fedora distribution is estimated to have more users than Ubuntu. RedHat has shown solid sales increases since the company went public in 1998. Linux servers have been growing faster than Microsoft servers since 2005.
The Command Line, and Moving Around the System
If you haven't used the command line extensively before, it can be a little daunting. However, the command line in Linux is one of its most powerful features: almost every component in a Linux system can be manipulated fully only through this interface. In fact, a majority of Linux systems in the world have no graphical output at all, leaving the command line as the sole method of administration.
The first step is to open a terminal on your desktop. (A terminal is a program that provides a shell, which is the combination of a prompt, like [plucas6@loki ~]$, and the ability to run commands input from the keyboard.) Click on Applications, then Accessories, then Terminal.
When you first open the terminal, you are inside your home directory, which is the place on the system reserved for your files and data. You are free to manipulate everything inside of this directory (in contrast to the rest of the system, which requires administrator privileges). Now we will learn how to do this!
Type ls inside the terminal and press Enter.
ls stands for list, and its function is to provide information about the files and subdirectories in the current directory. You will find that almost every other command you type will be ls when using the command line normally; it's like using your eyes when walking around to know your surroundings.
Now, type echo "Hello, World" > foo and press Enter.
echo is one of the simplest programs on your system. Its function is simply to print back out whatever is passed to it. In this case, you passed the string "Hello, World" (the double quotation marks mark the beginning and end of the string; they are not part of the string itself and hence are not printed), then redirected the output of echo (which would be exactly Hello, World) into a new file foo with the output redirection operator, >. This operator takes the output of one command (in this case, echo) and writes it into a file, creating it if it doesn't already exist and replacing its contents if it does. In our case, the file foo should contain our Hello, World string.
Let's verify it! Enter cat foo into the shell. This command, cat, from concatenate, outputs the entire contents of a file, in this case foo. If all went well, you should now see "Hello, World" on the screen!
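Putting the last few steps together, here is a minimal session you can paste into a terminal. It works in a scratch directory so nothing in your home directory is touched:

```shell
# Work in a throwaway directory so no real files are disturbed.
cd "$(mktemp -d)"

echo "Hello, World" > foo   # > redirects echo's output into the new file foo
cat foo                     # prints: Hello, World
```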
Now that we can create files, let's manipulate them. First, copy the file foo to a new file bar with cp foo bar. Run ls and you should see a new file appear. Check out its contents with cat. You may have also considered that you can do the same thing with cat foo > bar. This works for single files, but fails for directories, which cp can copy in full.
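A short sketch of the copy step; foo and bar follow the text, while the directory name somedir is just an illustration:

```shell
cd "$(mktemp -d)"            # scratch directory
echo "Hello, World" > foo

cp foo bar                   # copy foo into a new file bar
cat bar                      # prints: Hello, World

# cat foo > bar would do the same for one file, but only cp can
# copy a whole directory (the -r flag means recursive):
mkdir somedir
cp -r somedir somedir-copy
```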
When creating files and directories, avoid using spaces in their names at all costs. The shell sees spaces as the separator between commands and parameters, so spaces in file or directory names can cause unexpected behavior.
Of course, after learning how to create files, you'll realize you might want to delete them as well. In Linux, the command is rm, for remove. Enter rm foo to delete the original file. Run ls and see what changed.
Now, move the remaining file bar back to foo with mv bar foo, and run ls to see the difference. mv is more akin to renaming a file or directory than copying it and deleting the original, though the outcome is the same.
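The delete-and-rename sequence above, as one runnable sketch:

```shell
cd "$(mktemp -d)"            # scratch directory
echo "Hello, World" > foo
cp foo bar

rm foo                       # delete the original file
mv bar foo                   # mv renames bar back to foo in place
ls                           # only foo is listed now
```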
As in most operating systems, Linux has a hierarchical, tree-like structure of files and directories. In Linux, if you have a file foo inside a directory foos, you refer to it by the path foos/foo, with / separating each level of the hierarchy.
Now that you can create and move around files, let's talk about directories. Directory creation is done with the mkdir command, and removal is done with rmdir. Run mkdir foos to create a directory foos. Run ls, then move the file foo into it with mv foo foos/. The trailing / is actually unnecessary — mv foo foos would work just fine — but it has the added benefit of failing if foos does not exist or is not a directory. If foos were a regular file rather than a directory, mv foo foos would overwrite it.
Now that the new directory exists, move into it with the cd command. cd, for change directory, is the final tool for getting around in the system. It's used straightforwardly if you want to move into a directory: just type cd foos to do so now. Moving back up in the directory hierarchy is not so straightforward. Linux systems put two "fake" entries in each directory, . (a single period) and .. (two periods). These are pseudodirectories: . points to the directory it is in (more on that later), and .. points to the parent directory. So run cd .. and you will be back in your home directory.
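The directory steps can be sketched as one session (foos and foo are the names used in the text):

```shell
cd "$(mktemp -d)"            # scratch directory
echo "Hello, World" > foo

mkdir foos                   # create the directory
mv foo foos/                 # trailing / fails early if foos is not a directory
cd foos                      # move into it
ls                           # shows: foo
cd ..                        # .. is the parent directory; back where we started
```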
A final note about running commands in the shell: many programs (almost all of them) can take options that modify their execution. For example, ls -l performs the same basic task as ls alone, but it gives you a lot more information about the files and directories in your working directory.
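To see what an option changes, compare the two forms on a directory that exists on every system; /tmp here is just a convenient example:

```shell
ls /tmp       # names only
ls -l /tmp    # long form: permissions, owner, group, size, timestamp per entry
ls -ld /tmp   # -d describes the directory itself instead of its contents
```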
There are many behind-the-scenes environment aspects that Linux systems use to track things. Most of these are set up by the system when a user logs in. Some of these are the home directory ($HOME), the user's login name ($USER), and the search order used to find commands to run ($PATH). Others are things like aliased commands (shortcuts like ll are actually aliases for ls -l; more on that later). There is a command, env, that will show many of the current environment variables and their values.
env can also be used to set custom environment variables for a specific command run at the same time. An example of when to use this is testing an application against different versions of Java: by changing the JAVA_HOME variable to a non-default one, the application can easily be tested against other versions of Java.
env JAVA_HOME=$HOME/java-1.4 myapp will run the application myapp using the Java 1.4 found in the java-1.4 directory under the user's home directory.
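Since a java-1.4 install and a myapp binary are just names from the example, here is the same idea with a command every system has, showing that env sets the variable only for that one run:

```shell
# Run one command with an extra environment variable:
env GREETING="hello" sh -c 'echo "$GREETING"'   # prints: hello

# The current shell is unaffected afterwards:
echo "${GREETING-unset}"                        # prints: unset
```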
The environment variables are set by several processes during login. There is a system-wide default file, bashrc, that sets many of these values. There is a user-level version as well, in a hidden file called .bash_profile. A second hidden file, .bashrc, is used to set up personal aliases.
In the default RHEL environment, the way to set a custom environment variable is with a plain assignment followed by the bash export builtin. To remove it, use the unset builtin.
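A minimal sketch of setting, exporting, and removing a variable (MYVAR is an arbitrary example name):

```shell
MYVAR="some value"      # plain assignment: visible in this shell only
export MYVAR            # export: now copied into child processes' environments
sh -c 'echo "$MYVAR"'   # prints: some value

unset MYVAR             # remove it entirely
echo "${MYVAR-gone}"    # prints: gone
```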
- 1. Dangerous Commands or procedures
- 2. basic everyday commands
- 3. ssh, scp and rsync
- 4. grep
- 5. sed
- 6. awk
- 7. lsof
- 8. netstat
- 9. screen
- 10. chkconfig
- 11. mount
- 12. help
- 13. diff and patch
vi is a text editor. It will take a bit of time to learn how to use it effectively but that will pay off greatly.
Editing files with emacs
It seems there is a long running holy war about which editor is better. Many programmers prefer emacs. Most sysadmins prefer vi.
see LIN:vi first
But if you need an editor that will read you your email using a synthesized voice, emacs is a good choice.
bash is the default shell environment for RHEL.
Linux uses three main types of files
- regular - text files, directories, application binaries, etc
- links - soft (symbolic) and hard links both point to the real files but differ in use
- sockets and named pipes - used primarily as communication methods; sockets are external, named pipes are internal to the system
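The first character of each ls -l line reports an entry's type. A quick way to see several types side by side (all names here are arbitrary):

```shell
cd "$(mktemp -d)"        # scratch directory
touch regular            # - : regular file
mkdir dir                # d : directory
ln -s regular softlink   # l : symbolic link
mkfifo fifo              # p : named pipe
ls -l                    # the leading character of each line shows its type
```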
Linux systems were designed around networking as were the original UNIX systems.
cron is a way to have code run at a certain time on a regular basis. Log files get rotated daily by a system-wide cron job. Each user also has their own crontab.
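A personal crontab is edited with crontab -e; each line is five time fields followed by a command. The script path below is purely hypothetical:

```
# minute hour day-of-month month day-of-week  command
# run a nightly report at 02:30 every day (hypothetical script):
30 2 * * * /home/plucas6/bin/nightly-report.sh
```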
RPM is the core package management tool for RHEL. YUM is the tool used for updates and manual installs.
LVM is a way to manage storage space so the filesystem can grow as needed. In general, once the filesystem is established and all drives mounted, making changes is very difficult if a partition runs out of room.
Redundant Array of Inexpensive Disks (or Really Awful In Datacenter) is a process to provide larger and/or more redundant and/or faster data storage than single drives alone.
mdadm is the manager tool for Linux Software RAID.
SELinux is the bread-and-butter security model around here. semanage is the central command used to fully manage the entire SELinux policy and its contexts.
ls -Z will show the SELinux context of a file or directory.