.. _shell_stuff:

Useful Shell Stuff Primer
=========================

The shell is the usual interface for a Unix system, because it is what is
normally started when you log in. It is a user-space program that runs with
your permissions to create an environment for you to work in. There is much
more in either the bash or c-shell than this primer can possibly describe,
so one of the following commands will always be your friend::

    man bash
    man tcsh

but this primer covers the basis from which you may grow.

Access Permissions
------------------

* why Windoze prior to perhaps Windows 7 was insecure
* simple access model
* user/group/other, read/write/execute::

      >>> ls -l
      >>> drwxr-xr-- blah, blah directoryname   (directory bit set)
      >>> -rwxr-xr-- blah, blah filename        (directory bit unset)
      >>> .          directory bit
      >>>  ...       user permissions (read, write, execute in this case)
      >>>     ...    group permissions (read, execute in this case)
      >>>        ... all others permissions (read only in this case)

Three commands that govern access are chown, chgrp, and chmod. You should be
comfortable with them, including the -R option.
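For example, suppose you want to hand a directory of results to everyone in
your group to read. A minimal sketch of the usual pattern (the directory and
group names here are hypothetical):

>>> chgrp -R ourlab resultsdir   # give the group ownership, recursively
>>> chmod -R g+rX resultsdir     # group may read; capital X adds execute/search only to directories (and files already executable)
>>> chmod -R o-rwx resultsdir    # remove all access for everyone else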
Regrettably, the gpfs file system we now use also overrides some of these
behaviors with access control lists. Access to the access control lists is
restricted to The Powers That Be, so some ch{own,grp,mod} things you should
be able to do don't work on our file systems, and you can't find out why
without an adsurbed(\*) amount of trouble (\*thank you, Walt Kelly).

Globbing
--------

Assume you are in a directory whose contents are::

    NameB NameC NameC1 NameC2

Name{A,B,C?} expands to NameA NameB NameC1 NameC2; brace expansion does not
check whether the names it generates exist, so NameA appears even though no
such file is present.

Name[AB] finds either or both of NameA and NameB, but only if they exist, so
in this case it will expand to NameB.

Name? finds names with exactly one character after Name, so it will return
NameB NameC.

Name\* finds anything beginning with Name, so it will return NameB NameC
NameC1 NameC2.

\* and ? can be embedded anywhere in the string. Name\*C will expand to
NameC, but Name?C will not find anything.

Pipelines
---------

The vertical bar, `|`, is used to separate commands in a Unix pipeline. The
standard output of the command on the left side is fed to the standard input
of the command on the right side. Multiple commands can be concatenated via
vertical bars to create a pipeline of arbitrary length. There are no
intermediate files, because each step consumes the data from the previous
step without writing to permanent storage.

Here are some examples of shell pipelines helpful in SyQADA (note that the
*syqada tools* command now provides better information than the following
shell commands do).

Learn how many jobs have an error:

>>> wc -l batchdir/LOGS/*.err | grep -vw 0

Determine which jobs have a particular kind of error:

>>> grep -c 'some phrase that occurs in a particular error message' batchdir/LOGS/*.err | grep -v 0$

Learn how many jobs have that particular error:

>>> grep -c 'some phrase that occurs in a particular error message' batchdir/LOGS/*.err | grep -v 0$ | wc -l

.. _looping:

Looping Operators
-----------------

See *man bash* for details, but the construct

>>> for file in <list of files>; do command; done

is extremely powerful. Before *syqada tools*, I frequently ran commands like
this to see how many errors of a given type there were:

>>> for file in batchdir/LOGS/*.failed; do
>>>     echo $file:t:r ; tail -2 $file:r.err
>>> done

Event and Word Designation
--------------------------

You can repeat the previous command with

>>> !!

You can repeat the previous command that began with `e` with

>>> !e

The shell will show you what it's running, so this might be a sequence of
commands:

>>> echo fred*
fred1 fred2
>>> ls fred3
ls: fred3: No such file or directory
>>> !e
echo fred*
fred1 fred2

You can run a new command on the arguments of the previous command with

>>> new-command !:1 !$

where !:1 designates the previous command's first argument and !$ its last.
As an example, on the next command line, oops, I mistakenly rename a file.
So on the following line, I rename it back without retyping:

>>> mv file1 file2
>>> mv !$ !:1

Or consider

>>> mv file1 file2
>>> ls file2
>>> mv !-2$ !-2:1

in which I only realized that I had badly named the file after doing the ls,
so I looked back two commands into the history to restore the name. The next
line does exactly the same thing, but pulls the command back by its initial
character:

>>> mv !m$ !m:1

Word designation allows you to pull apart variables and commands in
remarkable ways. You saw one usage above in :ref:`looping`. There are four
operators that work on command-line arguments or on the variable in a for
loop:

* :h pulls off the directory portion of a path
* :t pulls off the filename (basename) portion of a path
* :r pulls off everything but the suffix
* :e pulls off the suffix

The following sequence illustrates a common sort of usage:

>>> ls qada/BatchRunner.py
qada/BatchRunner.py*
>>> echo !$:t:r.pyc
echo BatchRunner.pyc
BatchRunner.pyc

There is endless power in combining command designations with word
designations. Experiment. This allows you to repeat certain behaviors
rapidly to find and fix problems much faster. You do have to learn to use
caution, though, because you have to be sure what it is you are referring to
when you delve into the past of your command history. (One more small
sketch, showing the :h operator, appears at the end of this primer.)

About emacs
-----------

My sysadmin friends always say, "Why would I use emacs? My computer already
has an operating system." I, on the other hand, live in it. It is extremely
convenient to view code in one window while running a shell in a second
window, and to be able to cut and paste from one to the other using only the
keyboard. The Coke-bottle keycodes are definitely arcane magic, but the
power of constructively defining and calling macros by itself, plus the
embedded shell, and the fact that it is (now) installed by default on any
computer worth using, all make emacs worth knowing.

About Unix
----------

Unix is an Operating System, that is, a program that manages access between
the CPU and memory, storage, and I/O; secondarily, because it allows access
to I/O, it manages human access to the CPU. The operating principle of Unix
is KISS:

* simple "Monitor"-based operating system
* simple model for files
* kernel/user programs: kernel/user space
* CPU runs just as fast in user space as it does in the kernel
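As promised above, here is one final small sketch, showing the :h operator;
the path is hypothetical. After looking at a file buried deep in a directory
tree, you can move to its directory without retyping the path:

>>> ls /hypothetical/project/data/sample1.vcf
/hypothetical/project/data/sample1.vcf
>>> cd !$:h
cd /hypothetical/project/data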