Shell config subfiles

Large shell startup scripts (.bashrc, .profile) of more than about fifty lines, full of options, aliases, custom functions, and similar tweaks, can become cumbersome to manage over time. If you keep your dotfiles under version control, it’s also not terribly helpful to see long runs of commits that only ever edit the one file, when the history could be much more instructive with the configuration broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh) to load several files at the end of a .bashrc, for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us to those particular filenames; we still have to edit ~/.bashrc if we want to rename them, remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d:

$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash

With a slightly more advanced snippet at the end of ~/.bashrc, we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we’re done, otherwise it’ll be in the namespace of our shell where we don’t need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there’s at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.
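As a sketch of what those checks might look like, the loop could be arranged something like this (just one possible arrangement of the checks mentioned above):

# Load any supplementary scripts, if the directory exists
if [ -d "$HOME"/.bashrc.d ] ; then
    for config in "$HOME"/.bashrc.d/*.bash ; do
        # Skip anything that isn't a readable regular file, including the
        # unexpanded *.bash pattern itself if the directory is empty
        [ -f "$config" ] && [ -r "$config" ] && source "$config"
    done
    unset -v config
fi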

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d, if we want to write in POSIX sh, with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.
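How best to do that depends on your version control setup; if the dotfiles happen to live in a Git repository, for example, one possible arrangement is simply to ignore the whole directory, so that its contents stay local to each machine:

# In the dotfiles repository's .gitignore
.bashrc.d/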

I use an implementation of the above method in my own dotfiles, for both .bashrc and .profile.

Prompt directory shortening

The common default of some variant of \u@\h:\w\$ for a Bash prompt PS1 string includes the \w escape sequence, so that the user’s current working directory appears in the prompt, but with $HOME shortened to a tilde:

tom@sanctum:~$
tom@sanctum:~/Documents$
tom@sanctum:/usr/local/nagios$

This is normally very helpful, particularly if you leave your shell for a time and forget where you are, though of course you can always call the pwd shell builtin. However, it can get annoying for very deep directory hierarchies, particularly if you’re using a smaller terminal window:

tom@sanctum:/chroot/apache/usr/local/perl/app-library/lib/App/Library/Class$

If you’re using Bash version 4.0 or above (bash --version), you can save a bit of terminal space by setting the PROMPT_DIRTRIM variable for the shell. This limits the \w and \W expansions to the given number of trailing path elements:

tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$ PROMPT_DIRTRIM=3
tom@sanctum:.../App/Library/Class$

This is a good thing to include in your ~/.bashrc file if you often find yourself deep in directory trees where the upper end of the hierarchy isn’t of immediate interest to you. You can remove the effect again by unsetting the variable:

tom@sanctum:.../App/Library/Class$ unset PROMPT_DIRTRIM
tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$
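If you decide the setting suits you, making it permanent is just a matter of a single line in ~/.bashrc; the value of 3 here is only an example:

# Show at most the last three elements of the path in the prompt
PROMPT_DIRTRIM=3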

Temporary files

With judicious use of tricks like pipes, redirects, and process substitution in modern shells, it’s very often possible to avoid using temporary files at all, doing everything inline and keeping things quite neat. However, when wrangling a lot of data into various formats, you do occasionally find yourself needing a temporary file, just to hold data in passing.

A common way to deal with this is to create a temporary file in your home directory, with some arbitrary name, something like test or working:

$ ps -ef >~/test

If you want to save the information indefinitely for later use, this makes sense, although it would be better to give it a slightly more instructive name than just test.

If you really only need the data temporarily, however, you’re much better off using the temporary files directory. This is usually /tmp, but for good practice’s sake it’s better to check the value of TMPDIR first, and only fall back on /tmp as a default:

$ ps -ef >"${TMPDIR:-/tmp}"/test

This is getting better, but there is still a significant problem: there’s no built-in check that the test file doesn’t already exist, perhaps being used by some other user or program, particularly another running instance of the same script.

To that end, we have the mktemp program, which creates an empty temporary file in the appropriate directory for you without overwriting anything, and prints the filename it created. This allows you to use the file inline in both shell scripts and one-liners, and is much safer than specifying hardcoded paths:

$ mktemp
/tmp/tmp.yezXn0evDf
$ procsfile=$(mktemp)
$ printf '%s\n' "$procsfile"
/tmp/tmp.9rBjzWYaSU
$ ps -ef >"$procsfile"

If you’re going to create several such files for related purposes, you could also create a directory in which to put them using the -d option:

$ procsdir=$(mktemp -d)
$ printf '%s\n' "$procsdir"
/tmp/tmp.HMAhM2RBSO
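In a script, it’s also worth arranging for the temporary file to be removed once you’re done with it; a minimal sketch of one way to do that, using an EXIT trap, might look like this:

#!/bin/sh
# Create a temporary file for a process listing, and make sure it's
# removed again when the script exits, however that happens
procsfile=$(mktemp) || exit 1
trap 'rm -f "$procsfile"' EXIT

ps -ef >"$procsfile"
grep -c root "$procsfile"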

On GNU/Linux systems, old files in TMPDIR are cleaned up automatically (at boot, as configured in /etc/default/rcS on Debian-derived systems, or daily via /etc/cron.daily/tmpwatch on Red Hat ones), making /tmp useful as a general scratchpad as well as for a kind of relatively reliable inter-process communication, without cluttering up users’ home directories.

In some cases, there may be additional advantages in using /tmp for its designed purpose, as some administrators choose to mount it as a tmpfs filesystem, so that it operates in RAM and works very quickly. It’s also common practice to set the noexec flag on the mount to prevent malicious users from executing any code they manage to find or save in the directory.
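The details vary between systems, but an /etc/fstab entry for such a mount might look something like this (the size and options here are purely illustrative):

tmpfs   /tmp   tmpfs   defaults,noexec,size=512m   0   0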

Smarter directory navigation

Shell autocompletion of directory and file names makes navigating through your filesystem quite a bit quicker than if you had to type all of the paths in full, but there are a few other ways to make navigating through and between long pathnames on your filesystem a little bit easier.

cd

As a first note, typing cd with no arguments will take you straight back to your HOME directory; you don’t actually need to type cd ~ or cd $HOME in most cases.

cd ~username

You can move to any particular user’s HOME directory with the general form cd ~username.
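For example, assuming there’s a user named mike whose home directory is /home/mike:

$ cd ~mike
$ pwd
/home/mike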

cd -

To navigate to the directory you were in before the last cd call, you can use cd -. This will both move you back into that directory and print its full path for you.

$ pwd
/home/tom
$ cd projects
$ pwd
/home/tom/projects
$ cd -
/home/tom

pushd/popd

If you are working mostly within one directory and need to change to another briefly for some reason, rather than using cd you can use pushd to move to that directory and add it onto the directory stack. You can move to further directories with pushd successively, and then when you’re done in each one, popd will take you a step backward again. Each time you call pushd or popd, it will print the current contents of the directory stack:

$ pushd projects
~/projects ~
$ pushd /usr/local/bin
/usr/local/bin ~/projects ~
$ popd
~/projects ~
$ popd
~

CDPATH

By default, you can move around relatively in the filesystem tree by typing cd dir for any subdirectory of your working directory, rather than its full path. You can generalise this to provide a list of directories to check successively for matching subdirectories:

$ export CDPATH=.:~

In this case, if I typed cd projects and there were no subdirectory of the working directory named projects, but there was one called /home/tom/projects, I would be sent there instead. This probably isn’t a good thing to set in any non-interactive shell, but it can be a nice shortcut for interactive shells. Note that the first part of the list should always be ., for the working directory.

This has the side effect of printing the complete path of the directory you just moved to on a line before your prompt whenever CDPATH is used to find it. I happen to find this quite helpful.
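For example, with the value above set, and no projects subdirectory under the current working directory (the paths here are purely illustrative):

$ pwd
/usr/local/apache/conf
$ cd projects
/home/tom/projects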

Tolerate typos

In an interactive shell, if you make a small mistake in typing a path for cd that means it doesn’t resolve to an actual directory, it’s generally safe to have the shell correct mistakes for you and move you into the directory you most likely intended:

shopt -s cdspell

This will correct things like dropped or swapped characters in the path you typed:

$ cd /home/tmo
/home/tom

Variables

If you know you’re going to be switching back and forth between several directories, it often makes sense to put them into variables for quick changing, or even just editing files directly without having to move around at all:

$ acnf=/usr/local/apache/conf
$ abin=/usr/local/apache/bin
$ sanc=/var/www/sanctum
$ vim "$acnd"/httpd.conf
$ vim "$sanc"/index.html
$ cd "$abin"
$ ./apachectl configtest
$ ./apachectl restart

This last one may seem like a very elementary trick, or something that you’d typically only do within a shell script. However, if you use consistent variable names, or even define them in a startup file like .bashrc, it’s actually quite surprising how much time it saves; you start to realise just how much of it you were spending typing out paths.
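If there are paths you return to very often, defining the variables in ~/.bashrc makes them available in every new interactive shell; the names and paths below are just the hypothetical ones from the example above:

# Shortcuts for frequently used directories
acnf=/usr/local/apache/conf
abin=/usr/local/apache/bin
sanc=/var/www/sanctum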