These are my stupid unix tricks. I hope that they are useful to you.
I use Mac OS X (pron: “ten”). If you don’t, you might want to change the instances of ~/Library/ below to something suitable for your platform.
Before we begin, note that bashrc refers to something that runs in each and every new shell, while profile refers to something that runs only in login shells (the ones spawned by your terminal, for example, but not by a shell script). They aren’t the same, and you don’t want them to be the same.
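You can see the difference for yourself (a quick demonstration, assuming bash):

```shell
# a non-login shell, like one spawned by a script:
bash -c 'shopt -q login_shell && echo login || echo non-login'   # prints "non-login"

# a login shell, like the one your terminal spawns:
bash -lc 'shopt -q login_shell && echo login || echo non-login'  # prints "login"
```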
A lot of software and configuration wants to run some stuff in each new shell, and you want to add your own aliases and functions to your shell environment. Manually editing ~/.bashrc is a drag, as is grepping it to determine programmatically whether it’s been modified in a specific way. Make it modular instead:
```shell
mkdir -p ~/Library/bashrc.d ~/Library/profile.d
touch ~/Library/bashrc.d/000.keep.sh
touch ~/Library/profile.d/000.keep.sh

cat > ~/.bashrc <<'EOF'
# do not edit this file. put files in the dir below.
for FN in $HOME/Library/bashrc.d/*.sh ; do
    source "$FN"
done
EOF

cat > ~/.profile <<'EOF'
# do not edit this file. put files in the dir below.
source ~/.bashrc
for FN in $HOME/Library/profile.d/*.sh ; do
    source "$FN"
done
EOF
```
Now you can use standard tools, like `if [[ -e $HOME/Library/bashrc.d/111.whatever.sh ]]`, to add/test/remove things from your shell environment.
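For example, a snippet can be installed, detected, and removed with ordinary file operations (the filename and alias here are made up for illustration):

```shell
mkdir -p "$HOME/Library/bashrc.d"

# install: just drop a file in the directory
cat > "$HOME/Library/bashrc.d/200.myalias.sh" <<'EOF'
alias ll='ls -lAh'
EOF

# detect programmatically: no grepping of ~/.bashrc required
if [[ -e "$HOME/Library/bashrc.d/200.myalias.sh" ]]; then
    echo "snippet installed"
fi

# uninstalling is just: rm -f "$HOME/Library/bashrc.d/200.myalias.sh"
```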
Here are some of mine:
```shell
sneak@pris:~/Library/bashrc.d$ grep . 100*
100.caskroom-dest.sh:export HOMEBREW_CASK_OPTS="--appdir=$HOME/Applications"
100.gopath.sh:export GOPATH="$HOME/Library/Go"
100.homebrew-no-spyware.sh:export HOMEBREW_NO_ANALYTICS=1
100.homebrew-paths.sh:export PATH+=":$HOME/Library/Homebrew/bin"
100.homebrew-paths.sh:export PATH+=":$HOME/Library/Homebrew/sbin"
100.localbin.sh:export PATH+=":$HOME/Library/Local/bin"
100.localbin.sh:export PATH+=":$HOME/Library/Local/sbin"
100.yarnpaths.sh:export PATH+=":$HOME/.yarn/bin" # homebrew's yarn installs to here
```
Prefix them with numbers so that they sort and run in order; e.g. you want your bin paths (for python, yarn, etc.) added to your $PATH before you start trying to run things from within them.
Bonus points if you synchronize a directory (e.g. via dropbox/gdrive, or, better yet, via
syncthing like I do). My
~/.bashrc actually contains:
```shell
# do not edit this file. put files in the dir below.
for FN in $HOME/Library/bashrc.d/*.sh ; do
    source "$FN"
done

for FN in $HOME/Documents/sync/bashrc.d/*.sh ; do
    source "$FN"
done
```
This way I can add aliases and environment variables in
~/Documents/sync/bashrc.d/ and they magically appear on all of my machines without additional configuration.
Wrap your startup script commands in checks to prevent errors if things aren’t installed or available, e.g.:
Don’t try to install things with brew if brew is not installed:
```shell
if which brew >/dev/null 2>&1 ; then
    brew install jq
fi
```
Set your GOPATH only if the directory exists on the machine in question:
```shell
if [[ -d "$HOME/dev/go" ]]; then
    export GOPATH="$HOME/dev/go"
fi
```
This way you can put things that need to happen into startup scripts (e.g. installation of jq for subsequent commands in the script to work) and they won’t error out if files or directories aren’t installed yet.
Another example: this loads bash completion for kubectl, but only on systems that have kubectl installed and in the $PATH:
```shell
if which kubectl >/dev/null 2>&1 ; then
    source <(kubectl completion bash)
fi
```
An ssh key on disk, even with a passphrase, is vulnerable to malware (malware can steal your files, and keylog your passphrase to decrypt them). Put your ssh private keys somewhere that software (any software, even your own) on your computer simply cannot access them.
The best way to store SSH private keys is in a hardware security module, or HSM.
I use a Yubikey 4C Nano (though the Yubikey 5C Nano is current now), via
gpg-agent. I have one physically installed in each computer I regularly use, plus a few spares stashed in safe places offsite. I generated the keys on the devices, did not back them up at generation time (so now they can’t ever be exported from the devices at all), and each device has its own unique key. (They have the added benefit of serving as U2F tokens for web authentication, something you absolutely should be using everywhere you can.)
gpg-agent is a small daemon that is part of GnuPG that runs locally and allows you to use a GnuPG key as an SSH key. GnuPG supports using a smartcard as a GnuPG key. Yubikeys can serve as GnuPG-compatible
CCID smartcards. This means that your Yubikey, in
CCID mode, can be used to authenticate to SSH servers.
To initialize a key on the card, use the instructions found in this guide. I do not recommend setting an expiration (as they suggest), and don’t put your real name/email on the GnuPG keys it generates, as these are not going to be used for normal GnuPG-style things.
The author of that tutorial has a slightly different (perhaps better) take on what to put in your
.bashrc to use the card for ssh. Mine is below.
Set it up to use your GPG smartcard to authenticate to remote hosts by dropping a file in your modular profile.d directory:
```shell
cat > ~/Library/profile.d/900.gpg-agent.sh <<'EOF'
# check for existing running agent info
if [[ -e $HOME/.gpg-agent-info ]]; then
    source $HOME/.gpg-agent-info
    export GPG_AGENT_INFO SSH_AUTH_SOCK SSH_AGENT_PID
fi

# test existing agent, remove info file if not working
ssh-add -L 2>/dev/null >/dev/null || rm -f $HOME/.gpg-agent-info

# if no info file, start up potentially-new, working agent
if [[ ! -e $HOME/.gpg-agent-info ]]; then
    if which gpg-agent >/dev/null 2>&1 ; then
        gpg-agent \
            --enable-ssh-support \
            --daemon \
            --pinentry-program $(brew --prefix)/bin/pinentry-mac \
            2> /dev/null > $HOME/.gpg-agent-info
    fi
fi

# load up new agent info
if [[ -e $HOME/.gpg-agent-info ]]; then
    source $HOME/.gpg-agent-info
    export GPG_AGENT_INFO SSH_AUTH_SOCK SSH_AGENT_PID
fi
EOF
```
Once you have generated a key on your card and started the agent, `ssh-add -L | grep cardno` will show you the ssh public key from the key on your Yubikey, e.g.:
```shell
sneak@pris:~$ ssh-add -L | grep cardno
ssh-rsa AAAAB3NzaC1yc2EAAAA....VCBZawcIANQ== cardno:000601231234
sneak@pris:~$
```
Save the GnuPG public keys from all the cards and export them as a single ascii-armored bundle (
gpg -a --export $KEYIDS > keys.txt) which you save somewhere easily accessible. You can then use a tool like mine to easily encrypt data for safekeeping that you will always be able to decrypt should you have at least one of your HSMs around.
Two options: github or self-hosted. The github option has some drawbacks, but is fine for most people (other than the fact that you should not be using GitHub at all, for anything).
Add all of your ssh public keys from your various HSMs (you should have one for each computer you type on) to your GitHub account. GitHub publishes everyone’s public keys at
https://github.com/username.keys (here’s mine).
Then, on new systems, simply paste this line (substituting your own username, of course):
```shell
mkdir -p ~/.ssh
curl -sf https://github.com/sneak.keys > ~/.ssh/authorized_keys
```
You may be tempted to crontab this like I was, so that the keys on all of your machines are automatically updated on adds/removes to the master list. If you do so, you give anyone who controls the
github.com domain the ability to add ssh keys to your machines automatically. You may or may not be okay with this—I am not, considering Microsoft (GitHub’s owner) is a giant military defense contractor and eager partner in the US military’s illegal bulk surveillance programs.
Note: if you do end up running it from cron, be sure to check the return value of curl before replacing the file (i.e. don’t use the line above unmodified). Otherwise, if the network is down when cron runs, it will clobber your file without refilling it, leaving your authorized_keys file empty.
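A cron-safe version might look like this sketch (the function name and paths are mine, for illustration):

```shell
# only overwrite the destination when curl succeeds AND the
# fetched file is non-empty; otherwise leave the old file alone
update_authorized_keys() {
    local url="$1" dest="$2" tmpf
    tmpf="$(mktemp)"
    if curl -sf "$url" -o "$tmpf" && [[ -s "$tmpf" ]]; then
        mv "$tmpf" "$dest"
    else
        rm -f "$tmpf"
    fi
}

# e.g. from cron:
# update_authorized_keys https://github.com/sneak.keys "$HOME/.ssh/authorized_keys"
```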
Personally, I like to have ssh keys that have access to GitHub (non HSM keys, such as on my Pixelbook which sadly doesn’t support Linux USB passthrough for the Yubikey smartcard) that don’t also have root on my machines, so I maintain a separate list on my own website:
Then, on new boxes, I just paste the following:
```shell
mkdir .ssh
cd .ssh
mv authorized_keys authorized_keys.orig
wget https://sneak.cloud/authorized_keys
```
Docker Desktop for Mac is closed source software, which is dumb for something that asks for administrator permissions on your local machine. That lameness aside, it runs the docker daemon (not the command line client) inside a linux VM on your local machine, which is probably a relatively slow laptop on a not-great internet connection.
I have many big, fast computers on 1 or 10 gigabit connections that I can use via SSH that are better for building docker images or testing Dockerfiles (I do all of my editing and
giting and suchlike on my local workstation, because my signing keys and editor config are all here).
```shell
cat > ~/Library/bashrc.d/999.remote-docker.sh <<'EOF'
alias ber1.docker="/bin/rm -f $TMPDIR/docker.sock; ssh -nNT -L $TMPDIR/docker.sock:/var/run/docker.sock email@example.com & export DOCKER_HOST=unix://$TMPDIR/docker.sock"
EOF
```
The preceding uses ssh to forward a local unix socket (a type of special file) to a remote server (in this example, ber1.example.com), which your local docker command can then use to talk to the remote docker daemon (by locating the socket via the DOCKER_HOST environment variable). You’ll want to change the
ber1.docker part to whatever you want the command to be to enable your remote docker, and the
firstname.lastname@example.org part to the username and hostname of the remote machine you wish to use. (It needs to be running ssh and docker already.)
Once you run one of those aliases (they have to be aliases instead of scripts because they need to modify the environment of your existing, running shell) you should be able to use all normal docker commands (e.g.
docker build -t username/image /path/to/dir) just as if you were on the docker host itself. This makes it pretty simple to do a
docker build . in a directory in which you’ve been hacking, but leveraging all of the power of a big, fast machine in a datacenter. You’ve never seen
apt update in your Dockerfile go so fast.
Security warning: anyone who can read and write from the local socket on your workstation (probably just you, but worth mentioning) has root on the remote server, as API access to a remote docker daemon is equivalent, from a security and practical standpoint, to root on the docker host itself.
Newer versions of docker also support ssh:// URLs in DOCKER_HOST directly, no socket forwarding required:

```shell
export DOCKER_HOST=ssh://email@example.com
docker ps
```
or, to persist:
```shell
echo 'export DOCKER_HOST=ssh://firstname.lastname@example.org' > ~/Library/bashrc.d/999.remote-docker.sh
```
I have a git repository called
hacks into which I commit any non-secret code, scripts, snippets, or supporting tooling that isn’t big or important or generic enough to warrant its own repo. This is a good way to get all the little junk you work on up onto a website without creating a billion repositories.
You might use gmail and access it with a mail client via IMAP. Use offlineimap to periodically back it up to files in a local maildir-format directory, so that if your Google account should evaporate through no fault of your own, you don’t lose decades of email. (Sync it to other machines via
syncthing to avoid losing data via disk crash or hardware theft, or put it somewhere that your local workstation backups will pick it up.)
I back up my local machine to a remote (encrypted disk) server via SSH using rsync via this script. It looks at the environment variable
$BACKUPDEST to figure out where to do the backup.
For a remote ssh backup, do:
```shell
echo 'export BACKUPDEST="email@example.com:/path/to/destination"' > ~/Library/bashrc.d/999.backup-destination.sh
```
For a local drive:
```shell
echo 'export BACKUPDEST="/Volumes/externaldisk/mybackup"' > ~/Library/bashrc.d/999.backup-destination.sh
```
Then use the above script. Copy it to your local machine and edit the backup exclusions as required for your use case.
If you trust other companies with your data and want something more user-friendly, check out BackBlaze, as they’re cheap and excellent and offer unlimited storage.
(I also use the macOS built-in backup called Time Machine to back up to an external USB drive periodically, but I don’t trust it.
syncthing is my first-line defense against data loss, my
rsync backups are my second, and the Time Machine backups are just a safety net.)
I have a
Makefile in my home directory (really just a symlink to
~/dev/hacks/homedir.makefile/Makefile; it officially lives in my
hacks repository) that I use to store common tasks related to my local machine (many of which are somewhat out of date, I note now on review).
The one I use most often, though, is
make clean, which takes everything in
~/Desktop and moves it into
~/Documents/$YYYYMM (creating the month directory in the process if it doesn’t exist), and also empties trashes. This alone is worth the price of admission to me.
I should prune it of old/outdated commands and update it for my current/latest backup configuration. In my ideal world,
make in my home directory would empty trashes, clean up the desktop, download/mirror all my IMAP email to local directories, then run a full backup to a remote host.
I have a symlink,
~/dev, in my home directory, that points to a subdirectory of my synced folder,
~/Documents/sync, into which I check out any code I’m working on. I rarely edit code outside of
~/dev. This way, even if I don’t remember to commit and push, my current working copies are synced shortly after save to several other machines. I wouldn’t lose much if you threw any of my machines in the river at any time.
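The setup is just a one-time symlink (paths as described above):

```shell
# put checkouts in the synced folder, reachable via ~/dev
mkdir -p ~/Documents/sync/dev
ln -sfn ~/Documents/sync/dev ~/dev
```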
Your ssh client config lives at ~/.ssh/config.
The user ssh client config file is amazing, and you should be using it extensively.
ssh_config(5) has more info (run
man 5 ssh_config).
Here’s the basic format:
```
Host specific.example.com
    SpecificHostnameParameter

Host *.example.com
    ExampleDotComParameter
    ExampleDotComParameter

Host *
    GlobalParameter
    GlobalParameter
```
In this way, you can specify new default settings for all ssh commands, and then override them on a specific wildcard/host basis.
e.g. to always ssh as root:
```
Host *
    User root
```
Note that (I’m told) ssh reads each setting in the order it is found in the file, and later items cannot override earlier ones, so specify them from most-specific to most-generic (putting the Host * section at the end), allowing host- or domain-specific items to come before your defaults.
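For example, with this ordering, build.example.com (an illustrative hostname) gets its own user while everything else falls through to the default:

```
Host build.example.com
    User builder

Host *
    User root
```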
In the following example,
~/Documents/sync is a synced directory that replicates automatically across all my workstations using
syncthing. (You should use syncthing.) You could also use Google Drive or Dropbox if you want to give third parties that much control over your machine, or knowledge of your hostnames/habits.
```shell
mkdir -p ~/Documents/sync/dotfiles
mv ~/.ssh/config ~/Documents/sync/dotfiles/ssh_config
ln -s ~/Documents/sync/dotfiles/ssh_config ~/.ssh/config
```
On the other machines, just:
```shell
rm ~/.ssh/config
ln -s ~/Documents/sync/dotfiles/ssh_config ~/.ssh/config
```
Now, settings changes for ssh automatically propagate to all workstations.
You could do the same for your
known_hosts file to sync host key fingerprints between all of your machines, too, but I don’t bother, as I find TOFU sufficient.
Put the following in your
Host * section:
```
Host *
    Ciphers aes128-ctr
```
It’s my understanding that the counter mode is more efficient on modern, multicore CPUs, as it is easier to parallelize.
Put the following in your
Host * section:
```
Host *
    ControlPath ~/.ssh/%C.sock
    ControlMaster auto
    ControlPersist 10m
```
Make sure you use %C (hash of username+hostname) as the filename token instead of %h (hostname) or whatever else other tutorials on the internet told you. I ran into issues using the other format, whereas this one uses just [a-f0-9] in the first part of the filename.
This will maintain a connection to each host you ssh into for 10 minutes after idle. Any future ssh connections while the first is open (or within that 10 minute window) will re-use the existing TCP connection, which speeds things up a lot.
Security notice: anyone who can write to these socket files (probably just you) has full access to the hosts to which they are connected.
Have some machines that aren’t in DNS, or have stupid hostnames that you can’t remember? Using IPs is a terrible smell that you should always avoid. Rewrite them by overriding their connection hostname:
```
Host workbox.example.com
    HostName 220.127.116.11

Host otherbox
    Port 11022
    User ec2_user
    HostName real-hostname-is-long-and-dumb.clients.hostingprovider.su
```
Now you can just `ssh otherbox`. Sure beats `ssh -p 11022 firstname.lastname@example.org`!
In this way, your ssh config file functions as a sort of local dns database.
You can use the
ProxyCommand directive to tell ssh how to get i/o to a remote ssh service, skipping the whole direct TCP connection process entirely. You can use this for connecting transparently via a bastion host, e.g.:
```
Host *.internal.corplan
    ProxyCommand ssh email@example.com nc %h %p
```
Using the preceding, `ssh box1.internal.corplan` will first ssh to bastionhost.example.com and run netcat there with `nc box1.internal.corplan 22` (%h and %p are replaced with the destination host and port of the “main” ssh connection, i.e. the ones you typed or implied on the command line (box1)).
If you don’t have a nice organized corporate naming scheme, or even DNS at all, you can hardcode the values:
```
# 18.104.22.168 is the bastion host
Host box2.internal.corplan
    ProxyCommand ssh firstname.lastname@example.org nc 10.0.1.102 22

Host box3.internal.corplan
    User appuser
    ProxyCommand ssh email@example.com nc 10.0.1.103 22
```
Alternately, combine them:
```
Host bastion.corpext
    HostName 22.214.171.124
    User myuser

Host box2.internal.corplan
    ProxyCommand ssh bastion.corpext nc 10.0.1.102 22

Host box3.internal.corplan
    ProxyCommand ssh bastion.corpext nc 10.0.1.103 22
```
Finally, I used nc (netcat) to illustrate the example, but it turns out that the ssh command has this functionality built in (as -W), removing the need to have netcat installed on the bastion (-T tells it not to allocate a pty):
```
Host box2.internal.corplan
    ProxyCommand ssh firstname.lastname@example.org -T -W 10.0.1.10:22
```
The beauty of setting up key-based SSH and configuring your hosts in your ssh client config file is that then commands such as:
```shell
rsync -avP ./localdirectory/ otherhost:/path/to/dest/
```
...will “just work”, even if the machine is behind a bastion host, needs a special SSH port or a different username, or is accessed via tor. You no longer need to think about the specifics of each ssh host (other than the hostname); it all just lives in your config file.
This also allows you to use the ssh/scp support in your local editor (vim does this, for example) to edit files on remote machines (in a local editor without keyboard lag) that might be a pain in the ass to ssh into due to being behind firewalls or bastion hosts, or on weird ports. Put the specifics in the config file, then it’s as simple as
vim scp://internalbox1.example.com//etc/samba/smb.conf (two slashes between hostname and absolute path for vim’s scp support, mind you).
I like to install
tor on boxes I administrate, and set up a hidden service running on them for ssh, because then I can ssh into them (albeit slowly) even if they have all inbound ports firewalled, or are behind NAT, or whatever—no port forwarding required.
Install tor on the server, add the following two lines to
/etc/tor/torrc, restart tor, and now you have a hidden service address for that system:
```shell
apt update && apt install -y tor
cat >> /etc/tor/torrc <<EOF
HiddenServiceDir /var/lib/tor/my-ssh/
HiddenServicePort 22 127.0.0.1:22
EOF
service tor restart
cat /var/lib/tor/my-ssh/hostname
```
If you don’t want the ssh service to be reachable even from the lan/wan (only via the hidden service), add a
ListenAddress 127.0.0.1 to
/etc/ssh/sshd_config and bounce sshd.
For the following to work, you have to be running tor on your local machine too (which provides a SOCKS5 proxy at 127.0.0.1:9050).
Two parts are required:
```
Host *.tor
    ProxyCommand nc -x 127.0.0.1:9050 %h %p
```
Then, for each host:
```
Host zeus.tor
    User myuser
    HostName ldlktrhkwerjhtkexample.onion
```
Then, I can just `ssh zeus.tor` and it will match the *.tor pattern, use netcat to talk to the local SOCKS proxy provided by the tor daemon on localhost to connect to the .onion, and pick up the .onion hostname and username for that specific box from the full host string match.
I actually use my ssh config file as my master record of the onion hostnames of my machines. (This is one reason why syncing with
syncthing is vastly preferred to using a file syncing service that gives third parties access to your files. I would prefer that nobody know what hidden services I am interested in or are associated with me, for privacy’s sake.)
Ever want to access a machine behind several NATs/firewalls from a publicly-accessible port? You can use this for access to SSH, or any other service running on the ssh client machine, like a development webserver. First, set up unattended key authentication from the target machine (behind the firewall) to the reachable public machine: generate an ssh key without a passphrase on the target machine, create an unprivileged user on the public machine, and add that public key to ~/.ssh/authorized_keys for that unprivileged user. The public machine also needs GatewayPorts yes in its /etc/ssh/sshd_config (which is not the default), so it requires a small configuration change to get working.
Then, set the following command to run continuously on the target machine:
```shell
ssh -T -R 0.0.0.0:11022:127.0.0.1:22 user@publichost
```
In the above example, port 11022 on the public host (externally available) connects to the target machine’s ssh port (22) for as long as the ssh command is running. This is a quick hack to make a service behind firewalls, or on your local workstation, publicly accessible, along the lines of what ngrok does, but re-using a server you already have.
To expose a local development webserver:
```shell
ssh -T -R 0.0.0.0:80:127.0.0.1:8080 root@publichost
```
(You need to use root to bind to ports under 1024, such as 80.)
Nicholas at hackernotes.io has more information on the technique, including a way to restrict this type of remote port binding to a specific user.
Thanks to HN user
roryrjb, Peter Fischer, and James Abbatiello for all submitting bug reports for this post, all of which have been incorporated in small edits. This once again illustrates that the best way to get a thorough list of errors and required corrections is to speak authoritatively about something in public. :D
Think I’m right? Think I’m totally wrong? Find a bug in a snippet? Complaints, suggestions, and praise to email@example.com.