Happy Thanksgiving and the key to understanding

I’ve spent the last six months getting deeply into programming again, and over and over I find that I learn differently than the way most tutorials teach. So here’s a quick list of new things I’ve learned, along with the fundamental idea that (at least for me) cracked the nut; usually there are just one or two important ideas per topic.

Docker: Separation and the Daemon

This is the big one. The tutorial is at once simple and confusing because it is so unclear how it all works. Yes, there are containers, but the biggest thing the tutorials do not explain is the Docker daemon. If you understand this one idea, then figuring out Docker is pretty easy (at least it was for me).

Like all virtual systems, Docker has that strange property that when you type “docker run” things look like your machine, but you actually have a different file system. It isn’t hard, but basically you live in a parallel universe, and you can connect back to your base system with a simple “docker run -v /usr/share:/home/rich/share”, which mounts the host’s /usr/share at /home/rich/share inside the container, so what looks like container-private space is actually your local directory.
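A minimal sketch of the volume idea (the image name and paths here are just examples, and you need Docker installed for this to actually run):

```shell
# Mount the host's /usr/share read-only at /hostshare inside the
# container, then list it from the container's point of view:
docker run --rm -v /usr/share:/hostshare:ro alpine ls /hostshare

# Without :ro, writes flow the other way too: creating a file under
# /hostshare inside the container creates it on the host.
```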

The second idea is the Dockerfile and docker push/pull. A Dockerfile is like a shell script that configures your container. You check it into git, and when you feed it to docker build you create an image, which you can then push. Docker always starts as root, so the main thing to get over is that you have to recreate all the users in your system.
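Here is a toy Dockerfile, written inline for the example (the base image, user name, and image tag are all made up). Note that because builds run as root, the non-root user has to be created explicitly:

```shell
# Write a minimal Dockerfile into the current directory.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN adduser -D rich        # recreate your user inside the image
USER rich
WORKDIR /home/rich
CMD ["sh"]
EOF

# Build and (optionally) publish -- requires Docker and a registry login:
# docker build -t myname/myimage:latest .
# docker push myname/myimage:latest
```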

The concept is buried in a docker-in-docker article, but the simple point is that on every machine there is a single Docker daemon that controls all the containers. (Earlier versions had a different concurrency model, but this is more maintainable.) So this explains what is happening with a Dockerfile: when the daemon (a superuser kind of thing) gets control, it receives a zipped image of the entire directory where the Dockerfile lives and processes from there as its world. You can access the daemon by talking to it on a well-known socket if you have the right credentials. When you do a docker pull, it is this one daemon, shared across all the users on the machine, that does the caching. And because you can connect to that daemon, a Docker container can create other containers across the internet, or side by side. This is also what allows docker-machine to work: when you type “docker”, it adjusts which Docker daemon you are talking to, pointing you at a different machine.
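The socket idea is easy to see for yourself (this assumes Docker is installed and you have permission on the socket; the remote host name is hypothetical):

```shell
# The daemon listens on a Unix socket; the docker CLI is just a client.
# You can hit the same HTTP API directly:
curl --unix-socket /var/run/docker.sock http://localhost/version

# docker-machine style redirection is just an environment variable:
# export DOCKER_HOST=tcp://build-box.example.com:2376   # hypothetical host
# docker ps    # now lists containers on the remote daemon
```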


Bash: Streams and Return Codes

This is one of those programming languages that no one likes but everyone has to use. It is natural because it is the simplest way to glue together commands.

The main thing to realize about Bash is that any time you are writing a loop (for i in . ./lib ../lib; do source $i/foo.sh; done), you have probably done something wrong. Bash is all about finding commands that iterate for you and return a stream of text, so the game is turning procedural code into a stream. The above can be rewritten as source "$(find . ./lib ../lib -maxdepth 1 -name foo.sh | head -n1)", which sources the first foo.sh it finds. (Note that source is a shell builtin, so it cannot sit at the end of an xargs pipe; you hand it the result with command substitution instead.)
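A self-contained version of that pattern, using a temp directory so it runs anywhere (the foo.sh file and FOO_LOADED variable are invented for the example):

```shell
# Set up a fake search path with one foo.sh in it.
workdir=$(mktemp -d)
mkdir -p "$workdir/lib"
echo 'FOO_LOADED=yes' > "$workdir/lib/foo.sh"

# find emits candidates as a stream; head takes the first match;
# command substitution hands the path to the builtin source.
first=$(find "$workdir" "$workdir/lib" -maxdepth 1 -name foo.sh | head -n1)
[ -n "$first" ] && source "$first"

echo "FOO_LOADED=$FOO_LOADED"   # prints FOO_LOADED=yes
rm -rf "$workdir"
```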

As for subtlety, there is one key idea in bash: every command has a return code, and this is always what drives what happens next. If you have set -e on, it will stop the script if you have even one bad exit code. Even something like ((i++)) has both a value and a return code. When does this bite you? Well, in the middle of a long pipe you can have a bad exit code and the whole thing crashes, so watch your error codes! (By default a pipeline reports only the last command’s status; turn on set -o pipefail to surface failures earlier in the pipe.)
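Two of the classic return-code gotchas, runnable as-is in bash:

```shell
# 1. A pipeline's status is the LAST command's status unless
#    pipefail is on; with it on, the earlier failure shows through.
set -o pipefail
if false | true; then pipe_status=0; else pipe_status=$?; fi
echo "pipeline status with pipefail: $pipe_status"   # prints 1
set +o pipefail

# 2. ((i++)) evaluates the OLD value: when i is 0 the expression
#    is 0, which counts as a failing exit status -- deadly under set -e.
i=0
if ((i++)); then inc_status=0; else inc_status=1; fi
echo "i=$i, exit status of ((i++)): $inc_status"     # i=1, status 1
```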

  • Objective C. The main thing is that everything is an interface, so you have to be careful about declaring them and making sure you use the right one. Not complicated in theory, but man, complicated in practice.

  • Python. You can think of it as a simple procedural programming language, which is how most uses of it go. Sort of like bash but longer; the real power is in its string and array functions.
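A one-liner from the shell (assuming python3 is on your path) gives the flavor of those string and list built-ins:

```shell
# Split a string into a list, sort it, and join it back up --
# the kind of thing that takes one line in Python.
result=$(python3 -c 'print("-".join(sorted("banana apple cherry".split())))')
echo "$result"   # prints apple-banana-cherry
```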