Black Magic – Demystifying the Command Line Interface


In a previous post, I wrote about how I migrated to Linux after many years running Windows. For most computer users with longstanding Windows experience, one of the biggest gripes against Linux is that it is not user friendly. For many such people, the mentality is based on hearsay, but for some, it is based on actual experience: What do you mean sudo?


Few things strike fear into the hearts of modern Windows users like the CLI, and it’s easy to see why. Microsoft has come a long way with its flagship product, the Windows OS. Each new version has endeavored to give the user a more pleasant experience than the last. It’s primarily the reason for the product’s massive popularity, in spite of its price, security vulnerabilities and other flaws. The Windows desktop user has become accustomed to the easy life. Pointing and clicking is the system’s lingua franca. Application software developers have not “helped” the situation. Nearly every useful program, however cryptic, has a Windows GUI provided either by its vendor or by a third party in an effort to make it more accessible. These advances have gradually led to the misguided notion that the CLI is an extinct creature from the gruesome past of computer evolution.

Pretending that the CLI does not exist is perfectly harmless for the average computer user. I would even go as far as recommending it. However, any power user or software developer who avoids the CLI is doing himself a huge disservice. I should know; I avoided it for the longest time. The CLI opens up possibilities that no GUI could ever provide. It lets you perform long and tedious tasks quickly. It lets you automate repetitive tasks. It lets you work the same way across different flavors of Unix. It lets you get really intimate with the machine.


After running Linux as my primary OS for quite a while now, I thought I should share my experience and the lessons that I have learnt growing familiar with the CLI.

Linux is the opposite of Windows in many ways. While Windows prides itself on its user friendliness and accessibility, Linux is unapologetically geeky, preferring instead to be viewed as a lean and efficient powerhouse whose singular mission in life is to get things done. To be sure, many projects have evolved over the years with the goal of making Linux a more accessible, general purpose operating system. The most prominent example is without a doubt the ever so popular Ubuntu Linux by Canonical Ltd. Yet Linux can never quite seem to shed its heritage as a programmer’s plaything, and as with Windows, this culture pervades the entire ecosystem. There are many Linux programs with which interaction is only possible through the CLI or clunky, halfhearted graphical user interfaces.

A new Linux power user or software developer quickly discovers that unlike on Windows, there is no wishing away the CLI on Linux. The reaction is usually one of disbelief, even indignation. But eventually the cold, hard reality settles in. Some take off, whimpering and kicking their way back to Windows. A few brave ones stay. If you are staying, here are six tips to make your CLI journey less grueling and more fulfilling.

1. Do not attempt to memorize commands

Besides cd and ls, there aren’t many other commands that you should deliberately try to memorize. You should let your subconscious handle everything else. I find it does a pretty decent job of helping you remember the ones you use most. All the rest can always be easily found, even without any external reference. I will show you how, shortly.

2. Understand the anatomy of commands

Some commands appear cryptically long and impossible to remember. They are, so do not try to remember them. But they can be understood. Shell programs are written to do one thing and do it well. They read from standard input (by default, the keyboard), perform some processing and write to standard output (by default, the screen). They may also accept options, which in turn may take arguments. To achieve something more complex, commands may be chained together using pipes (|). Each subsequent command then gets its input from the output of the one preceding it. Input and output may also be redirected to other places besides standard input and standard output, to files for instance, using the redirection operators (<, > and >>). Here’s a simple example:

echo cli is fun | cut -c1-3 >> cli.txt

In this example, we use two programs, echo and cut. The echo program accepts input from the keyboard, i.e. the line “cli is fun”, and outputs the same to standard output. This output becomes the input of the cut command. The cut command has one short option specified, -c, which may also be expressed as the long option --characters. For most modern programs, short options are prefixed with a single hyphen and long options are prefixed with double hyphens. Prefixing a long option with only a single hyphen leads to expansion, so that cmd -options expands to cmd -o -p -t -i -o -n -s, handing the cmd command seven short options instead of one long one. In this case, the -c option takes an argument of the form N-M, meaning cut from the Nth character to the Mth character. The output of the cut command therefore becomes just “cli” and is appended to a file named cli.txt by way of output redirection.
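To make the anatomy concrete, here are a few variations on the example above (the long-option form assumes GNU cut; some BSD variants may lack it), followed by the three redirection operators:

```shell
# The same cut invocation, three equivalent ways:
echo "cli is fun" | cut -c1-3              # short option, argument attached
echo "cli is fun" | cut -c 1-3             # short option, argument separated
echo "cli is fun" | cut --characters=1-3   # long option (GNU cut)

# Redirection: > creates/truncates a file, >> appends, < feeds a file to stdin
f=$(mktemp)
echo "first"  >  "$f"     # overwrite
echo "second" >> "$f"     # append
wc -l < "$f"              # the file becomes wc's standard input; prints the line count, 2
rm "$f"
```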

Nearly all CLI commands conform to the above pattern. Some programs also group their functionality into subcommands, i.e. words that follow the program name and give the command its context. For example, all Git operations such as add, commit and pull are subcommands of the git program, e.g. git add, git commit or git pull.

3. Know how to find help

As I already mentioned, you should not try to memorize CLI commands. Many CLI programs ship with detailed manual pages that explain what they do and how they should be used. The CLI provides some very handy programs that let you easily get help from right within the terminal. In my opinion, these are the first commands any CLI newcomer should learn.

type: tells you the type of a given command

whatis: gives you a short description of what a given command does

man: lets you access the entire manual page for a given command

apropos: searches names and descriptions of manual pages

help: displays help for shell builtins
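Here is what a quick session with these helpers might look like (the exact output wording varies by system, and whatis, man and apropos need the manual-page database installed):

```shell
whatis ls                    # one-line description, e.g. "ls (1) - list directory contents"
apropos rename | head -n 3   # manual pages whose name or description mentions "rename"
man ls | head -n 5           # the full manual page (interactively, press q to quit)
help cd | head -n 3          # documentation for builtins; help itself is a bash builtin
type cd                      # what kind of command is this? "cd is a shell builtin"
```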

4. Use aliases

Much of the power of the CLI comes from the fact that simple commands can be chained together to do more complex things. After using the CLI for a while, everyone finds there are commands that they type often. Such commands, when long or difficult to remember, make good candidates for aliasing. Aliasing lets you invoke a command or chain of commands by typing a shorter, more memorable name. For example, to print my wireless LAN IP address I use:

ifconfig wlan | grep inet | grep -v inet6 | awk '{print $2}'

This is obviously unwieldy and unnecessary to memorize, so I have it aliased to a much shorter name: ipad. I also have aliases for Git, Maven, SSH and many other commands that I run frequently.
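For illustration, here is how such aliases might be defined; add them to ~/.bashrc (or your shell’s equivalent) to make them permanent. The alias names and the interface name wlan0 are assumptions, yours may differ:

```shell
# Shorter, memorable names for longer commands
alias ll='ls -lh'
alias gs='git status'

# The wireless-IP pipeline from above, wrapped up (wlan0 is an assumed interface name)
alias ipad="ifconfig wlan0 | grep inet | grep -v inet6 | awk '{print \$2}'"

alias        # with no arguments, lists every alias currently defined
unalias gs   # removes an alias for the current session
```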

5. Create functions

Sometimes the commands you need must be executed one after another rather than chained together, and so cannot be captured in a single alias. Not a problem. Such commands can be defined inside a function, and the name of the function then becomes the means by which to invoke them all at once. Shell functions are easy to write, even for people without much programming experience.
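As a sketch, here are two small functions of that kind, written for bash or any POSIX shell:

```shell
# mkcd: create a directory (parents included) and change into it, in one command
mkcd() {
    mkdir -p "$1" && cd "$1"
}

# bak: keep a timestamped backup copy of a file before you edit it
bak() {
    cp "$1" "$1.$(date +%Y%m%d%H%M%S).bak"
}

# Once defined (e.g. in ~/.bashrc), they are invoked like ordinary commands:
mkcd "$(mktemp -d)/project"
```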

6. Write shell scripts

Tasks requiring more complex logic are best implemented in shell scripts. Shell scripts are fully-fledged computer programs designed to run in the shell. They can contain control structures and break down big tasks into functions that call each other to implement very complex logic. At the very basic level however, shell scripting is just yet another way of assembling simple programs in interesting ways to solve a big problem.
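For illustration only, here is a complete toy script showing arguments, a default value, input validation and a loop; save it as countdown.sh, make it executable with chmod +x countdown.sh and run it:

```shell
#!/bin/sh
# countdown.sh - count down from N (default 5), then announce liftoff.
# Demonstrates arguments, default values, input validation, a loop and exit codes.

n="${1:-5}"    # first command-line argument, or 5 if none was given

# Reject anything that is not a positive integer
case "$n" in
    ''|*[!0-9]*|0)
        echo "usage: $0 [positive integer]" >&2
        exit 1
        ;;
esac

i="$n"
while [ "$i" -ge 1 ]; do
    echo "$i"
    i=$((i - 1))
done
echo "Liftoff!"
```

Running ./countdown.sh 3 prints 3, 2 and 1 followed by Liftoff!, while a non-numeric argument prints the usage line and exits with status 1.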

There you have it. Go on and have fun with the CLI. As a bonus, here’s a link to my favorite website for learning the Linux shell.


Why Should I Care? – Making Education Count


This isn’t exactly about software development. It’s about learning in general.

Do you ever get the sense that an awful lot of what they teach you in school doesn’t really make sense? Like you don’t see the point of it?

Who knows why they learnt calculus? Or types of rocks? A paltry few, I bet you. And yet there surely must be rhyme and reason to these things. Only we’ve been taught to never wonder what it might be.


Take pi, for instance. 22 over 7, they told us. Approximately 3.142. Ahem!

So what? And how, even? Who says the teacher didn’t just pull that number out of thin air? And why should we care, anyway?

Ah, I see now, π × d. The circumference of a circle. That’s useful. Quite useful.

But wait a minute, it still doesn’t say what pi really is or where pi comes from. Pi, you can’t be that mysterious now, can you?

Or maybe we don’t really need to know. Doesn’t sound right.

For one thing we might forget the value of pi when we most need it. But couldn’t we always look it up in the textbook? Or ask Uncle Google? He knows everything, after all, doesn’t he?

No reason to worry.


Unless of course there is an earthquake and all the textbooks get buried under the rubble, along with your computer and your math teacher, God forbid – and you survive, somehow. But maybe then π, or the circumference of anything, will be the least of your worries.

Strong case for never needing to ruminate too much about this pi guy, huh?


Which is tragic, because pi is such a profound idea. Really, it is. Think about the guy who might have discovered pi. In those days they would probably place a string around a circle to work out its circumference. And he placed his string on one circle – and measured. Then on another – and measured. Then on yet another – and measured.

A pattern. He saw a pattern. A relationship.

For every circle of circumference c and diameter d, there existed an interesting relationship between the two. Aha, if you divided c by d for any circle, you got the same magic number! 22/7. 3.142. Pi.

Who needs the string anymore?


But it’s more profound than that. Think about the many lessons this teaches. It demonstrates how you could generalize through experimentation. It also introduces the idea of ratios between quantities and why they are important. Not to mention that it virtually eliminates any need to ever memorize the value of pi. All these skills are very transferable to other problem domains.

Yet your teacher likely didn’t teach you any of this, or even encourage you to wonder why it is so. And there are many more concepts we fail to appreciate fully.

Shallow. Hollow. Regurgitation of knowledge. Chewing cud. Sheepish, even.

Any wonder then that school ends up creating such poor problem solvers? So much content, yet not the foggiest inkling how it fits in the grand scheme of things. Not the flimsiest idea why we should care.

This question – why should I care? – should, I think, help us all become better teachers and even better students of everything, software programming included. Next time your lecturer introduces a new course, ask him what really is the point of it and why you should care at all. Maybe you shouldn’t. Just saying.

5 Functions of Programming Jokes


There are 10 kinds of people in the world: those who understand binary and those who don’t. This is one of my favorite programming jokes. I’ve always enjoyed good jokes. You probably have, too. But did you know that besides tickling your funny bone, programming jokes can also provide a lot of practical value? In this post, I will share with you 5 uses I have found for programming humor.

1. Pique your interest

Every once in a while you come across an incomprehensible programming joke. Sometimes it’s about something you are familiar with, sometimes it is about something you’ve heard about but have no experience with. Sometimes, it is about something completely new. Regardless, nobody likes to miss a joke. So you’ll normally fire up your search engine in your quest to get it. And that leads to one article, then another, then another. In the end, you are so much the wiser not only about the joke but also about its subject in general. If it fascinates you enough, you might make a note to read some more later on – maybe even code something up as a result!

2. Evaluate your skill level

This might sound far-fetched, but the proportion of programming jokes that you actually understand does say something about your skill and experience as a programmer. The more jokes you don’t get, the less skilled and experienced you likely are. And that’s okay, but it should also serve as a challenge that there’s so much more out there that you need to learn and experience. If you find that you understand nearly every joke you are reading, broaden your humor catchment. Maybe you are just looking at jokes from the one technical area that you are really good at. How about you see what jokes they have for Unix, if you are a Windows developer, for example?

3. Learn a new technology

Maybe you want to start learning C, or Java. And yes of course, you want to start by looking at how they write their classic “Hello world” program – it’s often an excellent way to get a “feel” of any new programming language. But jokes are a great way too. If you already have a strong programming background, a lot of the language-specific jokes will easily make sense to you. They’ll also tend to humorously highlight the nuances and idiosyncrasies of the language – which is just what you need to know before latching onto a new technology.

4. Discover new tricks

Many jokes derive from some of the more obscure elements of a programming language, style or paradigm. So even though they might relate to a technology you are familiar with, they typically raise interesting perspectives from which you may examine the same things. This effectively leads to new knowledge and renewed interest in the possibilities of the subject at hand. “You mean you can do that?”

5. Unwind

Every programmer will tell you that they consider their career a highly rewarding labor of love. But even passion and enthusiasm run out eventually, or fatigue just gets the better of you. Which is just as well, because we need an excuse for that late afternoon beer. Sometimes, though, it’s 10 in the morning and you can’t seem to get your head to think. A trip to the bar at that hour has the very real potential to turn the day into your last one at your company. Yet you need an instant escape route from the drudgery of algorithms, data structures and overly complicated programmer tools. No problem. Just head over to your favorite programming jokes website and laugh your sorrows away.

What other benefits do you derive from programming humor?

PS: This is probably true of all kinds of “professional humor”. As a bonus, here’s a link to some of my favorite programming jokes on the internet.

Naming Surrogate Primary Keys


It’s one of those questions of style that aren’t always possible to be objective about. Often they come down to personal preference. But we still like to think we have decent enough reasons for our choices. So we try to justify them, often to ourselves but occasionally to everyone else. Sometimes we just darn call out anyone who disagrees and tell them we think they are foolish and uneducated. And a flame war ensues. I am not trying to start one now.

Primary key politics

I’ve designed a good number of databases in my time and, needless to say, confronted some of the more divisive primary key politics on many occasions. Surrogate or natural? Integer or UUID? Lookup table or simple column? Composites are the work of the devil…

But today I want to talk about something more benign: how to name a primary key. And I have to tell you from the outset that I have mostly been firmly planted in the surrogate camp from the day I designed my first production database. We have to get that out of the way, otherwise nothing else in this post will make sense.

I could tell you a lot about why I love surrogates, but that’s not the goal of my post today. I have to tell you though, that I like how they jump out and grab your attention the moment you open a table. Because often they have beautiful, predictable and intuitive names: id, item_id, uuid, item_uuid… Placed at the beginning of a table, columns like these are like a skimpily dressed, smartphone-wagging young woman sashaying into a restaurant. Vain she may be, but she is impossible to miss. Natural keys, on the other hand, are not unlike the nondescript damsel curled up in a corner, all dignified and covered up and reading a copy of the Business Daily. Sure she may be full of substance, inner beauty and all that is good and holy, but it takes a good amount of prowling to know that she’s even there.

To prefix or not to prefix

But if surrogate key names are so beautiful, predictable and intuitive, what is this post for? Well, it turns out that the choice between item.id and item.item_id isn’t immune to the emotional flare-ups and touchiness of software developers and data modelling specialists. I have personally always preferred that the primary key be a single short name such as id rather than the uselessly repetitive and verbose table name plus short name variety of the item.item_id kind. However, I do use this latter notation for foreign keys, where I think it makes sense since the referenced table is not otherwise obvious.

In my experience, those who favor the shorter primary key approach are the minority. To be sure, proponents of the longer primary key nomenclature generally apply it also for their foreign keys, creating a welcome point of convergence for both camps. They argue that they end up with the same corresponding primary and foreign key names, which makes join queries much more pleasant to read and write. But I say what, pray tell, do you gain by writing item.item_id = stock.item_id that you don’t by writing item.id = stock.item_id?

Some will say that you don’t always include the table name when mentioning a column in an SQL query. I say that is bad practice and you really should be including the name of the table. For really huge queries, the last thing a reader wants is to have to sift through the inconsistency of some column names being table-prefixed while other column names (presumably the unambiguous ones) are left un-prefixed. Many times the reader of your queries will scarcely be familiar with your database and will appreciate the ability to instantly recognize a column as belonging to a particular table.
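To see the two styles side by side, here is a hypothetical sketch using the sqlite3 command-line shell (assuming you have it installed; the item and stock tables are invented for illustration):

```shell
# Invented item/stock schema in the short-PK style: item.id referenced by stock.item_id
sqlite3 :memory: <<'SQL'
CREATE TABLE item  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE stock (id INTEGER PRIMARY KEY,
                    item_id INTEGER REFERENCES item(id),
                    qty INTEGER);
INSERT INTO item  VALUES (1, 'widget');
INSERT INTO stock VALUES (1, 1, 42);

-- Every column table-qualified, as argued above:
SELECT item.name, stock.qty
FROM item JOIN stock ON item.id = stock.item_id;
SQL
```

Either naming style joins just as well, of course; the point is that once every column is table-qualified, item.id = stock.item_id reads no less clearly than item.item_id = stock.item_id.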

How do you name your (surrogate) primary keys?


Becoming a Software Developer


Normally when people read or hear a fascinating story about anything, they tend to imagine themselves as an actor in the unfolding drama. That’s how it was for me when I first read Ben Carson‘s Think Big a few weeks before I joined high school. When our physics teacher came around for our first lesson and asked us what we would like to be when we grew up, I shot up immediately and proclaimed, rather importantly, that I wanted to be a neurosurgeon. Needless to say, I had the limitless pleasure of explaining to my dismayed classmates that the job of a neurosurgeon is to open up people’s heads and fix any problems he might find.

Even today, whenever I read a great book, watch an intriguing documentary or just listen to someone who really loves their job, I can’t help visualizing myself in a lead role in their story. Which really is the aim of any great story teller – to paint a picture so vivid that his listener sees himself in it. For most people (and most stories), this fantasy wears out soon after the end of the tale. When you read a president’s autobiography for instance, you imagine how it must be to be president. But however much you enjoy the book, you typically don’t embark on preparing for the next general election on the basis of that experience alone. When you listen to the operating theater intrigues of your surgeon friend, you don’t send your application for admission into med school the next day.

Lingering fantasy

It appears though that there are certain kinds of stories that invite a fantasy that lingers far longer than usual – or at least that’s how it seems for some people. I guess it affects activities that are viewed more as hobbies rather than real professional undertakings. Things like photography, animation, rally driving and… wait for it… software development. Somehow these things attract the notion that it really can’t be that hard. And understandably so. It is certainly easier to take a picture, animate a stick figure, attempt a heel-and-toe or write a “Hello world” program than it is to perform a circumcision or run a small country. The sheer accessibility of these vocations makes them seem deceptively straightforward compared to more dangerous or people-driven initiatives. The problem, I think, is a lack of appreciation of the wide chasm between hobbyist and professional pursuits of these careers.

I have met a good number of people with a small amount of training in computer programming – but often little or no apparent interest – who seem to think they can just cross over from whatever they are doing and make a career out of writing software. Such people will usually have written their last line of code a few years before, have zero or very rudimentary projects under their name and fond memories of how nice it was to never have to remember semicolons in Visual Basic 6.0. They will usually have struggled with some other job(s) for a while and then, by various means, come under the impression that software development is a very lucrative career. At which point a light bulb comes on in their head and they instantly remember their computer science roots and voila! Their fantasy journey begins.

Self taught developer

As a largely self-taught developer who does not trace his roots to an IT or CS degree, I will be the last one to say that software development is rocket science. Far from it. Anybody with the passion and a decent level of analytical skills will get by just fine. I am also not saying it is not a good hobby to pick up. I would recommend it to anyone who is genuinely interested. What I am saying is you can’t just come from being a credit control officer and hope to polish up your first year or diploma C lessons before applying for a software development job. It doesn’t work like that.

Professional software development is a lot more than memorizing compile commands and putting your curly braces in the right places. It is more than the number of programming languages with which you are familiar. It is about identifying people’s problems and solving them. It is about learning to talk to users who have no clue what they want but are especially adept at detecting what they don’t want the very moment they see it. It is about project management. It is about people and relationships management. It is about inventing tomorrow today. And you don’t get very good at these things if you don’t enjoy doing them. At least a good number of them anyway. Because writing software is as much a science as it is an art. And you know what Doug Coupland said about creativity? It is the only other thing, besides competence and sexual arousal, that you can never fake.

Best wishes

So go on and dream about becoming a professional software developer. And you have my best wishes. Just understand that it’s going to take passion, time and hard work. And because the industry evolves so much so fast, you will need the willingness and tenacity to frequently update your skills. One more thing: in this industry, evidence of projects you have done carries far more weight than the most outstanding school grades you can muster. Unless your interviewer is not too smart – which, I must tell you, is not entirely inconceivable.

My Rugby Experience: A Lesson for Programmers


Where I went to high school, rugby was a big thing. You looked tough and impressed the girls if you played the game. You didn’t need to be exceptional. Showing up at the dining hall in soiled shorts, grazed elbows and a limp – which didn’t even need to be genuine – often did the trick. In those pubescent days, you didn’t aim for a kiss. That was far too high. Even a hug was considered a bonus. Just a gaze that lingered a second longer than usual was good enough. If you were lucky, they poked each other’s ribs, pointed at you and whispered – or giggled. And you felt like a warrior.

There were many of us like that. We played the game for the glory. We never saw it for what it was, as much a game of wit and strategy as it was of brawn. We saw it as a contest in brute force. We hit hard even when there were less violent and more productive alternatives. We delighted in our injuries and proudly wore them as emblems of valor. When we didn’t get hurt, we faked it.

Then there were those who knew what they were doing. They played because they honestly enjoyed doing so. And they knew the real goal. To win. Like us, they would get hurt every once in a while. That was impossible to avoid. But unlike us, they never celebrated injury or took any pride in the acquisition of it. They did everything they could to avoid it, and certainly never faked it. They knew that just a point more than the opposing team was enough reward. It brought the glory and the recognition. You didn’t need a broken nose or a sprained ankle. In fact, those were liabilities that got in the way of winning.

It is many years now since I last feigned a limp. Yet in my programming career, I have witnessed something curiously similar to what happened on my high school rugby field. There are people who write code for the glory and those who write it out of enjoyment. Like in rugby, the glory-seekers are inept and uninterested in the inner workings of their art. They take pride in sleepless nights spent chasing after shallow bugs that could have been avoided in the first place. They delight in the unnecessary complexity of their classes and methods. When they pick up a design pattern, they use it to the point of abuse, often employing it to complicate what could otherwise be achieved by clearer means. They are happy when they explain a piece of code to you and you can’t get it. It gives them a high. For them, complication equals genius.

And then there are those who write code for the love of it. They are smart and solution oriented. Often they have so much useful work to do they could scarcely afford a minute on a needless bug. They aim to get it right the first time. From seemingly mundane practices like variable naming and code formatting to lofty ideals like unit testing and defensive programming, they are meticulous to a fault. They take pride not in the obfuscation of code but in the elegance of it. When they do obfuscate their code, it is on purpose, and they are jolly good at it. They invest time and effort in studying their art. They strive to understand every rule or suggestion of good practice. Because then that education becomes a permanent source of illumination rather than a temporary euphoria of illusory competence. Above all, they keep their eyes on the goal – software that makes its users happy, is maintainable and scales well. That is how they judge their success.

Which type of programmer are you?

Migrating from Windows to Linux: No Regrets


For about two years I had both Linux and Windows installed on my laptop. But having cut my teeth as a user and later as a developer in a Windows environment, it is not surprising that I booted the Linux installation on only a few occasions – mostly when I was bored – and never really did anything useful with it. While I appreciated what looked like a decent desktop environment, I still preferred the familiar way of getting things done on Windows and the fact that I rarely ever needed to open the command prompt. In addition to that, about half of all the programming I did then was based on Microsoft’s tools, so there wasn’t much professional motivation to pay attention to Linux.

Then I changed jobs and joined an organization where much of the development work going on was in Java, and neither of my two new teammates was writing it on a Windows platform. One guy was on Ubuntu Linux and the other on Mac OS. More importantly, all software releases by the organization were made as VirtualBox appliances running Ubuntu. I started to worry that I would now not even be able to provide proper system support to users of our own software because it ran on an OS I was scarcely familiar with.

Making the Move

So I started thinking seriously about completely migrating to Linux. Fortunately, I got a new laptop around the same time and it shipped with Windows 8. My cumulative experience with Windows 8 to date is about 10 minutes. The 10 minutes between when I powered up the new laptop and when I decided the new desktop looked so different from Windows 7 that it felt as if I was learning to use a computer all over again. Well, almost. That was my watershed. If I was going to be spending time figuring out how to use a different environment, that environment might as well be Ubuntu’s Unity and of course, Linux in general. At that very moment, I downloaded Ubuntu 13.04 which, incidentally, was only a few hours old that day, and set it up as the only OS on my laptop. Phew! Besides having to disable the UEFI and tinkering a bit with ALSA to get sound coming out of the speakers, the installation was pretty straightforward and in about an hour, I had a working Linux system running on 4 cores and all of 16GB of RAM. How cool!

Still, there was the small matter of my unfamiliarity with the new OS. However, I felt that not having a familiar Windows OS to fall back to when things got tough was just the motivation I needed to get going. That’s how it turned out. After a couple of weeks of mucking about with terminal commands, a strange directory structure and a few other unfamiliar nuances of Linux, I felt even more competent and in control using it than I ever felt using Windows. I guess the pervasive sense of unfamiliarity and incompetence got me to deliberately study things in Linux that I only came to know by chance on Windows.

Exploring the Freedom

Within a few months of using Linux regularly and also reading quite a bit of literature online, I started to marvel at the diversity of ways in which the OS was served up and in which you could consume it. For me, my installation of Ubuntu had been a lot like how one joins a religion early on in their lives. You get born into a Christian family and you become Christian. Then later on you grow up and form opinions of your own and you start to realize there are alternatives out there. And you start to become curious about them and to learn about them. Maybe you gather enough conviction and jump ship. Maybe you just change how you consume your original religion. Maybe you decide you don’t want to belong to any mainstream religion and you develop your own belief system… that would be Linux from Scratch 🙂

It turned out that the first casualty of my new-found freedom was the Unity desktop. KDE’s Plasma Desktop impressed me greatly the first time I tried it. Maybe it is the resemblance to Windows 7; never underestimate the power of familiarity. But more importantly, I felt a certain dislike for Unity. For instance, I hated the way menu bars docked at the top, especially when I didn’t have my windows maximized. I also found that the KDE desktop appeared sleeker, more elegant and more modern. With loads of RAM to spare, I didn’t worry about the fancy graphics impacting performance.

Lately I have been looking at the many distros available. I have a couple of them running on VMs on my system. I like the touted stability of Debian and the academic promise of Arch. Further down the road, as my competence increases, I want to try Linux from Scratch. Linux from Scratch is not really a distro. It is a book about how to make your own distro – just the way you want it. After all, it’s this kind of freedom that makes Linux and open source in general such a powerful and liberating idea.

As for now, I still pledge allegiance to my trusty Ubuntu installation… Kubuntu really, since I installed KDE. But I don’t think it will be like that for a long time. In a few weeks, it’s probably going to be Debian or OpenSUSE. Maybe it will be Debian then OpenSUSE. Then maybe Arch, and ultimately Linux from Scratch – the Mecca of the Linux enthusiast. But one thing is for sure, I will be using Linux for a long time to come.