Text processing is at the heart of Unix. From pipes to the /proc filesystem, the "everything is a file" philosophy pervades the operating system and all of the tools built for it. Because of this, getting comfortable with text processing is one of the most important skills for an aspiring Linux system administrator, or indeed any power user, and awk is one of the most powerful text-processing tools available outside general-purpose programming languages.
The simplest awk task is selecting fields from stdin; if you never learn any more about awk than this, you'll still have at your disposal an extremely useful tool.
By default, awk splits each input line into fields on whitespace. If you'd like to select the first field from input, you just need to tell awk to print $1:
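A minimal sketch (the input line `one two three` here is just an illustrative sample):

```shell
# Print the first whitespace-separated field of each input line
echo 'one two three' | awk '{print $1}'
# → one
```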
(Yes, the curly-brace syntax is a little weird, but I promise that's about as weird as it gets in this lesson.)
Can you guess how you'd select the second, third, or fourth fields? That's right, with $2, $3, and $4, respectively.
Often when text munging, you need to create a specific format of data, and that covers more than just a single word. The good news is that awk makes it easy to print multiple fields, or even include static strings:
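For example, sticking with the sample input line `one two three`, you can mix fields with literal strings:

```shell
# Print the third and first fields, wrapped in static labels
echo 'one two three' | awk '{print "foo: " $3 " | bar: " $1}'
```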
foo: three | bar: one
Ok, but what if your input isn't separated by whitespace? Just pass awk the '-F' flag with your separator:
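A sketch using comma-separated input:

```shell
# -F sets the field separator; here fields are split on commas
echo 'one,two,three' | awk -F',' '{print $2}'
# → two
```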
Occasionally, you may find yourself working with data with a varying number of fields, and you just know you want the *last* one. awk prepopulates the NF variable with the *number of fields* on each line, so you can use $NF to grab the last field:
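For instance:

```shell
# NF holds the field count, so $NF is always the last field
echo 'one two three four' | awk '{print $NF}'
# → four
```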
You can also do simple math on NF, in case you need the next-to-last field:
Or even the middle field:
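A sketch, assuming a line with an odd number of fields (integer truncation picks the lower middle field otherwise):

```shell
# (NF+1)/2 lands on the middle field when the field count is odd
echo 'one two three four five' | awk '{print $((NF+1)/2)}'
# → three
```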
While this is all very useful, you could also coax sed, cut, and grep into producing these results (albeit with a lot more work).
So, I'll leave you with one last introductory feature of awk: maintaining state across lines.
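A minimal sketch, accumulating a counter over every input line and printing it at the end:

```shell
# Increment a variable on every line, then report it once input is exhausted
printf 'one\ntwo\nthree\n' | awk '{count += 1} END {print count}'
# → 3
```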
(The END indicates that we should only perform the following block **after** we finish processing every line.)
The case where I've used this is to sum up bytes from web server request logs. Imagine we have an access log that looks like this:
Jul 23 18:57:12 httpd: "GET /foo/bar HTTP/1.1" 200 344
Jul 23 18:57:13 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:27 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:55 httpd: "GET /foo/baz HTTP/1.1" 200 6401
Jul 23 19:02:31 httpd: "GET /foo/baz?page=2 HTTP/1.1" 200 6312
We know the last field is the number of bytes of the response. We've already learned how to extract them using print and $NF:
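A sketch, feeding the sample log to awk via a here-document (in practice you would pass the log's filename instead):

```shell
# Print the last field (the response size in bytes) of each log line
awk '{print $NF}' <<'EOF'
Jul 23 18:57:12 httpd: "GET /foo/bar HTTP/1.1" 200 344
Jul 23 18:57:13 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:27 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:55 httpd: "GET /foo/baz HTTP/1.1" 200 6401
Jul 23 19:02:31 httpd: "GET /foo/baz?page=2 HTTP/1.1" 200 6312
EOF
```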
344
9300
9300
6401
6312
And so we can sum into a variable to gather the total number of bytes our webserver has served to clients during the timespan of our log:
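A sketch, again using a here-document in place of a log file:

```shell
# Add each line's last field to a running total, then print it after the last line
awk '{total += $NF} END {print total}' <<'EOF'
Jul 23 18:57:12 httpd: "GET /foo/bar HTTP/1.1" 200 344
Jul 23 18:57:13 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:27 httpd: "GET / HTTP/1.1" 200 9300
Jul 23 19:01:55 httpd: "GET /foo/baz HTTP/1.1" 200 6401
Jul 23 19:02:31 httpd: "GET /foo/baz?page=2 HTTP/1.1" 200 6312
EOF
# → 31657
```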