awk

"pattern-directed scanning and processing language" - man awk

Examples

Some of these require GNU awk.

awk '{print $1}' filename.txt
ps aux | awk '$1 == "root" {print $2}'

Sort a file by line lengths

awk '{print length, $0}' testfile.txt | sort -n
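
To get just the sorted lines without the leading length column, pipe through cut. A self-contained sketch with inline sample data standing in for testfile.txt:

```shell
# Prefix each line with its length, sort numerically, then strip the prefix.
printf 'ccc\na\nbb\n' |
  awk '{print length, $0}' |
  sort -n |
  cut -d' ' -f2-
# a
# bb
# ccc
```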

TSV to CSV

awk '{gsub("\t","\",\"",$0); print;}' | sed 's#^#"#;s#$#"#;'
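
The same conversion can be done in awk alone by setting the output field separator; a sketch, assuming the data contains no embedded quotes or commas:

```shell
# Tab-separated in, quoted CSV out. The $1 = $1 assignment forces awk
# to rebuild $0 with the new OFS between fields.
printf 'a\tb\tc\n' |
  awk 'BEGIN { FS = "\t"; OFS = "\",\"" } { $1 = $1; print "\"" $0 "\"" }'
# "a","b","c"
```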

% is the modulus operator, which gives the remainder after integer division.

awk 'NR % 2 == 0 { print $1 }'
ls | awk 'NR % 2 == 0 { print $0 }'
ls | awk 'NR % 2 != 0 { print $0 }'
awk '{if (NR%2==0) { print $0 " " prev } else { prev=$0 }}'
awk '{sum += $1} END {print sum}' filename
awk '{sum += $1} END {avg = sum/NR ; printf "Sum:     %s\nAverage: %s\n", sum, avg}' foo.txt
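
For example, with a small column of numbers fed in directly (sample data invented here):

```shell
# Sum a column and print the sum and average at end-of-input.
printf '1\n2\n3\n4\n' |
  awk '{sum += $1} END {printf "Sum:     %s\nAverage: %s\n", sum, sum/NR}'
# Sum:     10
# Average: 2.5
```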

Split file by recurring string

This will create a new file every time the string "SERVER" is found, essentially splitting the file by that string. Concatenating all of the output files would recreate the original file (potentially adding an extra newline).

awk '/SERVER/{n++}{print >"out" sprintf("%02d", n) ".txt" }' example.txt
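
A quick demonstration in a throwaway directory (sample input invented): lines before the first match land in out00.txt, because n is still unset there.

```shell
tmpdir=$(mktemp -d) && cd "$tmpdir"
printf 'header\nSERVER one\ndata1\nSERVER two\ndata2\n' > example.txt
# n increments on each SERVER line, so every match starts a new output file.
awk '/SERVER/{n++}{print > "out" sprintf("%02d", n) ".txt"}' example.txt
cat out01.txt
# SERVER one
# data1
```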

Show count of syslog messages per minute

awk -F: '{print $1 ":" $2}' /var/log/messages | uniq -c

Show count of root logins per minute

awk -F: '/root/{print $1 ":" $2}' /var/log/auth.log | uniq -c
ls -la | awk '$3 ~/[0-9]/{print}'

Show only zfs snapshots whose size is zero

zfs list -t snapshot | awk '$2 == 0'
tcpdump -r ops1prod-syn.cap | sort -k2 | awk '$3 !~ /ztmis.prod/ { print }'

Show 500 errors in a standard apache access log

awk '$9 ~ /5[0-9][0-9]/' www_zoosk_access.log
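
In the combined log format the status code is field 9 because the bracketed timestamp splits into two fields. A self-contained check with fabricated log lines:

```shell
# Only the line whose 9th field contains a 5xx status is printed.
printf '%s\n' \
  '1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.1" 503 1234' \
  '1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /ok HTTP/1.1" 200 56' |
  awk '$9 ~ /5[0-9][0-9]/'
# prints only the 503 line
```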

Show total rss and vsz count for all cronolog processes

ps aux |
  grep -i cronolo[g] |
  awk '{vsz += $5; rss += $6} END {print "vsz total = "vsz ; print "rss total = "rss}'

Get IPv4 address on BSD/OSX

ifconfig | awk '$1 == "inet" && $2 != "127.0.0.1" {print $2}'

Get IPv6 address on BSD/OSX

ifconfig | awk '$1 == "inet6" && $2 !~ "::1|.*lo" {print $2}'
ls -la | awk -F" " '{print $NF}'
ls -la | awk -F" " '{print $(NF - 1)}'

Print the line before a regex match

This works by storing the previous line. If the current line matches the regex, the previous line is printed from the stored value.

$ awk '/32 host/ { print previous_line } {previous_line=$0}' /proc/net/fib_trie | column -t | sort -u
|--  10.134.243.137
|--  127.0.0.1
|--  169.50.9.172
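
The previous-line technique is generic; a minimal sketch with invented input:

```shell
# Store each line in prev; when a line matches, print the stored one.
printf 'alpha\nbeta\nMATCH\ngamma\n' |
  awk '/MATCH/ {print prev} {prev=$0}'
# beta
```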

Add content to line 1 if there is no match

This adds a yaml document separator to the beginning of all yaml files in the current directory only if it does not already have one.

tempfile=$(mktemp)
for file in ./*.yaml ; do
  awk 'NR == 1 && $0 != "---" {print "---"} {print}' "${file}" > "${tempfile}" \
  && mv "${tempfile}" "${file}"
done
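
The awk one-liner at the core of that loop can be checked on its own (sample yaml fragments invented): the separator is prepended only when line 1 is not already `---`.

```shell
# Without a separator: one gets added.
printf '%s\n' 'key: value' | awk 'NR == 1 && $0 != "---" {print "---"} {print}'
# ---
# key: value

# With a separator already present: output is unchanged.
printf '%s\n' '---' 'key: value' | awk 'NR == 1 && $0 != "---" {print "---"} {print}'
# ---
# key: value
```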
Show container images used by a helm chart

This requires GNU awk for the three-argument match().

helm template . --set global.baseDomain=foo.com -f /Users/danielh/a/google-environments/prod/cloud/app/config.yaml 2>/dev/null |
awk '/image: / {match($2, /(([^"]*):[^"]*)/, a) ; printf "https://%s %s\n", a[2], a[1] ;}' |
sort -u |
column -t

A less complicated awk form of this that uses other shell commands would be

helm template . --set global.baseDomain=foo.com -f /Users/danielh/a/google-environments/prod/cloud/app/config.yaml 2>/dev/null |
grep 'image: ' |
awk '{print $2}' |
sed 's/"//g' |
sed 's/\(\(.*\):.*\)/https:\/\/\2 \1/' |
sort -u |
column -t

So it really depends on where you want to put your complications, how performant you want to be, and how readable you want it to be. These both produce identical output, but some people find it easier to read shorter commands with simpler syntaxes, which is great for maintainability when performance is not an issue.

https://quay.io/astronomer/ap-alertmanager  quay.io/astronomer/ap-alertmanager:0.23.0
https://quay.io/astronomer/ap-astro-ui      quay.io/astronomer/ap-astro-ui:0.25.4
https://quay.io/astronomer/ap-base          quay.io/astronomer/ap-base:3.14.2
https://quay.io/astronomer/ap-cli-install   quay.io/astronomer/ap-cli-install:0.25.2
...snip...

See Also