
ifconfig, ip, json and jq

JSON Art

Reading articles on Linux Weekly News (LWN), I usually also read the comments, as it is not uncommon for people who are actually involved in the topic under discussion to comment there. So quite often the comments contribute significantly to my understanding of the content.

In this vein, a discussion on LWN piqued my interest as it meandered into the question why so many people still cling to the ifconfig tool rather than switch to ip, the tool that has been meant to replace it for a long time now. I was surprised to learn that the former has been obsolete for over two decades. It feels to me as if my own attempts to switch to the new tool started only a few years ago, but hey, time flies. Of course such a discussion is quickly dominated by personal preferences, but one thing struck me and coincides with a lot of my recent thinking - the fact that ip can be told to produce its output in JSON format.

dzu@elementary-vm:~$ ip -4 -j address
[{
        "ifindex": 1,
        "ifname": "lo",
        "flags": ["LOOPBACK","UP","LOWER_UP"],
        "mtu": 65536,
        "qdisc": "noqueue",
        "operstate": "UNKNOWN",
        "group": "default",
        "txqlen": 1000,
        "addr_info": [{
                "family": "inet",
                "local": "127.0.0.1",
                "prefixlen": 8,
                "scope": "host",
                "label": "lo",
                "valid_life_time": 4294967295,
                "preferred_life_time": 4294967295
            }]
    },{
        "ifindex": 2,
        "ifname": "ens3",
        "flags": ["BROADCAST","MULTICAST","UP","LOWER_UP"],
        "mtu": 1500,
        "qdisc": "fq_codel",
        "operstate": "UP",
        "group": "default",
        "txqlen": 1000,
        "addr_info": [{
                "family": "inet",
                "local": "192.168.122.53",
                "prefixlen": 24,
                "broadcast": "192.168.122.255",
                "scope": "global",
                "dynamic": true,
                "noprefixroute": true,
                "label": "ens3",
                "valid_life_time": 3570,
                "preferred_life_time": 3570
            }]
    }
]
dzu@elementary-vm:~$

Over the last few years I have been looking for ways to extend the "classic" Unix pipelines with a concept for passing around and working on structured data instead of plain character streams. The classic pipeline concept works nicely in a lot of circumstances, but it also has real limits. After all, one of the original use cases of the Unix operating system was to support document generation inside AT&T. Once the data becomes more complex, however, the scripts quickly become error prone and tend to have real problems outside the "best case": some irregular data or unexpected input quickly throws off the whole processing. Robust pipelines require the capability to work on real data structures instead of second-guessing the structure with the help of field delimiters and regular expressions.

To my great surprise I learned that the Microsoft Windows ecosystem has had such a concept, in the form of "PowerShell objects", for quite some time now. Is it possible that Microsoft, with its ugly DOS shell heritage, not only caught up with but significantly improved on the beauty of the Unix concepts? I honestly admit to being jealous of having such a concept available, but to my personal relief, after trying some PowerShell coding myself it quickly became apparent that the implementation falls significantly short of what I would call elegant.

So looking at basic established concepts, it became more and more clear that JSON would be a very good candidate for the data representation. In combination with the jq tool, which I already wrote about, this makes a very powerful toolkit for the command line. However, in order to integrate the classic Unix tools, we have to solve the problem of transforming character streams into JSON on one end and of parsing JSON into e.g. shell variables on the other.

A pragmatic and easy solution for the first issue is exactly what we see here: adding a JSON output option ("-j" in the case of ip) to the tool itself. Of course this puts the implementation burden onto every utility, but it is easy to do, so let’s look at a few examples of what we can do with this combination. With ip, it becomes trivial to extract just the network interface names in a way that clearly documents what we intend to do:

dzu@elementary-vm:~$ ip -4 -j a | jq -r '.[].ifname'
lo
ens3
dzu@elementary-vm:~$

In a "classic" pipeline this would usually have been done by looking for regular expressions or specific fields, in the worst case by punching out individual character positions. The jq script in comparison clearly tells us what it tries to do.

Selecting different parts of the output and transforming the input is also easy. For example, let’s pair the interface names with the network addresses:

dzu@elementary-vm:~$ ip -4 -j a | jq '.[] | {ifname, local: .addr_info[].local}'
{
  "ifname": "lo",
  "local": "127.0.0.1"
}
{
  "ifname": "ens3",
  "local": "192.168.122.53"
}
dzu@elementary-vm:~$

The more complex jq program not only filters the input but builds up new data objects on the fly by "pulling up" data from deeper parts of the hierarchy.
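
This "pulling up" is worth a closer look: because .addr_info[] is a generator, the object construction is evaluated once per generated value. A sketch with hypothetical sample data (the second address, 10.0.0.7, is made up for illustration):

```shell
# One interface carrying two addresses: the generator .addr_info[]
# inside the object construction fans out into one output object
# per address, each repeating the same ifname.
cat <<'EOF' | jq -c '.[] | {ifname, local: .addr_info[].local}'
[{"ifname": "ens3",
  "addr_info": [{"local": "192.168.122.53"}, {"local": "10.0.0.7"}]}]
EOF
```

This prints two compact objects, one per address, both carrying "ifname": "ens3".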

Also note that jq colors the JSON output by default in a way that makes parsing much easier for me.

Interfaces can have multiple addresses, so it is interesting to collect them all. The following example filters the output for a specific network interface and reports IPv4 and IPv6 addresses assigned to it:

dzu@elementary-vm:~$ ip -j a | jq -r '.[] | select(.ifname == "ens3") | .addr_info[].local'
192.168.122.53
fe80::b068:aff3:c6f9:8b70
dzu@elementary-vm:~$

I hope this gives an idea of how powerful it is to be able to specify queries with real field names rather than with the field numbers, column positions or regular expressions of traditional Unix pipelines.
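
The same field-name-based selection works at any depth of the hierarchy. As a sketch, here we keep only the addresses whose "scope" field is "global", using abbreviated sample data modeled on the ip output above:

```shell
# Descend into every addr_info entry and keep only the addresses
# with global scope - no column counting involved.
cat <<'EOF' | jq -r '.[].addr_info[] | select(.scope == "global") | .local'
[{"ifname": "lo",
  "addr_info": [{"local": "127.0.0.1", "scope": "host"}]},
 {"ifname": "ens3",
  "addr_info": [{"local": "192.168.122.53", "scope": "global"}]}]
EOF
```

Only 192.168.122.53 survives the filter; the host-scoped loopback address is dropped.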

But as can be seen from the use of the jq option "-r" to make the results easy to consume, we do not yet have an elegant solution for processing the output further. In simple cases "-r" strips the quotes so that we can use our classic toolkit for further processing - in effect we forcefully remove the metadata again for the JSON-unaware Unix world. It would of course be much nicer if we could do a kind of "destructuring bind" from JSON to shell variables, but this is the topic of an upcoming blog post.
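
For the simple cases, a poor man's version of such a bind can already be emulated today: let jq emit tab-separated values via its @tsv filter and have the shell's read split them back into variables. A sketch, again with sample data instead of live ip output:

```shell
# Sample data as above; @tsv renders each [ifname, address] pair as
# one tab-separated line, and read splits it back into shell
# variables - a crude stand-in for a real destructuring bind.
json='[{"ifname": "lo",   "addr_info": [{"local": "127.0.0.1"}]},
       {"ifname": "ens3", "addr_info": [{"local": "192.168.122.53"}]}]'

printf '%s' "$json" |
jq -r '.[] | [.ifname, .addr_info[0].local] | @tsv' |
while IFS="$(printf '\t')" read -r ifname addr; do
    printf '%s is at %s\n' "$ifname" "$addr"
done
```

This prints "lo is at 127.0.0.1" and "ens3 is at 192.168.122.53", but it only looks at the first address of each interface and falls apart as soon as a value may itself contain a tab - hence the wish for something more principled.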

For today, let me close on a different note, with something else I picked up from that LWN discussion: ip can already produce colored output which, again, I find much easier to parse than the standard output. I think I will add an alias from ip to ip -c to my shell profiles.

dzu@elementary-vm:~$ ip -c -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.122.53/24 brd 192.168.122.255 scope global dynamic noprefixroute ens3
       valid_lft 2630sec preferred_lft 2630sec
dzu@elementary-vm:~$ 
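
The alias itself is a one-liner for the shell profile. Assuming a not-too-old iproute2, the "-c=auto" variant seems the safer choice, as it drops the color escape codes when the output is piped into tools like jq:

```shell
# ~/.bashrc (or similar) - colorize ip output; "auto" disables the
# color escape codes when stdout is not a terminal, so pipelines
# still see plain text
alias ip='ip -c=auto'
```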
