Linux Network Configuration: A Decade Later
In 2014 I wrote about the state of Linux network configuration, lamenting the proliferation of netlink libraries and how most projects hadn’t progressed past shell scripting and iproute2. I concluded that “there is a need for a good netlink library for one of the popular scripting languages.”
A decade later, that library exists. More importantly, the ecosystem has matured enough that every major language has a credible netlink option - and production systems are using these libraries.
To compare them, I’ll use the same example throughout: create a bridge, a network namespace, and a veth pair connecting them.
Bash: iproute2
First, the baseline. This is what most scripts still do - shell out to ip:
#!/bin/bash
# Create a bridge
ip link add br0 type bridge
ip link set br0 up
# Create a veth pair
ip link add veth-host type veth peer name veth-ns
# Create namespace and move one end into it
ip netns add test-ns
ip link set veth-ns netns test-ns
# Add host veth to bridge and configure
ip link set veth-host master br0
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
# Configure the namespace side
ip -n test-ns link set lo up
ip -n test-ns addr add 10.0.0.2/24 dev veth-ns
ip -n test-ns link set veth-ns up
# List links and addresses in namespace
ip -n test-ns -br -4 addr show
This is concise and readable - arguably more so than any of the library examples below. For one-off scripts or manual configuration, it’s hard to beat.
The problems emerge when you need to go from a declarative configuration file to running state. What if br0 already exists? What if it exists but with different settings? You need to diff the current state against desired state, handle partial failures, and roll back cleanly. That means querying current config, parsing ip output (or using -j for JSON and parsing that), and writing extensive error handling - all in bash. Projects that start this way eventually accumulate thousands of lines of shell and wish they hadn’t.
The netlink libraries give you programmatic access to query and modify state, proper error handling, and the ability to build that declarative-to-imperative layer in a real programming language.
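Here's a minimal sketch of what that check-then-create pattern looks like with pyroute2 (introduced in the next section). It's illustrative only - not taken from any real project - but it shows the shape: query live state, create only what's missing, and treat a racing "already exists" as success.
import errno
from pyroute2 import IPRoute, NetlinkError

with IPRoute() as ipr:
    # Query the live state directly - no `ip` output to parse
    if not ipr.link_lookup(ifname='br0'):
        try:
            ipr.link('add', ifname='br0', kind='bridge')
        except NetlinkError as e:
            # e.code carries the kernel errno; EEXIST means another
            # process created br0 between our check and the add
            if e.code != errno.EEXIST:
                raise
The same query-first approach extends to addresses and routes, which is what the declarative-to-imperative layer ends up being built from.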
Python: pyroute2
pyroute2 is what I was looking for in 2014. Pure Python, no subprocess calls, no text parsing:
from pyroute2 import IPRoute, NetNS
with IPRoute() as ipr:
    # Create a bridge
    ipr.link('add', ifname='br0', kind='bridge')
    br_idx = ipr.link_lookup(ifname='br0')[0]
    ipr.link('set', index=br_idx, state='up')
    # Create a veth pair
    ipr.link('add', ifname='veth-host', kind='veth', peer='veth-ns')
    veth_host_idx = ipr.link_lookup(ifname='veth-host')[0]
    veth_ns_idx = ipr.link_lookup(ifname='veth-ns')[0]
    # Create namespace and move one end into it
    netns = NetNS('test-ns')
    ipr.link('set', index=veth_ns_idx, net_ns_fd=netns.netns)
    # Add host veth to bridge and configure
    ipr.link('set', index=veth_host_idx, master=br_idx)
    ipr.addr('add', index=veth_host_idx, address='10.0.0.1', prefixlen=24)
    ipr.link('set', index=veth_host_idx, state='up')
# Configure the namespace side
with NetNS('test-ns') as ns:
    ns.link('set', index=1, state='up')  # lo
    veth_idx = ns.link_lookup(ifname='veth-ns')[0]
    ns.addr('add', index=veth_idx, address='10.0.0.2', prefixlen=24)
    ns.link('set', index=veth_idx, state='up')
    # List links and addresses
    for link in ns.get_links():
        name = link.get_attr('IFLA_IFNAME')
        addrs = ns.get_addr(index=link['index'])
        for addr in addrs:
            print(f"{name}: {addr.get_attr('IFA_ADDRESS')}/{addr['prefixlen']}")
Who uses it: OpenStack Neutron uses pyroute2 for its L2 and L3 agents. When you configure networking for OpenStack VMs, pyroute2 is doing the work underneath.
Go: vishvananda/netlink
vishvananda/netlink originated from Docker’s libcontainer and has become the de facto standard. Over 11,000 GitHub projects depend on it.
package main
import (
"fmt"
"github.com/vishvananda/netlink"
"github.com/vishvananda/netns"
"runtime"
)
func main() {
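	// Error returns are ignored below to keep the example short;
	// real code should check every one of them.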
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	// Save original namespace
	origns, _ := netns.Get()
	defer origns.Close()
	defer netns.Set(origns)
	// Create a bridge
	bridge := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "br0"}}
	netlink.LinkAdd(bridge)
	netlink.LinkSetUp(bridge)
	// Create a veth pair
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "veth-host"},
		PeerName:  "veth-ns",
	}
	netlink.LinkAdd(veth)
	// Get veth-ns handle before creating namespace
	vethNs, _ := netlink.LinkByName("veth-ns")
	// Create namespace (this switches into it)
	newns, _ := netns.NewNamed("test-ns")
	defer newns.Close()
	// Return to host namespace for configuration
	netns.Set(origns)
	// Move veth-ns into the new namespace
	netlink.LinkSetNsFd(vethNs, int(newns))
	// Add veth-host to bridge and configure
	vethHost, _ := netlink.LinkByName("veth-host")
	netlink.LinkSetMaster(vethHost, bridge)
	addr1, _ := netlink.ParseAddr("10.0.0.1/24")
	netlink.AddrAdd(vethHost, addr1)
	netlink.LinkSetUp(vethHost)
	// Configure inside namespace
	netns.Set(newns)
	lo, _ := netlink.LinkByName("lo")
	netlink.LinkSetUp(lo)
	vethInNs, _ := netlink.LinkByName("veth-ns")
	addr2, _ := netlink.ParseAddr("10.0.0.2/24")
	netlink.AddrAdd(vethInNs, addr2)
	netlink.LinkSetUp(vethInNs)
	// List links and addresses
	links, _ := netlink.LinkList()
	for _, link := range links {
		addrs, _ := netlink.AddrList(link, netlink.FAMILY_V4)
		for _, addr := range addrs {
			fmt.Printf("%s: %s\n", link.Attrs().Name, addr.IPNet)
		}
	}
}
Who uses it: The Kubernetes networking ecosystem. Flannel, Multus CNI, and the AWS VPC CNI all use it. If you’re running containers on Kubernetes, this library is probably configuring your pod networking.
Rust: rtnetlink
rtnetlink is async-native and has matured significantly. Namespace handling requires the nix crate for setns:
use std::os::fd::{AsFd, AsRawFd};
use futures::stream::TryStreamExt;
use ipnetwork::IpNetwork;
use nix::sched::{setns, CloneFlags};
use rtnetlink::{new_connection, Handle, LinkBridge, LinkUnspec, LinkVeth, NetworkNamespace};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (connection, handle, _) = new_connection()?;
    tokio::spawn(connection);
    // Create a bridge
    handle.link().add(LinkBridge::new("br0").build()).execute().await?;
    let br_idx = get_link_index(&handle, "br0").await?;
    handle.link().set(LinkUnspec::new_with_index(br_idx).up().build()).execute().await?;
    // Create a veth pair
    handle.link().add(LinkVeth::new("veth-host", "veth-ns").build()).execute().await?;
    let veth_host_idx = get_link_index(&handle, "veth-host").await?;
    let veth_ns_idx = get_link_index(&handle, "veth-ns").await?;
    // Create namespace and move veth-ns into it
    NetworkNamespace::add("test-ns".to_string()).await?;
    let ns_fd = std::fs::File::open("/var/run/netns/test-ns")?;
    handle.link().set(
        LinkUnspec::new_with_index(veth_ns_idx)
            .setns_by_fd(ns_fd.as_raw_fd())
            .build()
    ).execute().await?;
    // Add veth-host to bridge and configure
    handle.link().set(LinkUnspec::new_with_index(veth_host_idx).controller(br_idx).build()).execute().await?;
    let addr: IpNetwork = "10.0.0.1/24".parse()?;
    handle.address().add(veth_host_idx, addr.ip(), addr.prefix()).execute().await?;
    handle.link().set(LinkUnspec::new_with_index(veth_host_idx).up().build()).execute().await?;
    // Configure inside namespace
    setns(ns_fd.as_fd(), CloneFlags::CLONE_NEWNET)?;
    let (ns_conn, ns_handle, _) = new_connection()?;
    tokio::spawn(ns_conn);
    let lo_idx = get_link_index(&ns_handle, "lo").await?;
    ns_handle.link().set(LinkUnspec::new_with_index(lo_idx).up().build()).execute().await?;
    let veth_idx = get_link_index(&ns_handle, "veth-ns").await?;
    let addr2: IpNetwork = "10.0.0.2/24".parse()?;
    ns_handle.address().add(veth_idx, addr2.ip(), addr2.prefix()).execute().await?;
    ns_handle.link().set(LinkUnspec::new_with_index(veth_idx).up().build()).execute().await?;
    // List links and addresses in namespace
    let mut links = ns_handle.link().get().execute();
    while let Some(link) = links.try_next().await? {
        let name = link.attributes.iter()
            .find_map(|a| match a {
                netlink_packet_route::link::LinkAttribute::IfName(n) => Some(n.clone()),
                _ => None,
            })
            .unwrap_or_default();
        let mut addrs = ns_handle.address().get()
            .set_link_index_filter(link.header.index).execute();
        while let Some(addr) = addrs.try_next().await? {
            if let Some(a) = addr.attributes.iter().find_map(|attr| match attr {
                netlink_packet_route::address::AddressAttribute::Address(ip) => Some(ip),
                _ => None,
            }) {
                println!("{}: {}/{}", name, a, addr.header.prefix_len);
            }
        }
    }
    Ok(())
}
async fn get_link_index(handle: &Handle, name: &str) -> Result<u32, Box<dyn std::error::Error>> {
    let mut links = handle.link().get().match_name(name.to_string()).execute();
    if let Some(link) = links.try_next().await? {
        Ok(link.header.index)
    } else {
        Err(format!("link {} not found", name).into())
    }
}
The rust-netlink ecosystem includes crates for ethtool, MPTCP, and nl80211 (wireless).
Who uses it: Netavark, Podman’s network backend, is written in Rust and uses rtnetlink. When you run rootful Podman containers, Netavark creates the veth pairs and bridges.
C: The Fragmentation Problem
Back in 2014, I listed libnl, libmnl, and libnetlink. A decade later, the dust has settled - but not into a single winner. The C ecosystem remains split, and OpenWrt shows why this matters.
OpenWrt ships four different netlink implementations on devices with 16MB of flash. They’d rather not - every duplicate kilobyte hurts when you’re trying to fit a full router OS into the space of a few photographs. But upstream projects never agreed on one library, so OpenWrt inherits the fragmentation:
libmnl - The minimal choice from netfilter.org, handling just netlink message construction. Required by the nft CLI, which OpenWrt’s firewall4 uses to load rules. Also used by nftables and conntrack-tools upstream.
libnl - The comprehensive library from infradead.org with nl80211 bindings for WiFi. Required by wpa_supplicant, hostapd, and NetworkManager.
libnl-tiny - OpenWrt’s own stripped-down libnl fork to save flash space, used by their netifd and odhcpd daemons for routing and DHCPv6.
ucode rtnl - Not a C library, but worth mentioning. OpenWrt developed ucode, a JavaScript-like scripting language, for their firewall4 rewrite. Its rtnl module talks netlink directly from script, used by wifi-scripts for interface setup. It’s their attempt to get scripting-language netlink access without the weight of a full Python or Lua runtime.
Each upstream C project made a reasonable choice in isolation. The result is fragmentation that every integrator inherits.
I’ve omitted C examples here - libmnl code for this scenario runs to several hundred lines. If you’re writing C, study the libmnl examples directly.
This is why the higher-level language bindings matter. Python, Go, and Rust each have one dominant netlink library. When you build on those, you’re not inheriting decades of C ecosystem disagreements.
Summary
| Language | Library | Used By |
|---|---|---|
| Python | pyroute2 | OpenStack Neutron |
| Go | vishvananda/netlink | Flannel, Multus CNI, AWS VPC CNI |
| Rust | rtnetlink | Netavark (Podman) |
| C | libmnl | nftables, conntrack-tools |
| C | libnl | wpa_supplicant, hostapd, NetworkManager |
The choice usually follows from what you’re building. Container tooling gravitates to Go because that’s where Kubernetes lives. Embedded systems use C because they must. Automation and testing use Python. Rust is gaining ground for new systems work.
But here’s what changed: if you’re starting a new project today, there’s a clear answer for each language. No more research paralysis, no more “which of these five half-maintained libraries should I use?” Pick the one that matches your language and get to work.
Ten years ago, I wanted a good netlink library for a scripting language. That library exists now - and so do equivalents for Go and Rust, battle-tested in production systems that run a significant chunk of the internet’s container infrastructure. The “Grand Unified Netlink Userspace” I mused about never happened, but something better did: each language community converged on its own winner. The ecosystem grew up.