Developing for Snappy Ubuntu from any distro using LXC

Introduction

Lately I have been working a lot with Snappy Ubuntu Core. It’s a very nice, minimal, transactionally updated Ubuntu for IoT devices. Being an ArchLinux user, I got annoyed by the fact that I needed an Ubuntu host for development, so I started out developing from an Ubuntu VM running inside my ArchLinux host.

As many of you probably already know, working with virtual machines is slow and cumbersome, not least because you have to recreate your host’s development environment inside the VM.

A much more attractive alternative is an LXC container, which is what this post will address. Our host system will be ArchLinux and we will set up an Ubuntu 15.10 LXC container to develop for Snappy Ubuntu Core in. LXC creation for other Linux distros should be similar, so the post can definitely be followed up to a point. Note that this guide is also largely based on the LXC article in the ArchLinux wiki.

Setting up the host

Getting required packages

We will need lxc, arch-install-scripts and bridge-utils.

user@host:$ sudo pacman -S lxc arch-install-scripts bridge-utils

To ensure that your kernel is appropriately configured for LXC, run:

user@host:$ lxc-checkconfig

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

If you see output like the above then your LXC setup should be okay. Notice that User namespace is missing. That should only be the case on ArchLinux, where user namespaces are not supported at the time of this post’s writing. It means we will always have to run the lxc commands with root privileges.

Create the Container

In order to create our container we base it off the download template, which gives us the option to choose from different OS/architecture combinations. Below you can see the appropriate command and the selections you need to make in order to obtain Ubuntu Wily (15.10) for the amd64 architecture.

user@host:$ sudo lxc-create -t download -n snappydev

Setting up the GPG keyring
Downloading the image index

---
DIST    RELEASE ARCH    VARIANT BUILD
---
centos  6       amd64   default 20160129_02:16
centos  6       i386    default 20160129_02:16
centos  7       amd64   default 20160129_02:16
debian  jessie  amd64   default 20160128_22:42
debian  jessie  armel   default 20160111_22:42
debian  jessie  armhf   default 20160111_22:42
debian  jessie  i386    default 20160128_22:42
debian  sid     amd64   default 20160128_22:42
debian  sid     armel   default 20160111_22:42
debian  sid     armhf   default 20160111_22:42
debian  sid     i386    default 20160128_22:42
debian  squeeze amd64   default 20160128_22:42
debian  squeeze armel   default 20150826_22:42
debian  squeeze i386    default 20160128_22:42
debian  wheezy  amd64   default 20160128_22:42
debian  wheezy  armel   default 20160111_22:42
debian  wheezy  armhf   default 20160111_22:42
debian  wheezy  i386    default 20160128_22:42
fedora  21      amd64   default 20160129_01:27
fedora  21      armhf   default 20160112_01:27
fedora  21      i386    default 20160129_01:27
fedora  22      amd64   default 20160129_01:27
fedora  22      armhf   default 20160112_01:27
fedora  22      i386    default 20160129_01:27
gentoo  current amd64   default 20160129_14:12
gentoo  current armhf   default 20160111_14:12
gentoo  current i386    default 20160129_14:12
opensuse        12.3    amd64   default 20160129_00:53
opensuse        12.3    i386    default 20160129_00:53
oracle  6.5     amd64   default 20160129_11:40
oracle  6.5     i386    default 20160129_11:40
plamo   5.x     amd64   default 20160129_21:36
plamo   5.x     i386    default 20160129_21:36
ubuntu  precise amd64   default 20160129_03:49
ubuntu  precise armel   default 20160112_03:49
ubuntu  precise armhf   default 20160112_03:49
ubuntu  precise i386    default 20160129_03:49
ubuntu  trusty  amd64   default 20160129_03:49
ubuntu  trusty  arm64   default 20150604_03:49
ubuntu  trusty  armhf   default 20160112_03:49
ubuntu  trusty  i386    default 20160129_03:49
ubuntu  trusty  ppc64el default 20160129_03:49
ubuntu  vivid   amd64   default 20160129_03:49
ubuntu  vivid   arm64   default 20150604_03:49
ubuntu  vivid   armhf   default 20160112_03:49
ubuntu  vivid   i386    default 20160129_03:49
ubuntu  vivid   ppc64el default 20160129_03:49
ubuntu  wily    amd64   default 20160129_03:49
ubuntu  wily    arm64   default 20150604_03:49
ubuntu  wily    armhf   default 20160112_03:49
ubuntu  wily    i386    default 20160129_03:49
ubuntu  wily    ppc64el default 20160129_03:49
ubuntu  xenial  amd64   default 20160129_03:49
ubuntu  xenial  armhf   default 20160112_03:49
ubuntu  xenial  i386    default 20160129_03:49
---

Distribution: ubuntu
Release: wily
Architecture: amd64

Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=wily, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

As you can see above, type ubuntu, wily and amd64 at the prompts and wait until the proper image and root filesystem have been downloaded. Once done, your container will have been created, but it is not yet running.

Setting up networking

Before we proceed any further we will need to address the networking issue. Our container won’t have any networking capability if we leave it like that, so let’s get to work.

We will need to set up a network bridge between the host and the container. There are two main ways to accomplish this, and they are quite different. One is with netctl, which I would recommend only if you are using a wired connection; the other is with systemd-networkd, which is the way to go if you are on a wifi connection. The second way can also be used for a wired connection if you are already using systemd-networkd.

Wired Connection

For a wired connection you can use netctl, so make sure you have installed the appropriate package.

user@host:$ sudo pacman -S netctl

Setup network bridge with netctl

To create a bridge with netctl, create the file /etc/netctl/lxcbridge and paste the following into it:

Description="LXC bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=('eno1')
IP=static
Address=10.0.2.1/24
FwdDelay=0

In BindsToInterfaces, use the interface you would like to bind to. For Address you can keep the same value as mine or use any valid address, as long as you remember it, since it is going to be needed when configuring the LXC itself.
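
If you are unsure what your wired interface is actually called (mine is eno1), you can list the host’s interfaces with iproute2 first:

user@host:$ ip link show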

Now let’s switch to and start the bridge:

user@host:$ sudo netctl switch-to lxcbridge
user@host:$ sudo netctl start lxcbridge

If you run ifconfig right now you will see that the bridge interface is up and running.
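
If you don’t have ifconfig installed (it comes from the optional net-tools package), the iproute2 equivalent works just as well; br0 is the Interface name from the lxcbridge profile above:

user@host:$ ip addr show br0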

Wifi Connection

If you are using wifi then theoretically you can still try to use netctl as before to make the bridge work; there is even a guide you can follow. Personally I had trouble making it work, and from researching the topic it seems that a network bridge over wifi is easier to achieve with systemd-networkd. If your systemd version is greater than or equal to 210 then it comes with networkd built in. Thankfully, at the time of this post’s writing the systemd version in ArchLinux is 228.

Also make sure you have the wpa_supplicant and dnsmasq packages installed:

user@host:$ sudo pacman -S wpa_supplicant dnsmasq

Setup wireless network with systemd-networkd

I will assume you have never used systemd-networkd before. If you have already been using it successfully then you can skip to the next section.

First of all, disable any other service that was managing your wireless connection, such as NetworkManager.

user@host:$ sudo systemctl disable NetworkManager
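
Note that disabling only takes effect on the next boot; to also stop the currently running service:

user@host:$ sudo systemctl stop NetworkManager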

Then we can go ahead and start/enable the relevant systemd services:

user@host:$ sudo systemctl enable systemd-networkd.service
user@host:$ sudo systemctl start systemd-networkd.service
user@host:$ sudo systemctl enable systemd-resolved.service
user@host:$ sudo systemctl start systemd-resolved.service

For compatibility with the old /etc/resolv.conf, delete the old file and symlink the systemd equivalent in its place.

user@host:$ sudo rm -rf /etc/resolv.conf
user@host:$ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

Now add the network configuration under /etc/systemd/network/. Create a file with any name ending in the .network suffix. I like naming the file after the interface it’s made for, so for a wireless interface named wlp4s0 I create the file /etc/systemd/network/wlp4s0.network.

[Match]
Name=wlp4s0

[Network]
DHCP=yes

[DHCP]
RouteMetric=20

The important parts are the Name, which should match the interface name, and the fact that we want DHCP. If you want a static IP, this is where you would set it up; a sketch follows below. The RouteMetric gives priority to a potential wired connection over the wireless one.
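
For reference, a static variant of the same file could look like the following sketch, where the Address, Gateway and DNS values are placeholders you would adapt to your own network, replacing the DHCP sections above:

[Match]
Name=wlp4s0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1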

Now we have to enable the wpa_supplicant service for our wireless interface:

user@host:$ sudo systemctl enable wpa_supplicant@wlp4s0.service

Replace wlp4s0 with your interface name. This should create the configuration file /etc/wpa_supplicant/wpa_supplicant-wlp4s0.conf. Make sure it contains the following at the beginning:

ctrl_interface=/run/wpa_supplicant
update_config=1
fast_reauth=1
ap_scan=1

After that, the network SSID-passphrase combinations should follow. If you don’t know how to use wpa_supplicant to connect to a wireless network then this article will certainly be useful to you.
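
As a quick sketch, you can append such a network block with the wpa_passphrase helper; MyNetwork and MyPassphrase are placeholders for your own SSID and passphrase:

user@host:$ wpa_passphrase "MyNetwork" "MyPassphrase" | sudo tee -a /etc/wpa_supplicant/wpa_supplicant-wlp4s0.conf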

Setup a network bridge with systemd-networkd

Now it is time to create the actual bridge using systemd-networkd. Create the file /etc/systemd/network/lxcbr0.netdev and paste the following in there:

[NetDev]
Name=lxcbr0
Kind=bridge

The name can be anything, but it is what identifies this bridge. Also create /etc/systemd/network/lxcbr0.network and put the following in it, making sure the name matches the one above.

[Match]
Name=lxcbr0

[Network]
IPForward=yes
IPMasquerade=yes

Subsequently enable and start the lxc-net service which will help us in setting up the bridge.

user@host:$ sudo systemctl enable lxc-net
user@host:$ sudo systemctl start lxc-net
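
To verify that the bridge came up, check for the lxcbr0 interface; brctl comes from the bridge-utils package we installed earlier:

user@host:$ ip addr show lxcbr0
user@host:$ brctl show lxcbr0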

Finally make sure to check that the contents of /etc/default/lxc closely match the following:

# LXC_AUTO - whether or not to start containers at boot
LXC_AUTO="true"

# BOOTGROUPS - What groups should start on bootup?
#    Comma separated list of groups.
#    Leading comma, trailing comma or embedded double
#    comma indicates when the NULL group should be run.
# Example (default): boot the onboot group first then the NULL group
BOOTGROUPS="onboot,"

# SHUTDOWNDELAY - Wait time for a container to shut down.
#    Container shutdown can result in lengthy system
#    shutdown times.  Even 5 seconds per container can be
#    too long.
SHUTDOWNDELAY=5

# OPTIONS can be used for anything else.
#    If you want to boot everything then
#    options can be "-a" or "-a -A".
OPTIONS=

# STOPOPTS are stop options.  The can be used for anything else to stop.
#    If you want to kill containers fast, use -k
STOPOPTS="-a -A -s"

USE_LXC_BRIDGE="true"  # overridden in lxc-net

[ ! -f /etc/default/lxc-net ] || . /etc/default/lxc-net

In particular, make sure that USE_LXC_BRIDGE is set to "true".

Miscellaneous network requirements

Since we are now using a network bridge we have to make sure that the kernel parameter that allows packet forwarding between interfaces is enabled.

user@host:$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
user@host:$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

As shown above, query the kernel value with sysctl and, if it is 0, enable it. Finally, use the following command to make the host perform NAT using iptables:

user@host:$ sudo iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE

Make sure to replace eno1 with the interface that your host is actually using.
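
Both of these settings are lost on reboot. As a sketch (the sysctl file name is my choice; /etc/iptables/iptables.rules is the file the ArchLinux iptables service reads), you can persist them like so:

user@host:$ echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/40-ip-forward.conf
user@host:$ sudo sh -c 'iptables-save > /etc/iptables/iptables.rules'
user@host:$ sudo systemctl enable iptables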

Configuring the Container

The configuration file of the container is located at /var/lib/lxc/snappydev/config where snappydev is the name you gave to your LXC container. It should be almost empty.

General and networking attributes of the config file

Below is a snapshot of the config file, where networking has also been taken care of:

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs = /var/lib/lxc/snappydev/rootfs
lxc.utsname = snappydev

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
# lxc.network.ipv4 = 192.168.100.10/24
# lxc.network.ipv4.gateway = 192.168.100.1
lxc.network.name = eth0

Everything up to lxc.utsname is auto-generated by the template, so there is no need to worry about it. After that comes the network configuration. You can copy it as is if you have configured the lxcbridge with netctl the same way I did.

If you have set up the bridge with a static IP then you can add the IP and the gateway as shown above, with lxc.network.ipv4 and lxc.network.ipv4.gateway set to the same addresses as you had in your bridge configuration (/etc/netctl/lxcbridge for netctl and /etc/systemd/network/lxcbr0.network for systemd-networkd). I have them commented out since the guide assumes you use lxc-net or netctl with DHCP, in which case you don’t need to specify IPs. Also make sure that lxc.network.link is the same as the Name field in the bridge configuration file.
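
As an example, assuming the netctl bridge from earlier with Address=10.0.2.1/24, a static configuration could look like this, where 10.0.2.10 is just an arbitrary free address in that subnet:

lxc.network.ipv4 = 10.0.2.10/24
lxc.network.ipv4.gateway = 10.0.2.1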

ArchLinux and systemd host-specific configuration

If you are on an ArchLinux host like I am then you definitely need to add a few more entries at the end of the LXC configuration file:

# By default, lxc symlinks /dev/kmsg to /dev/console, this leads to journald running
# at 100% CPU usage all the time. To prevent the symlink we use this:
lxc.kmsg = 0
# Maintain devpts consistency
lxc.pts = 1024
## required for systemd
lxc.autodev = 1
lxc.hook.autodev=/var/lib/lxc/snappydev/autodev

Setting lxc.kmsg to 0 is required, as the comment explains, in order to avoid the /dev/kmsg symlink that would otherwise drive journald to 100% CPU utilization.

Setting lxc.pts to 1024 ensures that devpts is mounted as a new instance and the container doesn’t just get the host’s instance which would have pretty negative results.

Setting lxc.autodev to 1 makes LXC mount a fresh tmpfs under /dev in the container’s rootfs (limited to 100k) and fill in a minimal set of initial devices. This is required only if the container uses systemd-based initialization, which Ubuntu 15.10 does.

Through the lxc.hook.autodev hook, additional devices are created in the container’s /dev directory. To accomplish this, create the file /var/lib/lxc/snappydev/autodev and paste the following inside:

#!/bin/bash
# Runs inside the container's namespace; LXC_ROOTFS_MOUNT points at the container's rootfs.
cd ${LXC_ROOTFS_MOUNT}/dev
# Create the /dev/net/tun device node so tun/tap networking works inside the container.
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

Also make sure that it is executable:

user@host:$ sudo chmod +x /var/lib/lxc/snappydev/autodev

This hook runs in the container’s namespace after mounting has been done and after any mount hooks have run, but before the pivot_root; it creates the extra nodes needed under /dev so that systemd can work properly in the container.

Using the container

Now that we are done preparing the host and the LXC setup, we can finally enjoy the fruits of our labour and run the container.

How to start it

We can start the container with:

user@host:$ sudo lxc-start -n snappydev

If all went well you can see that the container is running by checking it with lxc-ls:

user@host:$ sudo lxc-ls -f
NAME       STATE    IPV4           IPV6  GROUPS  AUTOSTART  
----------------------------------------------------------
snappydev  RUNNING  192.168.1.164  -     -       NO
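
Notice that AUTOSTART reads NO. If you want the container to come up together with the host, one way to do it (a sketch, assuming the lxc@ systemd service template shipped with the ArchLinux lxc package) is to add the following to the container config and enable the corresponding unit:

lxc.start.auto = 1

user@host:$ sudo systemctl enable lxc@snappydev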

If it has not started properly then you should debug it by starting the container in the foreground with DEBUG log priority:

user@host:$ sudo lxc-start -n snappydev -F --logpriority=DEBUG

This should provide enough information for you to find out what may be wrong with your container’s setup. One quite common error is container initialization getting stuck at:

[ ... ] Starting LSB: Raise network interfaces....

If you see that, double-check your networking configuration, because it means there is a mistake in it.

How to enter it

There are two ways to enter the container: lxc-console, which gives you a login prompt, and lxc-attach, which drops you directly at a root prompt.

Normally the default user should be ubuntu with password ubuntu, but at the time of writing this post that did not seem to work for me. So we will have to go in the hard way: attach as root and set the ubuntu user’s password.

user@host:$ sudo lxc-attach -n snappydev
group: cannot find name for group ID 19
root@snappydev:/# ls
bash: ls: command not found

ls not found? What trickery is this? Let’s see where $PATH points:

root@snappydev:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin/

It seems we have inherited the host’s PATH, which follows the ArchLinux layout where all binaries live under /usr/bin, while Ubuntu 15.10 still keeps essential binaries like ls in /bin, which is missing from that PATH. That’s easy to fix. Simply add the following to the container’s /root/.bashrc to be able to access the Ubuntu binaries as root inside the container.

# inside the container we inherit the PATH as set on the ArchLinux host, and that's not going to work for us in Ubuntu
export PATH="/bin:/usr/bin:/sbin:$PATH"

In any case the important part here is to set the ubuntu user’s password.

root@snappydev:/# passwd ubuntu
Enter new UNIX password
Retype new UNIX password
passwd: password updated successfully

Now from our host we can get a login as a normal user by typing:

user@host:$ sudo lxc-console -n snappydev

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Ubuntu 15.10 snappydev pts/0

snappydev login: ubuntu
Password: 
Last login: Sat Jan 30 10:40:01 UTC 2016 on pts/0
ubuntu@snappydev:~$

One important note: when you log out of the container and want to quit the login prompt and go back to the host, you may find that <Ctrl-C> or <Ctrl-D> does not work. The right key combination, as the connection banner above hints, is <Ctrl+a q>.
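
When you are done working, you can stop the container from the host with:

user@host:$ sudo lxc-stop -n snappydev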

Conclusion – or a Snappy beginning

Now that we have finally managed to enter the container, we should get some common dev packages and add the snappy PPA so that we can download the snappy tools.

ubuntu@snappydev:~$  sudo apt-get install software-properties-common
ubuntu@snappydev:~$  sudo apt-add-repository ppa:snappy-dev/tools
ubuntu@snappydev:~$  sudo apt-get update
ubuntu@snappydev:~$  sudo apt-get install snappy-tools

From here on you can treat the container like a normal Ubuntu 15.10 host and follow the Snappy Guide to get started, or if you are feeling a bit more adventurous and want to experiment with Snappy Ethereum you can read the tutorial I wrote on it.

As usual if you have any feedback/comments/suggestions on the post feel free to leave a message down here. Till next time!
