Crypto Price Ticker on the System Tray

Introduction

Like many other people, I love customizing my Linux machine’s system tray. I use the i3 window manager and the conky system monitor to show various interesting stats in the system tray, from the CPU temperature and current time to the amount of I/O the system is performing at any given moment.

This is a short post showcasing how this can be achieved and, specifically, how to also add price tickers for your favorite crypto coins to the system tray.

Configuration

I will assume that the reader is using i3 and is familiar with its setup process. If you are not using i3 that is also fine, since conky can display information either in pure text form (which we will use from i3) or using simple progress bars and graph widgets, with different fonts and colours. Check the conky user configuration wiki for more information, examples and screenshots.

i3 and Conky Setup

In order to use conky on the i3 bar, install its package and make sure to have the following inside your i3 config file (~/.i3/config):

# Start i3bar to display conky status
bar {
    font pango:DejaVu Sans Mono 9
    status_command ~/.local/bin/conky-i3-bar.sh
}

Essentially this sets the font to be used inside the i3-bar and also specifies a script to run that populates the contents of the bar.

#!/bin/sh

# Send the header so that i3bar knows we want to use JSON:
echo '{"version":1}'

# Begin the endless array.
echo '['

# We send an empty first array of blocks to make the loop simpler:
echo '[],'

# Now send blocks with information forever:
exec conky -c ~/.conkyrc

Above are the contents of ~/.local/bin/conky-i3-bar.sh. It’s a very simple script that notifies i3bar that we will be using JSON and then starts the endless JSON array by executing conky with our conkyrc config file. Remember to make the script executable (chmod +x ~/.local/bin/conky-i3-bar.sh).
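For reference, the stream that i3bar ends up receiving looks roughly like this (the uptime block is just an illustrative example): the header, the opening bracket and the empty first array come from the script, while every following comma-terminated array is printed by conky on each update.

```json
{"version":1}
[
[],
[{"full_text": " up [1d 2h] ", "color": "#3399CC"}],
[{"full_text": " up [1d 2h] ", "color": "#3399CC"}],
...
```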

out_to_x no
own_window no
out_to_console yes
background no
max_text_width 0

# Update interval in seconds
update_interval 2.0

# This is the number of times Conky will update before quitting.
# Set to zero to run forever.
total_run_times 0

# Shortens units to a single character (kiB->k, GiB->G, etc.). Default is off.
short_units yes

# How strict should if_up be when testing an interface for being up?
# The value is one of up, link or address, to check for the interface
# being solely up, being up and having link or being up, having link
# and an assigned IP address. 
if_up_strictness address

# Add spaces to keep things from moving about?  This only affects certain objects.
# use_spacer should have an argument of left, right, or none
use_spacer left

# Force UTF8? note that UTF8 support required XFT
override_utf8_locale no

# number of cpu samples to average
# set to 1 to disable averaging
cpu_avg_samples 2

# Stuff after 'TEXT' will be formatted on screen
TEXT

# JSON for i3bar

 [
{"full_text": " ❤ $acpitemp°C [$cpu%] ","color": 
              ${if_match ${acpitemp}<50}"\#007000"${else}"\#E60000"${endif}},
{"full_text": " I/O: $diskio", "color":"\#D683FF"},
{"full_text": " GPU: ${execi 60 nvidia-smi -q -d TEMPERATURE | grep Gpu | cut -c39-40}°C",
              "color": "\#3E63D1"},
{"full_text": " ≣ [$memeasyfree] ", "color":"\#B58900"},
{"full_text": " ⛁ / [${fs_free /}] ", "color": "\#99CC33"},
{"full_text": " ⛁ /home [${fs_free /home}] ", "color": "\#99CC33"},
{"full_text": " ≈ ${wireless_essid wlan0} [${wireless_link_qual_perc wlan0}%] ","color":"\#33CC99"},
{"full_text": " ☍ eno1 [${addr eno1}] ","color":"\#33CC99"},
{"full_text": " up [${uptime}] ", "color": "\#3399CC"},
{"full_text": " ${time %Y-%m-%d %H:%M:%S} "}
],

Pasted above are the contents of the conkyrc file I am using. The important part to customize is the JSON for i3bar section after TEXT.

Essentially, conky periodically prints the array shown above and i3bar renders one block per object: the "full_text" value is what gets displayed and "color" sets its colour. In the snippet above, keep only the entries you are interested in.

You should replace eno1 and wlan0 with the names of your wired and wireless interfaces respectively. Additionally, if you do not have an nvidia card and/or don’t have the nvidia-utils package, you can safely remove the nvidia entry.

All of the above will be updated every 2 seconds (the update_interval), except for the entries using execi, which take as their first argument the number of seconds to use as the execution interval for that specific conky entry.

Adding a Crypto Price Ticker to conkyrc

In order to add a crypto price ticker to conkyrc, we first have to get the data from somewhere. For this example we will use the well-known cryptocurrency exchange Kraken and its API.

Kraken uses a very simple REST API which allows you to make simple HTTP queries and get results in JSON format.

For example by going to https://api.kraken.com/0/public/Ticker?pair=ETHEUR you will get a JSON dictionary containing the query’s response. Below you can see an example:

{"error":[],"result":{"XETHZEUR":{"a":["10.99329","10","10.000"],"b":["10.91936","93","93.000"],"c":["10.91835","0.21264000"],"v":["120983.95570198","192533.01230294"],"p":["10.98003","10.75769"],"t":[3377,5114],"l":["10.39367","10.01880"],"h":["11.40000","11.40000"],"o":"10.49744"}}}

To learn what all the different values mean you can consult the API page linked above and you will see this explanatory table:

<pair_name> = pair name
    a = ask array(<price>, <whole lot volume>, <lot volume>),
    b = bid array(<price>, <whole lot volume>, <lot volume>),
    c = last trade closed array(<price>, <lot volume>),
    v = volume array(<today>, <last 24 hours>),
    p = volume weighted average price array(<today>, <last 24 hours>),
    t = number of trades array(<today>, <last 24 hours>),
    l = low array(<today>, <last 24 hours>),
    h = high array(<today>, <last 24 hours>),
    o = today's opening price

In order to add a price ticker we probably want the last closed trade, so from the above example we need to extract "10.91835" from the JSON data. And due to the way conkyrc expects its entries, this operation should be a one-liner.

Using some console magic we can avoid requiring any extra packages, using nothing but curl and grep.

echo `curl -s https://api.kraken.com/0/public/Ticker?pair=ETHEUR` \
    | grep -Po '"c":.*?[^\\]",' \
    | grep -Po '[0-9.]+'
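Before wiring this into conky you can sanity-check the extraction offline, by running the same pipeline on the sample response from above (stored in a variable here instead of fetched with curl):

```shell
# Sample Kraken ticker response, trimmed to the fields we care about:
json='{"error":[],"result":{"XETHZEUR":{"c":["10.91835","0.21264000"],"o":"10.49744"}}}'

# The first grep isolates the "c" (last trade closed) entry up to its first
# closing quote-comma; the second keeps only the leading number:
price=$(printf '%s' "$json" | grep -Po '"c":.*?[^\\]",' | grep -Po '[0-9.]+')
echo "$price"    # 10.91835
```

If you prefer a more robust approach and happen to have jq installed, the same value can be extracted with jq -r '.result.XETHZEUR.c[0]'.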

So in the end the JSON part of the conkyrc becomes:

# JSON for i3bar

 [
{"full_text": " €/Ξ: ${execi 60 echo `curl -s https://api.kraken.com/0/public/Ticker?pair=ETHEUR` | grep -Po '"c":.*?[^\\]",' | grep  -Po '[0-9.]+'}", "color":"\#D683FF"},
{"full_text": " €/Ƀ: ${execi 60 echo `curl -s https://api.kraken.com/0/public/Ticker?pair=BTCEUR` | grep -Po '"c":.*?[^\\]",' | grep  -Po '[0-9.]+'}", "color":"\#D683FF"},
{"full_text": " ❤ $acpitemp°C [$cpu%] ","color": 
              ${if_match ${acpitemp}<50}"\#007000"${else}"\#E60000"${endif}},
{"full_text": " I/O: $diskio", "color":"\#D683FF"},
{"full_text": " GPU: ${execi 60 nvidia-smi -q -d TEMPERATURE | grep Gpu | cut -c39-40}°C",
              "color": "\#3E63D1"},
{"full_text": " ≣ [$memeasyfree] ", "color":"\#B58900"},
{"full_text": " ⛁ / [${fs_free /}] ", "color": "\#99CC33"},
{"full_text": " ⛁ /home [${fs_free /home}] ", "color": "\#99CC33"},
{"full_text": " ≈ ${wireless_essid wlan0} [${wireless_link_qual_perc wlan0}%] ","color":"\#33CC99"},
{"full_text": " ☍ eno1 [${addr eno1}] ","color":"\#33CC99"},
{"full_text": " up [${uptime}] ", "color": "\#3399CC"},
{"full_text": " ${time %Y-%m-%d %H:%M:%S} "}
],

One more thing to note is that we used execi 60 so as not to spam Kraken’s servers and have our system’s IP blacklisted. The end result is a very nice-looking price ticker in the system tray, as we can see in the picture below.

(screenshot: systemtray.jpg)

The example above used Ether and Bitcoin. Note that conky also accepts Unicode, so we are using the Unicode symbols for both cryptocurrencies as well as for the Euro. Now, while working, you can be distracted by the price changes of your favorite crypto coins displayed right on the system tray 🙂

Conclusion

We explored how to customize the system tray of a Linux machine using conky, with a concrete example using the i3bar. We also saw how to add a crypto price ticker by querying an external server for price data, processing it, and feeding it into conky’s endless JSON stream that ends up displayed on the i3bar.

How do you use the system tray in your setup? Do you also use conky? Got any cool scripts or screenshots to show? Got any feedback for the method outlined in this post? If so please don’t hesitate to post in the comments about it.


About the Author

(photo: profile2.png)

Lefteris Karapetsas is a passionate developer/tinkerer currently located in Berlin.

After graduating from the University of Tokyo, Lefteris has been developing backend software for various companies including Oracle and Acmepacket. He is an all-around tinkerer who loves to take things apart and put them back together, learning how they work in the process.

His interests include language/compiler design, Artificial Intelligence, Robotics, Intelligent Systems and Systems programming. He feels at home with C code and GDB.

More recently he has gained a lot of blockchain expertise, having been part of Ethereum as a C++ core developer since November 2014 and having worked on Solidity, the ethash algorithm, the core client and the CI system. Additionally, he is the tech lead of slock.it, where he took part in connecting Blockchain and IoT with the Ethereum computer and the creation of the DAO.

 

Twitter: @lefterisjp

contact: lefteris@refu.co

Developing for Snappy Ubuntu from any distro using LXC

Introduction

Lately I have been working a lot with Snappy Ubuntu Core. It’s a very nice, minimal, transactionally updated Ubuntu for IoT devices. Being an ArchLinux user, I was annoyed that I needed an Ubuntu host for development, so at first I was developing from an Ubuntu VM running inside my ArchLinux host.

As many of you probably already know, working with virtual machines is slow and cumbersome since, among other things, you have to recreate your host’s development environment inside them.

A much more attractive alternative is an LXC container, which is what this post will address. Our host system will be ArchLinux and we will set up an Ubuntu 15.10 LXC container to develop for Snappy Ubuntu Core in. LXC creation for other Linux distros should be similar, so the post can be followed up to a point regardless. Note that this guide is also largely based on the LXC article in the ArchLinux wiki.

(image: lxc.png)

Setting up the host

Getting required packages

We will need lxc, arch-install-scripts and bridge-utils.

user@host:$ sudo pacman -S lxc arch-install-scripts bridge-utils

To ensure that your lxc is appropriately configured run:

user@host:$ lxc-checkconfig

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

If you see output like the above then your lxc setup should be okay. Notice that User namespace is missing; that should only be the case on ArchLinux, where it is not supported at the time of writing. It means we will always have to run all lxc commands with root privileges.

Create the Container

In order to create our container we base it off the download template, which gives us the option to choose from different OS/arch combinations. Below you can see the appropriate command and the selections you need to make in order to obtain Ubuntu Wily (15.10) for the amd64 architecture.

user@host:$ sudo lxc-create -t download -n snappydev

Setting up the GPG keyring
Downloading the image index

---
DIST    RELEASE ARCH    VARIANT BUILD
---
centos  6       amd64   default 20160129_02:16
centos  6       i386    default 20160129_02:16
centos  7       amd64   default 20160129_02:16
debian  jessie  amd64   default 20160128_22:42
debian  jessie  armel   default 20160111_22:42
debian  jessie  armhf   default 20160111_22:42
debian  jessie  i386    default 20160128_22:42
debian  sid     amd64   default 20160128_22:42
debian  sid     armel   default 20160111_22:42
debian  sid     armhf   default 20160111_22:42
debian  sid     i386    default 20160128_22:42
debian  squeeze amd64   default 20160128_22:42
debian  squeeze armel   default 20150826_22:42
debian  squeeze i386    default 20160128_22:42
debian  wheezy  amd64   default 20160128_22:42
debian  wheezy  armel   default 20160111_22:42
debian  wheezy  armhf   default 20160111_22:42
debian  wheezy  i386    default 20160128_22:42
fedora  21      amd64   default 20160129_01:27
fedora  21      armhf   default 20160112_01:27
fedora  21      i386    default 20160129_01:27
fedora  22      amd64   default 20160129_01:27
fedora  22      armhf   default 20160112_01:27
fedora  22      i386    default 20160129_01:27
gentoo  current amd64   default 20160129_14:12
gentoo  current armhf   default 20160111_14:12
gentoo  current i386    default 20160129_14:12
opensuse        12.3    amd64   default 20160129_00:53
opensuse        12.3    i386    default 20160129_00:53
oracle  6.5     amd64   default 20160129_11:40
oracle  6.5     i386    default 20160129_11:40
plamo   5.x     amd64   default 20160129_21:36
plamo   5.x     i386    default 20160129_21:36
ubuntu  precise amd64   default 20160129_03:49
ubuntu  precise armel   default 20160112_03:49
ubuntu  precise armhf   default 20160112_03:49
ubuntu  precise i386    default 20160129_03:49
ubuntu  trusty  amd64   default 20160129_03:49
ubuntu  trusty  arm64   default 20150604_03:49
ubuntu  trusty  armhf   default 20160112_03:49
ubuntu  trusty  i386    default 20160129_03:49
ubuntu  trusty  ppc64el default 20160129_03:49
ubuntu  vivid   amd64   default 20160129_03:49
ubuntu  vivid   arm64   default 20150604_03:49
ubuntu  vivid   armhf   default 20160112_03:49
ubuntu  vivid   i386    default 20160129_03:49
ubuntu  vivid   ppc64el default 20160129_03:49
ubuntu  wily    amd64   default 20160129_03:49
ubuntu  wily    arm64   default 20150604_03:49
ubuntu  wily    armhf   default 20160112_03:49
ubuntu  wily    i386    default 20160129_03:49
ubuntu  wily    ppc64el default 20160129_03:49
ubuntu  xenial  amd64   default 20160129_03:49
ubuntu  xenial  armhf   default 20160112_03:49
ubuntu  xenial  i386    default 20160129_03:49
---

Distribution: ubuntu
Release: wily
Architecture: amd64

Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=wily, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

As you can see above, type ubuntu, wily and amd64 at the prompts and wait until the image and root file system have been downloaded. Once done, your container will have been created, but it is not yet running.

Setting up networking

Before we proceed any further we will need to address the networking issue. Our container won’t have any networking capability if we leave it just like that so let’s get to work.

We will need to set up a network bridge between the host and the container. There are two main ways to accomplish this and they are quite different. One is with netctl, which I would recommend only if you are using a wired connection, and the other is with systemd-networkd, if you are using a wifi connection. The second way can also be used for a wired connection if you are already using systemd-networkd.

Wired Connection

For a wired connection you can use netctl so make sure you have installed the appropriate package.

user@host:$ sudo pacman -S netctl

Setup network bridge with netctl

To create a bridge with netctl create the file /etc/netctl/lxcbridge and paste the following into it:

Description="LXC bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=('eno1')
IP=static
Address=10.0.2.1/24
FwdDelay=0

For BindsToInterfaces, use the interface you would like to bind to. For Address you can keep the same value as mine, or use any valid address as long as you remember it, since it will be needed to configure the LXC container itself.

Now let’s switch to and start the bridge:

user@host:$ sudo netctl switch-to lxcbridge
user@host:$ sudo netctl start lxcbridge

If you run ifconfig right now you will see that the bridge interface is up and running.

Wifi Connection

If you are using wifi then, theoretically, you can still use netctl as before to make the bridge work; there is even a guide for it. Personally I had trouble making it work, and from researching the topic it seems easier to achieve a network bridge over wifi with systemd-networkd. If your systemd version is greater than or equal to 210, it comes with networkd built in. Thankfully, at the time of writing, the systemd version on ArchLinux is 228.

Also make sure you have the wpa_supplicant and dnsmasq packages:

user@host:$ sudo pacman -S wpa_supplicant dnsmasq

Setup wireless network with systemd-networkd

I will assume you have never used systemd-networkd before. If you have already been using it successfully, you can skip to the next section.

First of all, disable any other service that was managing your wireless connection, such as NetworkManager.

user@host:$ sudo systemctl disable NetworkManager

Then we can go ahead and start/enable the relevant systemd services:

user@host:$ sudo systemctl enable systemd-networkd.service
user@host:$ sudo systemctl start systemd-networkd.service
user@host:$ sudo systemctl enable systemd-resolved.service
user@host:$ sudo systemctl start systemd-resolved.service

For compatibility with the old /etc/resolv.conf, delete the old file and symlink the systemd equivalent in its place:

user@host:$ sudo rm -rf /etc/resolv.conf
user@host:$ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

Now add the network configuration under /etc/systemd/network/. Create a file with any name ending in the .network suffix. I like naming the file after the interface it is for, so for a wireless interface named wlp4s0 I create the file /etc/systemd/network/wlp4s0.network.

[Match]
Name=wlp4s0

[Network]
DHCP=yes

[DHCP]
RouteMetric=20

The important parts are Name, which should match the interface name, and the fact that we want DHCP. If you want a static IP, this is where you would set it up. RouteMetric gives priority to a potential wired connection over the wireless one.
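For example, a static-IP variant of the same file could look like the following (the addresses here are placeholders, adjust them to your own network):

```ini
[Match]
Name=wlp4s0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
```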

Now we have to enable the wpa_supplicant service for our wireless interface:

user@host:$ sudo systemctl enable wpa_supplicant@wlp4s0.service

Replace wlp4s0 with your interface name. This service expects a configuration file at /etc/wpa_supplicant/wpa_supplicant-wlp4s0.conf. Make sure it contains the following at the beginning:

ctrl_interface=/run/wpa_supplicant
update_config=1
fast_reauth=1
ap_scan=1

After that, the network SSID/passphrase combinations should follow. If you don’t know how to use wpa_supplicant to connect to a wireless network, this article will certainly be useful to you.

Setup a network bridge with systemd-networkd

Now it is time to create the actual bridge using systemd-networkd. Create the file /etc/systemd/network/lxcbr0.netdev and paste the following into it:

[NetDev]
Name=lxcbr0
Kind=bridge

The name can be anything, but it is what identifies this bridge. Also create /etc/systemd/network/lxcbr0.network and put the following in it, making sure the name matches the one above.

[Match]
Name=lxcbr0

[Network]
IPForward=yes
IPMasquerade=yes

Subsequently enable and start the lxc-net service which will help us in setting up the bridge.

user@host:$ sudo systemctl enable lxc-net
user@host:$ sudo systemctl start lxc-net

Finally make sure to check that the contents of /etc/default/lxc closely match the following:

# LXC_AUTO - whether or not to start containers at boot
LXC_AUTO="true"

# BOOTGROUPS - What groups should start on bootup?
#    Comma separated list of groups.
#    Leading comma, trailing comma or embedded double
#    comma indicates when the NULL group should be run.
# Example (default): boot the onboot group first then the NULL group
BOOTGROUPS="onboot,"

# SHUTDOWNDELAY - Wait time for a container to shut down.
#    Container shutdown can result in lengthy system
#    shutdown times.  Even 5 seconds per container can be
#    too long.
SHUTDOWNDELAY=5

# OPTIONS can be used for anything else.
#    If you want to boot everything then
#    options can be "-a" or "-a -A".
OPTIONS=

# STOPOPTS are stop options.  The can be used for anything else to stop.
#    If you want to kill containers fast, use -k
STOPOPTS="-a -A -s"

USE_LXC_BRIDGE="true"  # overridden in lxc-net

[ ! -f /etc/default/lxc-net ] || . /etc/default/lxc-net

Especially assert that USE_LXC_BRIDGE is set to "true".

Miscellaneous network requirements

Since we are now using a network bridge we have to make sure that the kernel parameter that allows packet forwarding between interfaces is enabled.

user@host:$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
user@host:$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

As shown above, query the kernel value with sysctl and, if it is 0, enable it. Finally, use the following command to make the host perform NAT using iptables:

user@host:$ sudo iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE

Make sure to replace eno1 with the interface that your host is actually using.
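Note that a value set with sysctl -w does not survive a reboot. To make it permanent you can drop it into a file under /etc/sysctl.d/, for example /etc/sysctl.d/40-ip-forward.conf (the filename is my own choice):

```ini
net.ipv4.ip_forward = 1
```

The iptables rule is similarly non-persistent; how to persist it depends on your distribution (on ArchLinux the iptables.service unit loads /etc/iptables/iptables.rules).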

Configuring the Container

The configuration file of the container is located at /var/lib/lxc/snappydev/config where snappydev is the name you gave to your LXC container. It should be almost empty.

General and networking attributes of the config file

Below is a snapshot of the config file, where networking has also been taken care of:

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs = /var/lib/lxc/snappydev/rootfs
lxc.utsname = snappydev

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
# lxc.network.ipv4 = 192.168.100.10/24
# lxc.network.ipv4.gateway = 192.168.100.1
lxc.network.name = eth0

Everything up to lxc.utsname is auto-generated by the template, so there is no need to worry about it. After that comes the network configuration. You can copy it as-is if you have configured the lxcbridge with netctl the same way I did.

If you have set up the bridge with a static IP, you can add the IP and the gateway as shown above, with lxc.network.ipv4 and lxc.network.ipv4.gateway set to the same addresses as you had in your bridge configuration setup (/etc/netctl/lxcbridge for netctl and /etc/systemd/network/lxcbr0.network for systemd-networkd). I have them commented out since the guide assumes you use lxc-net or netctl with DHCP, in which case you don’t need to specify IPs. Also make sure that lxc.network.link matches the Name field in the bridge configuration file.

ArchLinux and systemd Host specific configuration

If you are on an ArchLinux host like I am then you definitely need to add a few more entries at the end of the LXC configuration file:

# By default, lxc symlinks /dev/kmsg to /dev/console, this leads to journald running
# at 100% CPU usage all the time. To prevent the symlink we use this:
lxc.kmsg = 0
# Maintain devpts consistency
lxc.pts = 1024
## required for systemd
lxc.autodev = 1
lxc.hook.autodev=/var/lib/lxc/snappydev/autodev

Setting lxc.kmsg to 0 is required as the comment also explains in order to avoid a symlink that will lead to 100% CPU utilization.

Setting lxc.pts to 1024 ensures that devpts is mounted as a new instance and the container doesn’t just get the host’s instance which would have pretty negative results.

Setting lxc.autodev to 1 makes LXC mount a fresh tmpfs under /dev in the container’s rootfs (limited to 100k) and fill in a minimal set of initial devices. This is required only if the container uses systemd-based initialization, which Ubuntu 15.10 does.

Through lxc.hook.autodev, additional devices are created in the container’s /dev directory. To accomplish this, create the file /var/lib/lxc/snappydev/autodev and paste the following inside:

#!/bin/bash
# Runs in the container's namespace after mounts, before pivot_root.
cd ${LXC_ROOTFS_MOUNT}/dev
# Create the tun device node (char device, major 10, minor 200).
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

Also make sure that it is executable:

user@host:$ sudo chmod +x /var/lib/lxc/snappydev/autodev

This hook runs in the container’s namespace after mounting is done and after any mount hooks have run, but before the pivot_root, and creates some extra nodes under /dev that systemd needs in order to work properly in the container.

Using the container

Now that we are done with the preparation of the host’s LXC setup, we can finally enjoy the fruits of our labour and run the container.

How to start it

We can start the container with:

user@host:$ sudo lxc-start -n snappydev

If all went well you can see that the container is running by checking it with lxc-ls:

user@host:$ sudo lxc-ls -f
NAME       STATE    IPV4           IPV6  GROUPS  AUTOSTART  
----------------------------------------------------------
snappydev  RUNNING  192.168.1.164  -     -       NO

If it has not started properly, you should debug it by starting the container in the foreground with debug log priority:

user@host:$ sudo lxc-start -n snappydev -F --logpriority=DEBUG

This should provide enough information for you to find out what is wrong with your container’s setup. One quite common error is the container initialization getting stuck at:

[ ... ] Starting LSB: Raise network interfaces....

If you see that, double-check your networking configuration, because it means you have made a mistake there.

How to enter it

There are two ways to enter the container. lxc-console which gives you a login prompt, and lxc-attach which drops you directly at the root prompt.

Normally the default user should be ubuntu with password ubuntu, but at the time of writing this post that did not work for me. So we will have to go in the hard way: attach as root and set the ubuntu user’s password.

user@host:$ sudo lxc-attach -n snappydev
group: cannot find name for group ID 19
root@snappydev:/# ls
bash: ls: command not found

ls not found? What trickery is this? Let’s see where $PATH points:

root@snappydev:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin/

It seems we have inherited the host’s PATH, where on ArchLinux everything lives under /usr/bin, whereas Ubuntu also keeps binaries under /bin and /sbin. That’s easy to fix. Simply add the following to the container’s /root/.bashrc to be able to access the Ubuntu binaries as root inside the container.

# Inside the container we inherit the PATH as set on the ArchLinux host,
# which is missing /bin and /sbin where Ubuntu keeps many binaries.
export PATH="/bin:/sbin:/usr/bin:$PATH"

In any case the important part here is to set the ubuntu user’s password.

root@snappydev:/# passwd ubuntu
Enter new UNIX password
Retype new UNIX password
passwd: password updated successfully

Now from our host we can get a login as a normal user by typing:

user@host:$ sudo lxc-console -n snappydev

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Ubuntu 15.10 snappydev pts/0

snappydev login: ubuntu
Password: 
Last login: Sat Jan 30 10:40:01 UTC 2016 on pts/0
ubuntu@snappydev:~$

One important note: when you log out of the container and want to quit the login prompt and go back to the host, you may find that <Ctrl-c> or <Ctrl-d> does not work. The right key combination to use here is <Ctrl+a q>, as the console banner above mentions.

Conclusion – or a Snappy beginning

Now that we have finally managed to enter the container, we should install some common dev packages and add the snappy PPA so that we can download the snappy tools.

ubuntu@snappydev:~$  sudo apt-get install software-properties-common
ubuntu@snappydev:~$  sudo apt-add-repository ppa:snappy-dev/tools
ubuntu@snappydev:~$  sudo apt-get update
ubuntu@snappydev:~$  sudo apt-get install snappy-tools

From here on you can treat the container like a normal Ubuntu 15.10 host and follow the Snappy Guide to get started, or, if you are feeling a bit more adventurous and want to experiment with Snappy Ethereum, you can read the tutorial I wrote on it.

As usual if you have any feedback/comments/suggestions on the post feel free to leave a message down here. Till next time!