Sync the elementary OS Calendar with your Google calendar

Syncing your Google calendar with the elementary OS Calendar application is super simple, but not obvious. I had initially tried, then noticed that it wasn’t working and forgot about it. After all, I normally use Google Calendar in a pinned browser tab on any computer or OS that I use.

First of all, create a new calendar of “Google” type and enter your Google username:


You will then be asked for a password.

Go to your Google account settings and, under the “Signing in to Google” section, you will find an “App passwords” setting.

Generate a new password for the elementary Calendar, then use it in the “Calendar authentication request” dialog.

And that is it.

Why I assembled another home server and what components I chose

Some time ago I assembled a fanless, silent, low-power server for my own use.

That machine runs many services that I use daily (plex, owncloud, openvpn, gitlab, influxdb, grafana, …) and offers endless possibilities (theoretically: in the real world, RAM runs out).

This thing is so useful that people started asking me to build one for them as well. I have the whole setup in ansible and all services running in docker, so going from a freshly installed Ubuntu to running plex, owncloud and openvpn on ZFS storage is a one-hour job.
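To give a taste of what that looks like, each service ends up described by a short ansible task driving docker. This is just a sketch with hypothetical names and paths (my actual roles differ), using ansible's docker_container module:

```yaml
# Hypothetical example: one task fully describes one service.
# Image is the official Plex one; paths are illustrative only.
- name: Run plex in docker
  docker_container:
    name: plex
    image: plexinc/pms-docker
    state: started
    restart_policy: always
    network_mode: host
    volumes:
      - /tank/plex/config:/config   # one ZFS filesystem per service
      - /tank/media:/data:ro
```

Re-running the playbook is idempotent, which is what makes destroying and re-creating services so cheap.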

The more time-consuming part is selecting the hardware!

So here’s some reference about what I bought for a machine that I assembled and configured recently, and some explanation of why I once again preferred assembling over buying a commercial product.

Continue reading “Why I assembled another home server and what components I chose”

DNS configuration with VPNs and Ubuntu 17.04: working again

I must have hit this bug on the upgrade from 16.10 to 17.04.

Result: DNS queries for my VPNs were not working unless I used FQDNs.

Time to find a workaround and so here’s my new configuration (note: see the update part at the end of the post).

Before you go on: if you try this, you will lose DNS resolution before you finish the setup, which will break your connections. So make sure that you are able to roll back all the changes, or don’t do this.

First, I got rid of the systemd-resolved service:

sudo systemctl disable systemd-resolved.service

And resolvconf along with it:

sudo systemctl disable resolvconf.service

Then I installed openresolv:

sudo apt install openresolv

At this point /etc/resolv.conf should no longer be a symlink, but a regular file. Remove the stale symlink so that NetworkManager will recreate /etc/resolv.conf as a regular file:

sudo unlink /etc/resolv.conf

I then ensured that NetworkManager was using dnsmasq for DNS, that is, this line must be present and uncommented in the [main] section of /etc/NetworkManager/NetworkManager.conf:

dns=dnsmasq
Using NetworkManager itself for DNS resolution (dns=default) will also work, but then you will hit the glibc limit of resolv.conf (i.e. at most 3 nameservers are used).

Then, restart NetworkManager:

sudo systemctl restart NetworkManager

With this configuration search domains and nameservers work as expected and they can be configured in the NetworkManager GUI.


UPDATE: 09 Aug 2017

Since one of the VPNs was misconfigured and was sending wrong nameserver information, I took an even more drastic approach and decided to manage the DNS setup manually via dnsmasq.

In addition to the above, I then uninstalled openresolv, and set

dns=none

in /etc/NetworkManager/NetworkManager.conf to get full control over /etc/resolv.conf.

I installed dnsmasq and configured the nameservers for the domains there, e.g. (the nameserver IP here is a placeholder):

server=/example.domain/10.0.0.1
with a plain and simple resolv.conf:

nameserver 127.0.0.1
search example.domain

This setup requires some maintenance if I have to add new VPNs or change the nameservers for the existing ones, but at least it should be stable!
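For reference, the dnsmasq side of such a setup looks roughly like the following. Domains and nameserver IPs are placeholders, not my actual VPNs:

```
# /etc/dnsmasq.conf (sketch)
# Forward queries for each VPN domain to that VPN's nameserver:
server=/vpn1.example/10.8.0.1
server=/vpn2.example/10.9.0.1
# Everything else goes to a public resolver:
server=8.8.8.8
```

With resolv.conf pointing at 127.0.0.1, dnsmasq then dispatches each query to the right upstream based on the domain.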

Post-Unity havoc: Gnome vs KDE vs Pantheon (Elementary OS) vs Budgie

Ubuntu will use GNOME as its desktop environment starting with release 17.10 Artful Aardvark. The news is old (even though the original blog post by Mark Shuttleworth was referring to 18.04), but many Unity users are still on the lookout for the best alternative.

Exit Unity

On my “for fun” computers I have tried, used, and freely interchanged many desktop environments, using one until it stopped being interesting and then switching to the next.

But for my “real work” computers, I have been using Unity consistently since its release.

The reason is that Unity has been the most “predictable” environment when it comes to multiple screens, projectors, audio/microphone configurations and all those things that need to work immediately when meeting time comes.

Even if Unity will be maintained until 2021 (thanks to 16.04 LTS shipping it), I don’t like the idea of using “a thing of the past”.

It’s time then to evaluate some alternatives!

I have used all the Desktop Environments (DEs) evaluated in this post for an extended period of time (at least several months each, some for years), either on my personal laptop or on my work machines. I will give my personal impressions of what I like and dislike about them.

Continue reading “Post-Unity havoc: Gnome vs KDE vs Pantheon (Elementary OS) vs Budgie”

How to setup isolated sftp-only access for untrusted users

This is a very common scenario: you want to set up SSH access for an untrusted user, but strictly limit his capabilities to SFTP (or scp).

Usually the requirements are just two:

  • The user can only access your machine to run the SFTP command, no other uses of the SSH service will be allowed for this user
  • The user can only access a very restricted environment and not break outside of it (so he cannot access files that he’s not supposed to access)

Depending on the degree of untrustworthiness, you may also want to guard against DoS attacks on the service, but for that, the best measure is hardening the whole SSH service.
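The standard recipe uses OpenSSH's internal-sftp with a chroot. A minimal sshd_config sketch (the group name is an assumption, and note that the chroot directory and its parents must be root-owned and not group-writable):

```
# /etc/ssh/sshd_config (sketch): members of the "sftponly" group
# get a chrooted, SFTP-only session with forwarding disabled.
Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

ForceCommand overrides whatever the user asks for, so interactive shells and arbitrary remote commands are refused even if the user requests them.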

Continue reading “How to setup isolated sftp-only access for untrusted users”

Five essential Docker containers for your home server

Running a server in your home network is a great addition to your digital life. There are countless uses you can make of it: from testing your personal projects to controlling home automation systems. But even if it is just for storing your files, like a personal Dropbox, or for media services, a permanent point of presence completely under your control will give you extreme flexibility.

When it comes to services, I run everything in Docker and this is a list of the 5 applications that I find most useful for a home server.

Continue reading “Five essential Docker containers for your home server”

Setup your VPN in docker with OpenVpn in 5 minutes

Docker simplifies deployments so much that even setting up a VPN to a machine of your private network becomes trivial.

There are different docker images running openvpn, but I am very thankful to kylemanna for his excellent image (available on the Docker Hub).

Using the image requires a few more steps than just running it, because it conveniently ships scripts to initialize the configuration and set up users, in addition to running the service itself.

Here is a brief overview of the setup process (this just adds some context to the already comprehensive README that ships with the github repo):

  1. Use a “convenience” environment variable to store the path to your persistent storage location that will be bind-mounted to the container. You can also use a data volume as an alternative. For me, this is a ZFS filesystem (I create a ZFS filesystem for each container that requires persistent storage)
  2. Run an ephemeral instance (--rm) of the image to initialize the data directory of the container (ovpn-host should be the hostname of your openvpn server)
    docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://ovpn-host

    This will create the following content in your data directory, including the openvpn server configuration:

    drwxr-xr-x 2 root root 4096 Mar 7 10:14 ccd
    -rw-r--r-- 1 root root 629 Mar 7 10:14 openvpn.conf
    -rw-r--r-- 1 root root 611 Mar 7 10:14
  3. Run an interactive ephemeral instance of the image to generate the openvpn CA certificate and server key (you will have to type the passphrase for the CA private key)
    docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

    This will set up the pki directory:

    total 44
    -rw------- 1 root root 1172 Mar 7 10:40 ca.crt
    drwx------ 2 root root 4096 Mar 7 10:40 certs_by_serial
    -rw------- 1 root root 424 Mar 7 10:40 dh.pem
    -rw------- 1 root root 42 Mar 7 10:40 index.txt
    -rw------- 1 root root 21 Mar 7 10:40 index.txt.attr
    -rw------- 1 root root 0 Mar 7 10:40 index.txt.old
    drwx------ 2 root root 4096 Mar 7 10:40 issued
    drwx------ 2 root root 4096 Mar 7 10:40 private
    drwx------ 2 root root 4096 Mar 7 10:40 reqs
    -rw------- 1 root root 3 Mar 7 10:40 serial
    -rw------- 1 root root 3 Mar 7 10:40 serial.old
    -rw------- 1 root root 636 Mar 7 10:40 ta.key
  4. Run the VPN service: start and detach the container (-d) and map a host port to the UDP container port where the openvpn server process is listening (1194). In this example the host port will be 1195

    docker run -v $OVPN_DATA:/etc/openvpn -d -p 1195:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn

    The NET_ADMIN capability is needed so the container can set up the virtual tun0 interface that terminates the VPN tunnel.
    After this command, the VPN service is up and running on the specified host port (1195 in the example).

  5. Generate client configuration (i.e., add a user to the VPN). If you omit the nopass option, the client key will be encrypted with a passphrase.
    docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full vince nopass

    The client key will be in ${OVPN_DATA}/pki/private and the certificate in ${OVPN_DATA}/pki/issued

  6. Retrieve the client configuration to a local file:
    docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient vince > vince.ovpn

Access the VPN with the client configuration file.

If you need to add more users, just repeat the last two steps to create a user configuration on the server and retrieve the ovpn file.
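The two per-user steps are easy to script. Here is a tiny helper of my own (a sketch, not part of the image) that prints the commands for a given user so you can review them before running anything:

```shell
#!/bin/sh
# Sketch: print (not run) the two commands that add a user to the VPN.
# Assumes OVPN_DATA is already exported as in the steps above.
ovpn_user_cmds() {
    user="$1"
    # 1. Generate the client key and certificate:
    printf 'docker run -v %s:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full %s nopass\n' \
        "$OVPN_DATA" "$user"
    # 2. Retrieve the client configuration into a local .ovpn file:
    printf 'docker run -v %s:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient %s > %s.ovpn\n' \
        "$OVPN_DATA" "$user" "$user"
}
```

Once you are happy with the output of, say, `ovpn_user_cmds alice`, you can paste and run the commands.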

If this isn’t just a test, you may want to add a restart policy to this container (get the container ID with docker ps):

docker update --restart=always <container-id>

Giving away my MacBook Pro and going back to Linux

I’ve used GNU/Linux as my main and only desktop OS for more than 10 years (first as a University student and now at work).

I’ve always been fine with Linux and I never felt like I was missing something (except Skype, maybe – or a better multiscreen support on KDE). I would from time to time change my desktop environment to refresh the user experience when the UI started to become boring on the eyes (cycling between Unity, KDE, elementary, GNOME) and that purely aesthetic freedom felt great.

But I kept seeing a growing number of colleagues around me using Apple laptops. Were/Are they all blindly following the flavor of the week? No, I don’t think so, they’re smart people. I concluded that there had to be something really better about these MacBooks.

So I ordered one as my new work machine…

…and my experience with it has been positive (yes, past tense, it’s already over). Everything is there and works, the Desktop UI is very consistent (you notice this after so many years of Linux desktop :D), it’s pleasant to use and touch.

I’ve used it for 8 months and then I gave it back and ordered a System76 Lemur to re-embrace Linux.

From a professional point of view, to get the work done, there was really no substantial difference between using Mac OS and Linux. I use some terminals and an editor most of the time, then a browser, simplenote, HipChat. Basically all the software that I need is available and works well on both. My workflow is the same on the two OSs.

So why did I bother going back? I mean, if productivity is the same, I could have saved myself the inconvenience of making another switch.

Here are my reasons in order of importance:

  1. Open Source. I depend on open source, as do most engineers in tech (software engineers, DevOps, SysAdmins, etc). And this is not only for what runs on the servers. We all run plenty of open source tools on our laptops and our productivity is heavily determined by them. How would you work without git? Then filesystems, programming languages (Python, C/C++ compilers, scala, …), command-line utilities, Jenkins, Docker, VirtualBox, owncloud, VLC, transmission, … Open Source is so fundamental for me that the least I can do is to give back. And this means fully embracing it and trying to contribute to some of the projects that I use. So, yes, my first reason for going back to Linux is ethical.
  2. Freedom. Simply put, if you don’t like something on your Desktop, make it better for you and everyone else. This is only possible with open source projects.
  3. Clear terms. Your main desktop OS comes with clear terms. You know what applications will do and if you have doubts, have a look at the source code.

That’s it.

As I said, “there was really no substantial difference” to get the work done, so the determining factor for going back to Linux had to be on some other – non technical level.

If you think that Linux on the desktop sucks, then start using it and help your DE of choice become great.

Install owncloud with docker, ZFS and ansible

I’ve been happily running FreeNAS on my server for quite some time now. I am a fan of ZFS and I’ve used it to great advantage both at home and at work.

After its good service, and with ZFS on Linux becoming stable, I decided to move away from FreeNAS and install a plain Ubuntu 16.04 server image to play with docker and ansible, and to enjoy destroying and re-spawning my ephemeral service instances in seconds!

Continue reading “Install owncloud with docker, ZFS and ansible”

Quickly setup powerline for bash in ubuntu

Powerline can do many things and it is very configurable.

I wrote a small script for when you just want to power up your bash prompt without a lot of manual steps.


Get it done!

Read the README file first, then:

(you may need to install git first, or just copy and paste the script from the repo… it will install git anyway)

git clone
cd ubuntu-powerline-bash
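Under the hood, a setup like this usually boils down to appending a snippet like the following to your ~/.bashrc (a sketch, the actual script may differ; the binding path below is where Ubuntu's powerline package typically installs it, pip installs put it elsewhere):

```shell
# Enable powerline for bash, if the binding is installed.
POWERLINE_BINDING="/usr/share/powerline/bindings/bash/powerline.sh"
if [ -f "$POWERLINE_BINDING" ]; then
    powerline-daemon -q          # start the daemon for fast prompt rendering
    . "$POWERLINE_BINDING"       # hook powerline into PROMPT_COMMAND
fi
```

The guard keeps the snippet a harmless no-op on machines where powerline is not installed.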

Feel free to improve the script.

Repository on GitHub: