package current-albatross-deployer

An ocurrent plugin to deploy MirageOS unikernels

Sources

current-albatross-deployer-1.0.0.tbz
sha256=2ea909d9f114ce2b67a22c9e0f84826d01fd09ede2437623eab6e4d6ebd4020b
sha512=634337fa5eef32e26aac32e61001f7fed92885b7382f3710b68eb001c3e9edf66eb84c4a1aa6257b1a63349377360dea5f8689aa895cb9b072897e56ad2d4710

Description

This is an ocurrent plugin to manage the deployment of unikernels. It's specialized for Linux, using Albatross to orchestrate the virtual machines and iptables to expose ports.

It's been made with zero downtime in mind: when a unikernel is updated, a new instance is started while the old one is kept alive, and the switch to the new instance is performed by redirecting ports to the new instance's IP.

Published: 16 Nov 2022

README

current-albatross-deployer

This is an ocurrent plugin to manage the deployment of unikernels. It's specialized for Linux, using Albatross to orchestrate the virtual machines and iptables to expose ports.

It's been made with zero downtime in mind: when a unikernel is updated, a new instance is started while the old one is kept alive, and the switch to the new instance is performed by redirecting ports to the new instance's IP.

An example pipeline: [diagram omitted]

Installation

Using Opam

opam pin https://github.com/tarides/current-albatross-deployer

Installing current-iptables-daemon

git clone https://github.com/tarides/current-albatross-deployer
cd current-albatross-deployer
opam install --deps-only .
dune build
cd lib/iptables-daemon/packaging/Linux
sudo ./install.sh

The daemon runs as a systemd service named current-iptables-daemon.

Installing albatross

See https://github.com/roburio/albatross

Usage

This plugin provides ocurrent primitives to compose a pipeline. A bit of familiarity with ocurrent is therefore advised.

Prelude:

open Current_albatross_deployer

(* The let+ and and+ binding operators used below come from Current.Syntax. *)
open Current.Syntax
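
The composed pipeline is eventually handed to an ocurrent engine. Below is a minimal driver sketch, assuming only the core Current.Engine API; the pipeline body is a placeholder for the steps built in the rest of this section:

(* Evaluate the pipeline and re-run it whenever one of its inputs changes. *)
let pipeline () : unit Current.t =
    Current.return () (* placeholder; replaced by the steps below *)

let () =
    let engine = Current.Engine.create pipeline in
    Lwt_main.run (Current.Engine.thread engine)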

Step 1: build a unikernel

The entry points of the deployment pipeline are unikernel images. There are two ways of building them:

1.a: from Docker

Extracting the unikernel binary from a previously built Docker image.

module Docker = Current_docker.Default

let image: Docker.Image.t Current.t = Docker.build ...

let unikernel: Unikernel.t Current.t = Unikernel.of_docker ~image ~location:(Fpath.v "/unikernel.hvt")

1.b: from Git
module Git = Current_git

(* Current_git.clone takes a ~schedule controlling how often to re-fetch. *)
let daily = Current_cache.Schedule.v ~valid_for:(Duration.of_day 1) ()
let repo: Git.Commit.t Current.t = Git.clone ~schedule:daily "https://github.com/mirage/mirage-www"

let unikernel: Unikernel.t Current.t =
    let mirage_version = `Mirage_3 in
    let config_file = Fpath.v "/src/config.ml" in
    Unikernel.of_git ~mirage_version ~config_file repo

Step 2: configure the unikernel

The unikernel pre-configuration is made of the unikernel image, a service name, runtime arguments as a function of the unikernel's IP, a memory limit, and a deployment network:

let config_pre: Config.Pre.t Current.t =
    let+ unikernel = unikernel in
    {
        Config.Pre.service = "website";
        unikernel;
        args = (fun ip -> ["--ipv4="^(Ipaddr.V4.to_string ip)^"/24"]);
        memory = 256;
        network = "br0";
    }

Note the let+ operator from Current.Syntax, which allows mapping a Unikernel.t to its pre-configuration.
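
For readers new to ocurrent, here is a minimal illustration of let+ on its own, using nothing but the core Current API:

(* let+ maps a pure function over a Current.t value; the result is
   recomputed whenever the input changes. *)
let doubled: int Current.t =
    let+ n = Current.return 21 in
    n * 2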

Then the pre-configuration can be used to allocate an IP; two different configurations would generate different IPs. This IP is then used to obtain the configured unikernel:

let ip: Ipaddr.V4.t Current.t =
    let blacklist = Ipaddr.V4.of_string_exn "10.0.0.1" in
    let prefix = Ipaddr.V4.Prefix.of_string_exn "10.0.0.1/24" in
    get_ip ~blacklist ~prefix config_pre

let config: Config.t Current.t =
    let+ ip = ip
    and+ config_pre = config_pre
    in
    Config.v config_pre ip

Note that the IP could be used to configure other unikernels, such as in a chain of microservices; a simplified sketch follows. The example/ folder demonstrates a full chain, and shows how to implement zero-downtime updates using a custom staging module that lets two unikernel chains co-exist.
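
As a simplified sketch of such a chain (the "frontend" service name and the --backend flag are illustrative, not part of this plugin's API), the IP allocated for one unikernel can feed the runtime arguments of the next:

(* Pass the first unikernel's allocated IP into a second unikernel's
   arguments; the fields follow the Config.Pre.t record shown above. *)
let frontend_pre: Config.Pre.t Current.t =
    let+ unikernel = unikernel
    and+ backend_ip = ip in
    {
        Config.Pre.service = "frontend";
        unikernel;
        args = (fun my_ip ->
            [ "--ipv4=" ^ Ipaddr.V4.to_string my_ip ^ "/24";
              "--backend=" ^ Ipaddr.V4.to_string backend_ip ]);
        memory = 256;
        network = "br0";
    }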

Step 3: deploy and monitor the unikernel

This part of the pipeline interacts with Albatross to create a unikernel and monitor it.

let deployed: Deployed.t Current.t = deploy_albatross config

let monitor: Info.t Current.t = monitor deployed

Step 4: publish and expose ports

When the unikernel is created, it can be exposed to the internet via the host machine by setting up NAT forwarding. The iptables-daemon module takes care of that, creating a CURRENT-DEPLOYER chain in the nat table and filling it with redirection rules (conceptually, DNAT rules mapping a host port to a port on the unikernel's IP). Here, external port 8080 is redirected to the unikernel's port 80.

let published: Published.t Current.t =
    let service = "website" in
    let ports = [{Port.source = 8080; target = 80}] in
    publish ~service ~ports deployed

Step 5: garbage collection

Finally, a collect primitive is available to free IPs and Albatross VMs when they are no longer needed. For now, collection has to be triggered manually:

let collected: unit Current.t = collect (Current.list_seq [published])
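
Putting the steps together, here is a sketch of the single unit Current.t that the driver from the Usage prelude could run; Current.all and Current.ignore_value are standard ocurrent combinators:

(* Combine the monitored deployment with the collection step into one
   pipeline value. collected already depends on published, which in
   turn depends on deployed. *)
let pipeline () : unit Current.t =
    Current.all [ Current.ignore_value monitor; collected ]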

To do

  • Use iptables forward rules to provide more isolation between unikernels as they usually live on the same network.

  • Better handle errors (see TODOs in the code)

Contributing

Take a look at our Contributing Guide.

Acknowledgement

current-albatross-deployer has received funding from the Next Generation Internet Initiative (NGI) within the framework of the DAPSI Project.

Dependencies (16)

  1. rresult >= "0.6.0"
  2. ppx_deriving_yojson >= "3.6.1"
  3. ppx_deriving >= "5.2.1"
  4. lwt >= "5.6.0"
  5. logs >= "0.7.0"
  6. ipaddr >= "5.2.0"
  7. current_docker >= "0.5"
  8. current >= "0.5"
  9. cstruct >= "6.0.1"
  10. cmdliner >= "1.1.0"
  11. bos >= "0.2.0"
  12. asn1-combinators >= "0.2.6" & < "0.3.0"
  13. dune >= "2.9.0"
  14. ocaml >= "4.08.0"
  15. obuilder-spec >= "0.5"
  16. albatross >= "1.5.1" & < "1.5.5"

Dev Dependencies (3)

  1. alcotest >= "1.4.0" & with-test
  2. current_web with-test
  3. odoc with-doc

Used by

None

Conflicts

None