Following on from the short introduction to unikernels, I wanted to write a few words on how we at Artirix do Continuous Delivery with OSv, a cloud operating system built on unikernel principles.

We have an API component written in Scala, with a test and build pipeline in Snap-CI. Fundamentally, the component is straightforward: it talks to a database and exposes an API for other components to use. Historically we had deployed this component as a Docker container, so we saw it as a prime candidate for our unikernel experiments.

The pipeline

Triggered on every commit to master, the pipeline we built looks roughly like this:

  1. Pull the code from GitHub
  2. Test it
  3. Build & assemble a fat jar to be run in the unikernel
  4. Launch an EC2 instance from a prebaked OSv image that includes OSv's REST API module and the JVM
  5. Use OSv's built-in REST API to send the jar to the running instance (see the first sketch after this list)
  6. Again via the REST API, update the OSv boot command line to run our jar on the next boot
  7. Create a new AMI (Amazon Machine Image) from the running unikernel instance (steps 7-10 are sketched below)
  8. Create a new EC2 Launch Configuration using the new AMI
  9. Update an EC2 Auto Scaling Group to use the new Launch Configuration
  10. Do a rolling update by stepping the desired capacity of the ASG up and back down
  11. Clean up old AMIs, Launch Configurations, and the intermediate instance we launched in step 4
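To give a flavour of steps 5 and 6, here's a minimal sketch in Python using the requests library. The instance address, jar paths, and the exact endpoint shapes are assumptions for illustration; OSv's httpserver module documents the real API, and java.so is OSv's JVM launcher.

```python
# Sketch of steps 5-6: push the fat jar to the running OSv instance and
# update its boot command line via OSv's built-in REST API.
# Endpoint paths and parameter names are assumptions; consult the OSv
# API documentation for the exact shapes.
import requests

OSV_API = "http://10.0.0.42:8000"    # hypothetical address of the step-4 instance
JAR_PATH = "/app/api-component.jar"  # hypothetical target path inside the image

# Step 5: upload the assembled fat jar into the guest filesystem.
with open("target/scala-2.11/api-component-assembly.jar", "rb") as jar:
    requests.post(f"{OSV_API}/file{JAR_PATH}",
                  files={"file": jar}).raise_for_status()

# Step 6: point the boot command line at our jar so it runs on the next boot.
requests.put(f"{OSV_API}/os/cmdline",
             params={"cmdline": f"java.so -jar {JAR_PATH}"}).raise_for_status()
```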

Ta-da, a deployed unikernel! From code commit to finished deployment, the pipeline takes about 17 minutes. That is fairly slow, and could be optimised.
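Steps 7-10 are plain AWS plumbing. Below is a sketch of how they might look with boto3; our actual pipeline scripts may differ, and the resource names are made up for illustration.

```python
# Sketch of steps 7-10: bake an AMI, then roll it out through a Launch
# Configuration and an Auto Scaling Group. Resource names are hypothetical.
import time
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Step 7: create a new AMI from the intermediate instance (step 4).
image_id = ec2.create_image(
    InstanceId="i-0123456789abcdef0",          # hypothetical instance id
    Name=f"api-component-{int(time.time())}",  # unique, timestamped AMI name
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Step 8: a fresh Launch Configuration referencing the new AMI.
lc_name = f"api-component-lc-{image_id}"
autoscaling.create_launch_configuration(
    LaunchConfigurationName=lc_name,
    ImageId=image_id,
    InstanceType="t2.micro",
)

# Step 9: point the Auto Scaling Group at the new Launch Configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="api-component-asg",
    LaunchConfigurationName=lc_name,
)

# Step 10: rolling update - step the capacity up so an instance with the
# new configuration launches, then back down so the oldest instance
# (running the old configuration) is terminated.
autoscaling.set_desired_capacity(AutoScalingGroupName="api-component-asg",
                                 DesiredCapacity=2)
# ...wait for the new instance to report healthy, then:
autoscaling.set_desired_capacity(AutoScalingGroupName="api-component-asg",
                                 DesiredCapacity=1)
```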

Note that our pipeline only deploys the component to an internal environment for now. We could trivially replicate steps 8-10 for the other environments (staging and production) we want to deploy to.

Our app ships its logs to Loggly for later analysis. We've also had success running the New Relic agent in the JVM.
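On OSv, attaching the New Relic Java agent comes down to the boot command line: the agent jar is baked into the image and referenced with the standard -javaagent JVM flag. The paths below are hypothetical.

```python
# Hypothetical OSv boot command line with the New Relic Java agent attached;
# the agent jar and its config file would be baked into the image.
CMDLINE = "java.so -javaagent:/newrelic/newrelic.jar -jar /app/api-component.jar"
```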

Limitations of our approach

The minimum billing period for a launched AWS EC2 instance is one hour, so the pipeline above costs us an extra instance hour on every deployment. Our first thought was to work around this by purchasing a Reserved Instance and paying everything up front, bringing the marginal hourly price to $0. Unfortunately, AWS advised us that fully upfront Reserved Instances are in fact pay-744-monthly-hours-up-front instances (744 hours being 31 days x 24 hours, i.e. a full month), so once the number of monthly deployments plus the number of running hours since the beginning of the month reaches 744, we'd start paying normal on-demand prices. However, as the component runs on a t2.micro for now, the extra billed hour doesn't really matter.

Another thing we've recognised is the need for automated smoke testing, and optional rollback, of new deployments. Arguably this is key to any CD pipeline, but it matters even more here: if our unikernel image fails to start for some reason, the Auto Scaling Group will keep retrying indefinitely, and due to the EC2 billing behaviour described above, this could become costly quite fast.
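A smoke test here could be as simple as polling the service's health endpoint after the rolling update, and pointing the ASG back at the previous Launch Configuration on failure. A rough sketch, with a hypothetical health URL and made-up resource names:

```python
# Hypothetical post-deployment smoke test with rollback; the health URL
# and Launch Configuration names are made up for illustration.
import time
import boto3
import requests

HEALTH_URL = "http://api-component.internal.example.com/health"
ASG_NAME = "api-component-asg"

def smoke_test(retries=30, delay=10):
    """Poll the health endpoint until it answers 200 OK, or give up."""
    for _ in range(retries):
        try:
            if requests.get(HEALTH_URL, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # service not up yet; keep polling
        time.sleep(delay)
    return False

if not smoke_test():
    # Roll back: point the ASG at the previous, known-good Launch
    # Configuration (recorded before the deployment started).
    boto3.client("autoscaling").update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        LaunchConfigurationName="api-component-lc-previous",  # hypothetical
    )
    raise SystemExit("Smoke test failed; rolled back to previous Launch Configuration")
```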

While our approach certainly has room for improvement, it has allowed us to quickly prototype packaging and deploying an existing application as a unikernel.
