Singularity HPC Container Platform

  • By Jamie Duncan
  • Sat 26 November 2016
  • Updated on Sat 26 November 2016

Over the next couple of weeks I’ll have a series of posts about things I found interesting at SuperCompute '16 in Salt Lake City, where I got to help out at the Red Hat booth on the exhibitor floor.

My first post in this series is about a technology that had a TON of buzz there, even though it’s only ~6 months old.

Singularity

Singularity is an HPC-specific application container.

Description From the Singularity Website

Singularity enables users to have full control of their environment. This means that a non-privileged user can “swap out” the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu, you can create an Ubuntu image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu environment!
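To make that workflow concrete, here is a sketch using the 2.x-era command line. The definition-file fields, flags, and hostnames below are from my first reading of the early docs, so treat this as an illustrative sketch rather than a verified recipe:

```shell
# Write a minimal bootstrap definition (2.x-era format; the fields here
# are an assumption from the early documentation)
cat > ubuntu.def <<'EOF'
BootStrap: debootstrap
OSVersion: xenial
MirrorURL: http://archive.ubuntu.com/ubuntu/

%post
    apt-get update -y    # runs inside the new image at build time
EOF

# Building the image requires root on the build machine...
sudo singularity create --size 1024 ubuntu.img
sudo singularity bootstrap ubuntu.img ubuntu.def

# ...but running it does not: copy the image to any host with Singularity
# installed and execute commands in its Ubuntu userland as yourself
scp ubuntu.img hpc-node:
ssh hpc-node singularity exec ubuntu.img cat /etc/os-release
```

The key point is the split of privileges: root only where the image is built, plain user everywhere the image runs.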

My Take from SC '16

Singularity is the creation of Greg Kurtzer out at Berkeley Lab. Among the other things Greg has helped create over the years is The CentOS Project, so this project certainly comes with a pedigree. Greg has done an amazing job of building a vibrant community around this project since its initial release ~6 months ago. It was probably the most talked-about project at Super Compute, even though it didn’t have an actual presence itself at the conference.

I was considered the container guy for Red Hat’s team at Super Compute. Each day I gave at least one lightning talk on the core concepts of what modern Linux containers do and how they work. After every session, and consistently throughout the entire conference, people were talking about Singularity and how it compared with what I was presenting and with what Red Hat was doing.

So what is it? Again, I’ll let the website describe it for me:

Key Features from the Singularity Website

Singularity also allows you to leverage the resources of whatever host you are on. This includes HPC interconnects, resource managers, file systems, GPUs and/or accelerators, etc. Singularity does this by enabling several key facets:

  • Encapsulation of the environment

  • Containers are image based

  • No user contextual changes or root escalation allowed

  • No root owned daemon processes

Is it a Container?

Well, that all depends on how you define container. In the sense that it makes your application more portable, then yes, it is a container. In the same way that SELinux can be thought of as a security-focused container and kernel control groups as a resource-focused one, Singularity is a container tailored to the current needs of the High Performance Computing community.

Comparison with Linux Containers

I don’t think that Singularity and Linux Containers are currently in competition for the same workloads. I believe this because Linux Containers are designed to isolate processes so that as many of them as possible can run on a given host, while Singularity is designed to optimize the execution of HPC workloads on a given host. In HPC, you typically do not (currently) see multiple jobs running on a single host. You might carve up your Slurm cluster to allow multiple jobs to run, but it’s one job at a time per host.
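That one-job-per-host model is why Singularity slots so naturally under an existing scheduler: the container is just the job’s userland, and the resource manager keeps doing the scheduling. A hypothetical Slurm batch script might look like this (the image path, app, and resource numbers are placeholders, and `singularity exec` is the 2.x-era CLI):

```shell
#!/bin/bash
#SBATCH --job-name=sim-run
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16

# Each rank runs inside the same portable userland; the host's
# interconnect, filesystems, and resource manager are used directly
mpirun singularity exec /shared/images/ubuntu.img /opt/sim/run --steps 10000
```

Submitted with `sbatch run_job.sh`, Slurm does exactly what it did before; only the job’s runtime environment travels in the image.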

With that said, it is the closest analogue to Linux Containers that has more than 1,000 Twitter followers.

Table 1. Feature Comparison

| Feature | Linux Container (docker) | Singularity |
|---|---|---|
| SELinux Mandatory Access Control | Fully supported and integrated | No (at least not documented that I could find) |
| cgroup resource isolation (HPC schedulers like Slurm and Univa Grid Engine make this marginally required, if at all) | Fully supported and integrated | No (at least not in the documentation that I could find) |
| Filesystem isolation | Yes, using mount namespaces | Yes, using mount namespaces |
| Network isolation | Yes, using network namespaces | No |
| PID isolation | Yes, using PID namespaces | Not enabled by default, but possible to use |
| Memory isolation | Yes, using cgroups (IPC namespaces isolate shared memory) | Not documented, but the code does exist |
| Per-process hostname | Yes, using UTS namespaces | No |
| Dockerfile compatible | Yes | Yes |
| Portable image format | Yes | Yes |
| Optimized HPC workflows | No | Yes |
| Cooked-in integration with other automation tools | Yes: Ansible, CloudForms, and others | No |

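One way to see the PID-isolation row in practice: docker puts each container in a fresh PID namespace by default, while a Singularity container sees the host’s process table. This is a hypothetical session sketch, not verified output:

```shell
# In a docker container, the process table starts fresh: the container's
# own entrypoint is PID 1 and host processes are invisible
docker run --rm ubuntu ps -ef

# In a Singularity container, the host's process table is visible,
# which is what schedulers like Slurm expect when they track job processes
singularity exec ubuntu.img ps -ef
```

For an HPC scheduler that accounts for and signals job processes directly, the Singularity behavior is arguably a feature, not a gap.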
Next Steps

I heard about Singularity for the first time less than a week ago. So I have a lot of code to poke through and hello-world style examples to build out.

But if I looked into the future with my HPC-rose-colored glasses on:

I would see OpenShift and/or CloudForms being able to create a Singularity container, then schedule that job by interfacing with an existing Slurm (or Grid Engine, or another) cluster. This would allow easy integration with things that are currently manual or non-existent in Singularity, provide a proven user experience and interface with no direct node access, and allow Enterprise and HPC workloads to be visualized in a single pane of glass. Imagine having per-job chargeback for your HPC cluster!

Like I said, a little pie-in-the-sky, but the bits wouldn’t be (all that) hard to get going.

Summary

The team that is building Singularity has decades of combined experience being very, very good at executing high-end HPC workloads. They have built and are continuing to evolve a tool that makes that easier and more secure. More importantly, they are building a vibrant community around the tool.

It is not a Linux Container in the fashion of the tools that have been revolutionizing Enterprise IT for the past couple of years. But it doesn’t need to be. I tell people that HPC engineers are IT Ops guys turned up to 42. That’s what this tool is, as well.

Even so, I can’t wait to start discussions with the Singularity team to figure out where the workflows and innovation happening in the Enterprise space can help make Singularity even better.