Containing Containers to Contain Security Threats

Max Finn
Jul 13, 2020
Much like the compartments of a mouse habitat, containers should only be accessible through the deliberate, specific channels that we decide on.

Security is one of the most important aspects of web development. You want your users to be comfortable sending you their data, and to feel confident that what they’re getting from your site is really from you. However, by dockerizing your web apps, you give up some of the connectivity between your different services, and their communication must be closely monitored. Some container services also don’t work with others, and if you’re not familiar with Linux, you’re most likely working in a foreign environment, too. So how does Docker maintain your data integrity while still being efficient enough to host many different services communicating in a plethora of ways?

Container security works through continuous integration of security measures, similar to services like TravisCI and CircleCI (I’ll leave it as a challenge to the reader to guess what the CI in those names means). On every iteration of your code in the version manager of your choice, automated checks make sure your code is working and secure. Services optimized for Docker do something similar in a three-fold way:

  1. Securing the container pipeline and the application, so you can be confident that what runs in your Docker image is only what you put there;
  2. Securing the container deployment environment(s) and infrastructure; and
  3. Integrating with enterprise security tools and meeting or extending existing security policies, so they even work with the safeguards you already have in place!
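As a concrete sketch of the first point, Docker ships a content trust feature that refuses to pull or run unsigned images. The commands below are illustrative (the image name and tag are stand-ins, not a recommendation):

```shell
# Enable Docker Content Trust so that `docker pull` and `docker run`
# verify publisher signatures before using an image.
export DOCKER_CONTENT_TRUST=1

# This pull now succeeds only if the tag has valid trust data
# (image and tag shown here are illustrative).
docker pull alpine:3.18

# An unsigned tag fails with a "no trust data" style error instead of
# silently running unverified code.
```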

Through these features, Docker manages to be extremely flexible while maintaining a professional level of security. However, static security policies don’t scale, which presents more challenges as our services grow. The best answer is to build security into the container pipeline itself, so your applications focus on authentication from the ground up and you can trust the data at every step of its lifecycle.

And the cycle continues…

Before getting into how to build security in, you need to understand what exactly containers are and what they’re doing. Docker containers are built from stacked layers of files, packaged as container images; every image is ultimately derived from a base image. Building a secure container from the ground up therefore starts with making sure your images are trustworthy. As you add more apps and variables to your containers, verifying that each is doing exactly what it claims is imperative. When adding more moving parts, consider this:
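These layers are content-addressed: Docker identifies each layer and image by the SHA-256 digest of its bytes, which is what makes “you got exactly what you asked for” verifiable at all. A minimal Python sketch of that idea (the payload is stand-in data, not a real image layer):

```python
import hashlib

def digest(blob: bytes) -> str:
    # Content-addressing: the identifier is derived from the bytes themselves,
    # so any tampering changes the identifier.
    return "sha256:" + hashlib.sha256(blob).hexdigest()

layer = b"pretend this is a tar of a filesystem layer"
pinned = digest(layer)  # what you recorded when you first trusted the image

# Later, verify that what you pulled matches what you pinned.
assert digest(layer) == pinned

# Even a single changed byte produces a completely different digest.
tampered = b"Pretend this is a tar of a filesystem layer"
assert digest(tampered) != pinned
```

This is why pinning an image by digest (rather than a mutable tag) is a meaningful guarantee.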

  1. Are the container images signed and from trusted sources?
  2. Are the runtime and operating system layers up to date?
  3. How quickly and how often will the container be updated?
  4. Are the known problems identified, and how will they be tracked?
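One way to act on the first two questions is in the Dockerfile itself: pin the base image to an exact digest rather than a floating tag, keep the OS layer patched, and drop root. This is a sketch; the digest is a placeholder, not a real one:

```dockerfile
# Pin the base image by digest so a mutable tag can't silently change
# underneath you. (Placeholder digest -- use the one you verified.)
FROM alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Keep the OS layer current in a controlled, reviewable way.
RUN apk upgrade --no-cache

# Run as an unprivileged user so a compromised process isn't root
# inside the container.
RUN adduser -D appuser
USER appuser
```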

Once you’ve considered these factors, the next step is to manage who can change them. Having validated the authenticity of an application, you don’t want to worry about someone else creating exploitable holes in your images. This is typically done through a private registry, or a private GitHub repository for your build files, that lets you assign roles and track known exploits in the services you use. Version control systems like Git are highly recommended anyway, as they allow you to make changes to your codebase while knowing you can always rewind if need be (and they provide other useful features, like checking signatures, making sure sensitive data isn’t stored in the code, and more!)
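The signature checking mentioned above looks roughly like this in Git; these commands assume you already have a GPG signing key configured, and the messages are illustrative:

```shell
# Sign a commit with your configured GPG key.
git commit -S -m "Pin base image by digest"

# Verify signatures when reviewing history.
git log --show-signature -1

# Releases can be signed too, via annotated signed tags.
git tag -s v1.2.0 -m "Signed release"
```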

Finally, automating security is the best way to consistently deploy secure containers without fearing that new patches bring new exploits. Containers are most secure when completely rebuilt after changes, and there are even ways to trigger automatic container rebuilds. More advanced tools that weed out underlying issues are essential for this step, since we need to trust that we’ll know when something goes awry. Policy-based deployment matters too: we want to control where, when, and how we deploy so we can stay alert and prepared.
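As a sketch of what an automatic rebuild can look like, here is a hypothetical GitHub Actions workflow that rebuilds the image from scratch on every push to main. The names and steps are illustrative, not a drop-in config:

```yaml
name: rebuild-container
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # --no-cache forces a full rebuild so patched base layers get picked up.
      - run: docker build --no-cache -t myapp:${{ github.sha }} .
      # An image-scanning step would slot in here (tool choice varies by team).
```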

So now you have a secure app, you know it’ll be secure on deployment, and you’ll be informed of any problems along the way. Sadly, that doesn’t make it airtight. Zero-day vulnerabilities exist, exploit flaggers aren’t perfect, and any service is potentially susceptible to social-engineering attacks. Cybersecurity is an ongoing process that can never be fully automated, even though human error causes many of the problems we face. Hardening your existing infrastructure is the best way to stay ready and keep adapting to the threat landscape. First, make sure the base image is as isolated as possible, limiting the avenues of attack. Next, connect only the containers that actually need to communicate with one another, so a compromised container is much harder to use as a stepping stone into more secure ones. After that, control what gets access to shared resources such as the network and databases. Essentially, behind every layer of security sits another layer, and any attack has to get through as many of them as possible before the app is truly compromised.
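That layering maps onto concrete Docker flags: user-defined networks decide which containers can talk at all, and runtime flags shrink what a compromised container can actually do. A sketch with illustrative names:

```shell
# Only containers attached to this network can reach each other.
docker network create app-net

# The web container joins the network, drops all Linux capabilities,
# and gets a read-only root filesystem plus a memory cap.
docker run -d --name web \
  --network app-net \
  --cap-drop ALL \
  --read-only \
  --memory 256m \
  myapp:latest

# The database sits on the same private network but is never
# published to the host.
docker run -d --name db --network app-net postgres:15
```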

Finally, the last step. Keen minds might be sensing a pattern here: preparation. We need to be ready for updates in any of the software we’re using, and ready to scale as our project grows.

Services like the aforementioned TravisCI and CircleCI are great for making sure every new deployment runs smoothly and securely. Red Hat is another great option, with tooling designed specifically for container security and integration. Their Linux distribution was made with DevOps and stability in mind, and it integrates seamlessly with their other services for a streamlined experience. They are also the source for much of the information in this article, so make sure to check them out for more.

Securing full-scale applications and projects is no easy task, especially since it’s never perfect. However, through secure practices in DevOps and beyond, we can prepare ourselves for the worst contingencies and salvage some peace of mind. Services that keep us updated on the status of our code save the headache of manual checking, and automatic rebuilds of our containers keep us current with the latest security fixes without patching over old versions and causing problems down the road. I absolutely love the versatility of containers and what they let us do, especially when it saves us some of our very finite energy as software developers (no more “but it runs locally!”).



I’m a passionate backend engineer writing about my code projects so that I can make things a little easier on myself (and hopefully you) later.