Containers are the modern method for delivering software applications to users.
Containers arose a few years back in response to the increasing complexity of application deployments. Most websites require a front-end user interface, a back-end database, a web server to handle input/output, and an underlying operating system. Providing all of this at scale is challenging, and for a time the need was met by deploying virtual machines (VMs) preconfigured with all the functionality required to run the application.
However, VMs virtualize a computer’s hardware stack – which can be costly in terms of required resources (in particular, memory and processing power). Containers only virtualize the operating system (OS) and upward to the application layer. This means we can host multiple containers atop the OS kernel directly, and they’re far more lightweight. They start up faster and use only a fraction of the memory compared to booting an entire OS. From a financial standpoint, being lighter weight means reducing your cloud spend.
So, containers are very popular now, and Docker is the most popular container platform. Docker bundles an application’s code together with any required configuration files, shared code libraries, and all other dependencies required for the app to run.
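To make that bundling concrete, here is a minimal, hypothetical Dockerfile for a small Python web app (the image tag, file names, and entry point are all illustrative, not taken from any specific project):

```dockerfile
# Start from a small base image (illustrative tag)
FROM python:3.12-slim

WORKDIR /app

# Install the app's dependencies first so they are cached as their own layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the application code itself
COPY . .

# Document the port the app listens on
EXPOSE 8000

# Command the container runs at startup (hypothetical entry point)
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` and running it with `docker run -p 8000:8000 myapp` produces a container that carries the code, libraries, and runtime together, so it behaves the same on any Docker host.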
Docker enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API. But in order to monitor and manage container lifecycles in a complex environment, you need a container orchestration tool. While Docker includes its own orchestration tool (“Docker Swarm”), most developers instead choose Kubernetes.
Kubernetes is a container orchestration platform that originated at Google and is now an open-source project maintained by the Cloud Native Computing Foundation. Kubernetes schedules and automates tasks to manage container-based architectures – including deploying, updating, provisioning, load balancing, health and status monitoring, and so forth.
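As a sketch of that automation, the hypothetical Kubernetes Deployment below (names, labels, and the image tag are illustrative) asks for three replicas of a container and a health check; Kubernetes then schedules the Pods, restarts any that fail, and performs rolling updates when the spec changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical application name
spec:
  replicas: 3              # Kubernetes keeps three Pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:1.0   # illustrative image tag
        ports:
        - containerPort: 8000
        livenessProbe:     # health monitoring: restart the container on failure
          httpGet:
            path: /healthz
            port: 8000
```

Applied with `kubectl apply -f deployment.yaml`, changing the image tag later triggers an automated rolling update; a Service placed in front of these Pods provides the load balancing.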
If you’re using Docker, Kubernetes, and other modern technologies to deploy your applications, you may have worried about the security concerns associated with containers. That’s because, in the early days of containers, they had several well-publicized issues – including privilege escalation and malicious (or vulnerable) code in the base image.
Many of these issues have been addressed through container hardening. A Dockerfile, for instance, defines exactly what goes into the container image – a minimal base image, only the required packages, an unprivileged user – while runtime configuration controls which ports the container exposes, together reducing the potential attack surface. OWASP has also published a Docker security cheat sheet to guide deployments.
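Two of the core recommendations in the OWASP cheat sheet, running as a non-root user and keeping the image minimal, can be expressed directly in the Dockerfile. This is a hedged sketch (base image, user name, and binary path are illustrative):

```dockerfile
# A minimal base image reduces the code (and vulnerabilities) shipped
FROM alpine:3.20

# Create and switch to an unprivileged user, so a compromised
# process does not run as root inside the container
RUN addgroup -S app && adduser -S app -G app
USER app

WORKDIR /home/app
COPY --chown=app:app ./server ./server
CMD ["./server"]
```

With `USER` set, even if an attacker achieves code execution in the container, they do not start with root privileges – directly mitigating the privilege-escalation class of issues mentioned above.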
However, something missing from the OWASP guidance is the issue of attacks on running containers and how we can monitor for Indicators of Compromise (IOCs) – i.e., in the same manner as we can for regular servers, endpoints, network infrastructure, and cloud configurations.
Those deploying containers may even have wondered whether security tools, such as SIEM (Security Information and Event Management), can “see” what’s going on with their environments. Well, worry no more! SIEMs such as AT&T Cybersecurity’s AlienVault® USM Anywhere™ can watch for IOCs in Docker and Kubernetes.
In some cases, containers actually make monitoring easier. In general, containers simplify log collection and ingestion. Cloud management tools such as Microsoft Azure Monitor and AWS CloudWatch automatically aggregate log messages, and we can feed these into the SIEM for alerting. This can eliminate the need to deploy a SIEM sensor agent inside a container itself and even expand visibility across the entire application.
Per AT&T Cybersecurity, Kubernetes correlation rules that we can monitor in USM Anywhere include the following:
- Known crypto mining image
- Potentially dangerous container command
- Known malicious Kubernetes pod
- New cluster role with exec permissions
- Kubernetes unauthenticated request allowed
- New privileged cluster role
- A user attached to a Pod
- New host network Pod
- Kubernetes service exposing resources
- New privileged container in a Pod
- New Pod using a sensitive volume
- Unauthenticated command execution in a Pod
- Kubernetes network policy disabled
- Kubernetes API made public
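Several of these rules flag weakened cluster configuration. As one illustrative example tied to the "Kubernetes network policy disabled" rule, a default-deny NetworkPolicy like the sketch below (the namespace is hypothetical) blocks all ingress traffic to Pods in a namespace; an alert that such a policy was disabled or deleted is exactly the kind of IOC worth investigating:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production   # illustrative namespace
spec:
  podSelector: {}         # an empty selector applies to every Pod in the namespace
  policyTypes:
  - Ingress               # no ingress rules are listed, so all ingress is denied
```

Pods then only accept traffic explicitly allowed by additional, narrower policies, which keeps the cluster's exposure deliberate rather than accidental.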
As part of Abacode’s Cyber Lorica™ deployment, our engineers work with our customers’ technical staff to understand their container environments, configure log ingestion, establish incident response protocols, and baseline the monitoring approach. Our analysts then watch that environment with eyes-on-glass, 24/7/365. Any alerts are investigated and escalated as required, and our customers have peace of mind knowing that someone always has their back.
Download Our Free White Paper:
3 Simple Steps to Turn Your Cybersecurity Challenges into a Competitive Advantage