How not to grab the headlines?

No one needs convincing in 2019 that their network should be protected by a firewall, just as no company email system in the 21st century runs without virus protection. It is likewise self-evident that not everyone who works at a company is given system administrator privileges. But what about container technology?

The last two decades have brought many changes in how user needs are addressed. It began with virtualisation in the mid-2000s, when it became easier to assemble high-availability clusters from virtual machines, which were more flexible than any previous setup at satisfying users' requirements. Automation became easier as well. Previously costly and complicated services became cheaper and simpler, and operating these systems required less human effort.

This was followed by the spread of container technology, which allowed the underlying infrastructure to respond to changes with even greater speed and flexibility, and with even less human intervention. Containers set out to conquer the world in 2013, and nowadays they are everywhere. Popularity, however, eventually raises IT security issues, as a couple of high-profile scandals have illustrated. As the initial euphoria dissipated, experts came to the usual conclusion: standards must be developed to ensure the safety and security of this rightly popular technology to the greatest possible extent.

Image

Before discussing any scandals, it is helpful to look at the concept of the image. Put simply, an image is the file at the heart of a container: when a container is launched, this image is executed. Images have their roots in the era of virtual machines, but those images contained complete operating systems. The novelty is that the images used in container technology are much smaller, as they contain only the libraries needed to run essentially a single application. Each container has a single, well-defined task in our system.
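To illustrate, a minimal image definition might look like the sketch below (the base image, package, and application names are hypothetical, chosen only to show the idea of "one application plus its libraries"):

```dockerfile
# Start from a small base image rather than a complete operating system
FROM alpine:3.9

# Install only the runtime the application actually needs
RUN apk add --no-cache python3

# Copy in the one application this container is responsible for
COPY app.py /app/app.py

# The container has a single, well-defined task: run this application
CMD ["python3", "/app/app.py"]
```

The resulting image is a few tens of megabytes rather than the gigabytes a full virtual machine image would occupy.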

Now let’s take a look at the scandals that brought the security issues to light.

Grabbing the Headlines

The first case comes from the field of traditional operations, and it is highly likely that it could have been avoided with automation (and the use of containers). In 2017, the US credit reporting agency Equifax suffered a breach in which the personal data of nearly 150 million clients was stolen. The scandal was aggravated by the fact that it took over two months for their operators to notice the breach, because the network intrusion detection mechanisms had been functioning inadequately for months. Automatic updates had not been turned on either; the ultimate cause was human error.

What happened to the image repository Docker Hub, on the other hand, was definitely a container technology issue. Docker Hub is a platform through which anyone can store and share the images they create. In May 2017, however, someone uploaded infected content. Since the Hub's services are free, it has many users, and the content in question was downloaded millions of times. What these users did not notice was that, in addition to running a useful application, the container had a parallel function: it used part of the processor and memory to mine cryptocurrency for someone else. It took a whole year before the company operating the platform heeded the warnings of security experts and removed the infected content; by then, it turned out, unsuspecting users had mined $90,000 worth of cryptocurrency for the attacker. Creating the image may have taken the perpetrator as little as five minutes, and they were never caught; such is the anonymity of cryptocurrencies.

In February 2019, a vulnerability was discovered in runC, the low-level runtime at the foundation of all popular container engines. Because runC is so fundamental, computers running containers all over the world were affected.

By exploiting this flaw, an attacker could easily gain system administrator rights on the host machine, leaving it fully exposed. The problem has since been identified and published by the creators of runC, who at the same time released a patch to close the vulnerability. The only remaining question is how many users will install the update, and how many will keep running the old version.

No Outdated Tools

But how can such cases be avoided? It is neither complicated nor costly, but it does require attention and awareness. There are organisations that publish free security recommendations detailing the steps worth taking to harden systems.

While such guides have long existed for operating systems and server applications, they are now available for container technology as well. It is of course not enough to download these materials; they must also be put into practice. Automated tools can assist in accomplishing this. A complete solution will of course require a specialist, who may be either an external expert or an internal operator.

It is important to keep in mind that there are no outdated tools in security technology: "old" and new solutions must be used side by side! If we neglect the existing layers of protection and, for instance, our internal network becomes accessible to intruders, then that is where the attack will come from. Whatever has mattered in security technology so far remains indispensable, but new components need new tools.

Investigating Vulnerabilities

There are also publicly accessible databases that collect and publish reported vulnerabilities. The description of the above-mentioned runC vulnerability is available here: https://nvd.nist.gov/vuln/detail/CVE-2019-5736

We can make use of these databases to automatically and statically check the contents of container images as part of the build pipeline, before the images are delivered to end users.
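As a simplified sketch of such a pipeline check (the vulnerability list, version bounds, and image contents below are illustrative assumptions, not real scanner data), imagine we have already extracted an image's installed components into a dictionary:

```python
# Minimal sketch of a static image check against a vulnerability list.
# A real pipeline would pull entries from a database such as the NVD
# and inspect the actual image layers; the data here is made up.

# Known-bad versions, keyed by component name (hypothetical subset)
VULNERABILITIES = {
    "runc": {"max_affected": (1, 0, 5), "cve": "CVE-2019-5736"},
}

def parse_version(v):
    """Turn a version string like '1.0.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def scan_image(components):
    """Return the CVE IDs of any component at or below an affected version."""
    findings = []
    for name, version in components.items():
        vuln = VULNERABILITIES.get(name)
        if vuln and parse_version(version) <= vuln["max_affected"]:
            findings.append(vuln["cve"])
    return findings

# A build step would fail the pipeline if the scan reports anything:
image = {"runc": "1.0.4", "openssl": "1.1.1"}
print(scan_image(image))  # the vulnerable runc version is flagged
```

In practice this logic lives inside an off-the-shelf scanner invoked as a pipeline stage; the point is that the check runs before anyone can pull the image.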

This method, however, is not sufficient to eliminate all potential risks. This is why it is important that we also monitor the containers that are already up and running.

For this purpose, too, there are free, open-source (as well as commercial) monitoring tools.

Companies must establish their own policies and protocols to define how the infrastructure should respond when an error or defect is discovered in the system. Should it just send out an alert (via email or text message)? Or should the security software intervene in the case of a serious defect, for instance by isolating the affected component from the rest of the network?
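Such a policy can be sketched very simply (the severity scale, threshold, and action strings below are illustrative assumptions, not any particular product's API):

```python
# Sketch of a response policy: alert on minor findings, isolate on severe ones.
# Real tooling would send actual emails/SMS and reconfigure the network;
# here the decision is just returned as a string.

ISOLATE_THRESHOLD = 7.0  # assumed CVSS-like severity scale from 0 to 10

def respond(container_id, severity):
    """Decide how the infrastructure should react to a finding."""
    if severity >= ISOLATE_THRESHOLD:
        # Serious defect: cut the container off from the rest of the network
        return f"isolate {container_id}"
    # Minor issue: just notify the operators
    return f"alert {container_id}"

print(respond("web-frontend", 9.8))  # severe finding: quarantine
print(respond("web-frontend", 3.1))  # minor finding: notify only
```

The value of writing the policy down (in code or otherwise) is that the response is decided calmly in advance, not improvised during an incident.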

Risks

As we have seen, container technology is not without its own security risks, but it still has many advantages: cost efficiency, flexibility, and great potential for automation. If you snooze, you will eventually lose. But security must never be neglected!

As for our company, we use only reliable components, so when we build an infrastructure, we make it as secure as it can possibly be. Anyone who requests container services from us can rest assured that they will be delivered secured.

But new vulnerabilities are discovered every day. Even though maintaining an automated system requires minimal manpower, there are always decisions that require human input. That is why it is important to think about support and maintenance.

Like an Airbag

The solutions outlined above do not add significantly to the costs; the high expenditures tend to come when a system is left unsecured and the worst happens. It's like the airbag in your car: you can buy a car without one and rely on your driving skills, but should a truck slam into you, both you and your car will sustain serious damage.

It’s best to prevent this from happening.

Written by Levente László
Site Reliability Engineer

2019-05-14
