With the advent of IBM z/OS Container Extensions (zCX), many applications that currently run on Linux will also be able to run on z/OS as Docker containers. zCX runs Linux on Z applications on z/OS, so application developers with no z/OS skills can create and deploy Linux on Z applications in z/OS Container Extensions. Because these applications look like Docker applications to the developer rather than z/OS applications, Docker and Linux knowledge is all that is required.
Thus, with more and more IBM Z customers looking to exploit the zCX capabilities on z/OS, a basic understanding of how Docker and containers work has become almost essential. This article tries to help with just that: understanding Docker, containers and containerization.
Let us first talk about what Docker is. Docker is a piece of technology that helps us develop, deploy and run our applications. Imagine a world where you could create a package of an application that contains not only the application itself but also all of its dependencies. In other words, everything the application needs to run, for example system libraries, system utilities, a runtime, the application’s source code or a compiled version of it, and everything else the application depends upon, is packaged into a portable, executable format. That package can then be taken, moved around and run wherever you want: on an on-premises server, on your laptop or somewhere in the cloud. It doesn’t matter where you run the application, because the application is packaged with all its dependencies, which guarantees that it behaves the same no matter where it runs. This is exactly the benefit that Docker provides, and Docker achieves it by using software container technology.
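As a concrete illustration, such a package is described by a Dockerfile. The sketch below is a minimal, hypothetical example for a small Python web application; the file names, base image and structure are assumptions made for illustration, not something the article prescribes:

```dockerfile
# Start from a known runtime image; everything the app needs is layered on top.
FROM python:3.11-slim

# Install the application's dependencies in the exact versions it was built with.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application's own code to the package.
COPY app.py .

# Declare how the application is started when a container runs from this image.
CMD ["python", "app.py"]
```

The resulting image carries the runtime, the libraries and the application together, which is what makes it portable.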
Now that we know what Docker is, let us shift gears and try to understand what containers are. Portable, executable packages that contain an application and all of the application’s dependencies are called container images. And when we run a container image, we get a container. To put it simply, containers are just processes on an operating system, and Docker takes care of running those processes in isolation. That means a container can be imagined as having its own file system, users, process tree, networking stack and more. Containers do share the operating system’s kernel, and it is precisely this sharing that makes containers very lightweight and fast.
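You can see for yourself that a container is just a process. A small sketch, assuming a machine with Docker installed (the image and command are merely convenient choices):

```shell
# Run a long-lived container from the small Alpine Linux image.
docker run -d --name demo alpine sleep 300

# The container shows up in Docker's own listing ...
docker ps

# ... and, on a Linux host, it is also visible as an ordinary
# process in the host's process table.
ps aux | grep "sleep 300"

# Clean up.
docker rm -f demo
```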
Are Containers and Virtual Machines (VMs) the same thing?
No, they are not, although the two technologies have some similarities. For example, both allow us to create a portable package for an application, and both can help us isolate applications from each other. But the way each achieves this is fundamentally different.
Virtualization achieves isolation at the hardware level. A hypervisor slices physical resources such as RAM, disks and CPUs into logical pieces and assigns some of those slices to a virtual machine. The virtual machine runs its own operating system with its own kernel, in which we can then install our applications or run our services.
Containers, on the other hand, are simply processes on an operating system, and the isolation is achieved by utilizing features of the operating system’s kernel, such as namespaces and control groups (cgroups) on Linux.
Compared to containers, virtual machines are heavyweight constructs. A virtual machine runs its own operating system with its own kernel and must go through a time-consuming boot process. The operating system consumes some of the resources assigned to the virtual machine, and it ships with a host of libraries, system utilities and services that the user might not need at all to execute an application.
Containers, on the other hand, are very lightweight constructs and share the operating system’s kernel. Containers are simply processes on an operating system and therefore need no boot process to get started. And, usually, the images that power containers ship with exactly what is needed to run the application and nothing more.
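Both points are easy to observe in practice; a minimal sketch, assuming a Linux host with Docker:

```shell
# No boot process: the container runs its command and exits almost instantly.
time docker run --rm alpine echo "hello from a container"

# Kernel cgroups at work: cap the container at 256 MB of RAM and one CPU,
# with no hypervisor involved.
docker run --rm --memory=256m --cpus=1 alpine echo "running under resource limits"
```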
How can Docker make a developer’s life easier?
As a developer, it is quite common to end up in what I prefer to call dependency hell. People very often develop multiple applications on the same system, all of those applications have dependencies, and sometimes those dependencies interfere with each other. For example, I might have two applications that require the same system library, but in different versions. There can also be external dependencies, for example on a database management system: Application1 needs the database management system in version A and Application2 needs it in version B. Docker can solve these dependency problems for us. With Docker, you can run external dependencies such as a database management system inside a container: one container for version A and another container for version B, so each application talks to the version it needs. Internal dependencies are packaged into the container images that we use to run and develop our applications. Each application gets its own container image, in which we install exactly the dependencies that the application needs, in the versions it needs them.
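The two-versions-of-a-database problem above might look like this in practice. PostgreSQL and the port numbers here are example choices of mine, not something the article prescribes:

```shell
# Version A of the DBMS for Application1, reachable on host port 5433.
docker run -d --name pg-app1 -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres:13

# Version B of the same DBMS for Application2, reachable on host port 5434.
docker run -d --name pg-app2 -e POSTGRES_PASSWORD=secret -p 5434:5432 postgres:16

# Both versions run side by side on one machine, fully isolated.
docker ps
```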
Onboarding new developers becomes very easy, too. They no longer need to install a ton of tools such as a database management system, system libraries or a language runtime; all that is needed is Docker. Once new developers understand how Docker works, they can use the tools they are familiar with to build a new version of an application and, finally, release it. With Docker, developers can use easy-to-understand file formats that describe the steps necessary to get an application up and running, from installing dependencies to building the application and, finally, starting it.
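Given a Dockerfile like the sketch shown earlier, the whole onboarding workflow reduces to a couple of commands (the image name and port are hypothetical):

```shell
# Build a container image from the Dockerfile in the current directory.
docker build -t myapp:dev .

# Run the application exactly as every other developer runs it.
docker run --rm -p 8080:8080 myapp:dev
```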
How many times have we seen developers run into an issue where changes have been made to an application, tested, and everything works perfectly fine on the developer’s system; but after the change is passed to a tester or shipped to the production environment, things suddenly start behaving differently, and the application may even fail in production? Countless times, perhaps. Docker solves this typical problem for us: because the application is packaged into a container image together with all of its dependencies, we can be sure that the application behaves the same no matter where it runs.
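The key is that the very same image is promoted through every environment. A minimal sketch, assuming a hypothetical registry at registry.example.com:

```shell
# Build the image once and give it an immutable tag.
docker build -t registry.example.com/myapp:1.0 .

# Publish it, so that test and production later pull the identical bytes.
docker push registry.example.com/myapp:1.0
```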
How can Docker make a DevOps Engineer or Sysadmin’s life easier?
A DevOps engineer or sysadmin must deal with a variety of applications, each with its own dependencies. That means a DevOps engineer or sysadmin installs and maintains both the applications and their dependencies. With Docker, however, the developers hand over a container image that contains everything the application needs to run, so the DevOps engineer or sysadmin no longer needs to take care of the application’s dependencies. They don’t even have to care what language an application is written in; they simply take the container image and run containers based on it.
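From the operations side, deploying any such application looks the same regardless of what is inside the image. A sketch, reusing the hypothetical image from above:

```shell
# Fetch the image the developers published.
docker pull registry.example.com/myapp:1.0

# Run it detached, restart it automatically if it crashes, and expose
# its port; no knowledge of the application's internals is needed.
docker run -d --restart unless-stopped -p 80:8080 registry.example.com/myapp:1.0
```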
In an ideal world, containers are stateless. That simply means there is nothing inside the container that you care about, so nothing stops you from treating the container as disposable. This allows for building fault-tolerant systems: if a machine that runs your container dies, you can simply start new containers on a different machine. It also makes scaling much easier, because scaling your application up and down becomes a matter of starting and stopping containers.
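With Docker’s own tooling, that really is all scaling amounts to. For example, assuming a Compose file that defines a stateless service named web:

```shell
# Scale up: run three identical, disposable replicas of the service.
docker compose up -d --scale web=3

# Scale back down by simply stopping containers.
docker compose up -d --scale web=1
```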
Consider once more the classic problem of everything working perfectly fine in development, CI or even staging, and then things suddenly breaking when the application reaches production. With Docker, you can be sure that your application behaves the same no matter where you run it, because the application and all of its dependencies are shipped in a portable, executable package: the container image.
Are Containers and Containerization a New Technology?
No. In fact, containers have been around for decades, and they exist in many forms on many different operating systems. Historically, however, containers have been rather hard to use. Docker’s innovation was to provide tooling around containers that makes them very easy to use and accessible to the masses. With Docker, it becomes very easy to put your application into a container image, share that image, distribute it to your machines and run containers based on it. So, in essence, what Docker gives you is the hugely popular tooling ecosystem that makes using containers easy and fun.
Currently, there are two types of Docker containers: Windows containers and Linux containers. And because containers are simply processes on an operating system, Windows containers must run on Windows and Linux containers must run on Linux. However, running Linux containers does not mean that you have to run Linux natively on your system: Docker provides installers for Windows and macOS that automatically set up a virtual machine running Linux for you. Docker itself is a client-server application, and the server part of Docker either runs natively on your system, if it runs Linux, or inside the virtual machine that runs Linux. The Docker client is always installed natively on your system, so all the commands you execute are the same no matter where the server part of Docker runs.
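You can see this client-server split directly; docker version reports the two halves separately, wherever the server happens to run:

```shell
# The Client section describes the locally installed CLI; the Server
# section describes the Docker engine, which may live inside a Linux VM.
docker version
```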
Subhasish Sarkar is a Senior SQA Engineer, working at BMC Software India Pvt Ltd. He has 12+ years of relevant work experience in different IBM Z Mainframe Technologies. He is passionate and enthusiastic about Technology in general, the IBM Z Mainframe Platform in particular and IMS to be specific. Subhasish Sarkar is an IBM Z Champion (for 2020).