Part 1 of 4: The Basics
At the end of 2016 and lasting a few months into 2017, I completed a proof-of-concept port to Linux for System z of a large Enterprise Application that had been running on the Amazon Web Services cloud. This was a Docker-based application written in Java…so of course, it would be trivial to port. WRONG. While the application is in Java, it called many pieces of open source code. Much of that code hadn't been ported to System z yet or wasn't widely adopted there. What I thought was a very simple exercise turned into a six-month effort.
What I’d like to do, via a series of blog entries, is share my experience in the hope this might help some other organization decide to do a similar porting task. While I’ve been working with mainframes for decades, this was my first Linux porting experience. So I’ll be describing how this experience helped me to Master the Mainframe, though that title seems reserved for university students.
This could be a book, but breaking it up should make it easier to digest:
- The Basics: A high-level overview of the application, the development environment, the system setup required to begin the porting exercise, and the scope of the port.
- The Good: The people who assisted and taught me, the things that ported easily and the simplicity of getting started via the Linux Community Developers system.
- The Bad: The new open source for System z, the modifications necessary for open source code to run on z, the debug experience, and the time necessary to complete the porting process.
- The Future and Value: Despite the difficulties, there is great business value in getting these types of Enterprise Apps onto System z.
The Basics
This entry covers the basic desktop development environment and the application's original production target: x86-based cloud servers. That is the traditional development environment and the primary deployment target of the application. I needed to fit in and work within this environment before I could ever consider doing the unique activities necessary for success on Linux for System z.
Application Overview
Because of the proprietary nature of the application and its intellectual property, I'm not going to name the vendor or application, and this overview of the workflow is simplistic, at best, so as not to give away any trade secrets. The vendor is an early-stage startup with an application that handles biometric authentication in a marvelous way. The application has a callable interface to start a request. Using cloud-based services, it then communicates with the end user, runs analytics based on a number of system-defined characteristics, logs a number of things for diagnostics, audits and future analytics, and provides a go/no-go decision back to the original caller. It also includes a number of applications and user interfaces to manage the cloud deployment. Finally, the vendor has an enormous test suite to emulate and automate the entire end-to-end workflow.
Development Environment
This vendor was doing all of their development for the x86 platform, originally targeting any Linux version supported by Amazon Web Services, which included CentOS/Red Hat versions. Their first development environment used Maven tooling and pom.xml scripts that targeted deployment into Docker containers. They used GitHub capabilities to clone and manage the source code libraries within their business.
The first major effort was for me to establish a development environment on my computer and prove that I could work with and build a workable x86 version of the code. My computer of choice was a 2010 MacBook Pro running the latest macOS at the time. The first thing to do was turn my Mac into a real developer's machine. I installed Xcode, Atom, SourceTree, FileZilla and Docker, which enabled me to look like a Linux system, edit source files intelligently, manage access to the source files, facilitate cloning of source and execute the code. There was other locally variant software I needed to install using a script that was provided to me. I love the Mac, as did the vendor, whose entire team used it, so that was really helpful. I then needed a VPN into their system and I was off and running.

I used this setup for about two months. One thing I learned, painfully, was that the 2010 Mac was SLOOOOOWWWW. What would take them 15 minutes might take me over an hour. So I decided to upgrade to a MacBook Pro Touch Bar quad-core laptop with 16GB of memory. Now my work completed faster than their 15 minutes, which was a blessing. I can't stress enough the value of a good starting point on the desktop or laptop for this type of development! It was life changing for me.
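For anyone standing up a similar Mac toolchain, the installs above boil down to a handful of commands. This is only a sketch and assumes Homebrew is already installed as the package manager; the vendor actually supplied their own script for the extra local pieces.

```sh
# Hedged sketch of the desktop tooling installs (assumes Homebrew is present;
# the vendor's own setup script handled their local extras).
xcode-select --install                                 # Apple command-line developer tools
brew install --cask atom sourcetree filezilla docker   # editor, Git client, file transfer, Docker
```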
Open Source and Operating System Dependencies
The first version of the vendor code used CentOS/Red Hat as the target deployment environment. This code runs over 50 Docker containers. Each container is intended to be as small, memory-wise, as possible, so that it is scalable in a highly virtualized environment. As mentioned earlier, they also used Maven and pom.xml scripts to do their container builds. Each container had a script that would gather the necessary prerequisite open source parts and their Java code, and then do the build so there was an executable container. Naming conventions, versioning and more were part of these Maven scripts. About 90% of the open source code used was available in binary form as either an RPM, ZIP or TAR file. Those binaries were either copied into the vendor's library system or accessed via a URL and dynamically downloaded from the internet during the build process. I'll get into the System z ramifications of this in the Good and Bad blog entries.
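Stripped of the Maven specifics, each container's build amounted to something like the sketch below. The file names, URL and tagging scheme are illustrative only, not the vendor's actual scripts.

```sh
#!/bin/sh
# Illustrative per-container build: fetch a prebuilt open source binary,
# add the vendor's Java artifact, then produce a tagged container image.
set -e
VERSION=1.4.2    # hypothetical version string from the build's naming scheme

mkdir -p build

# Prerequisite open source part, pulled as a prebuilt TAR during the build
curl -fSL -o build/tooling.tar.gz \
    https://example.org/downloads/tooling-2.1.0.tar.gz

# The vendor's Java code, built separately and copied alongside
cp ../service/target/service.jar build/

# Build the executable container with the team's naming and versioning scheme
docker build -t vendor/service:"${VERSION}" build/
```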
This is the development environment in which I began the first phase of the port. The prototype I was building was only for a functional test to prove the code could work. We intended to accomplish our test goal with only 40 of the 50 containers being ported. After a few weeks of porting, we completed what we thought was a good test level of code. But then we identified that some critical test containers were missing. Unfortunately, the vendor didn't use the same library management rigor for their test suite, and I was going to have to rebase my code.
Rebasing the Code and Changing Development Environments
Unfortunately, that was just the tip of the iceberg of changes. I mentioned this was a startup vendor. They had two very large customers testing the code when I started, and they realized early on that they had a scaling problem. They also realized they had some development inefficiencies. When you get a Red Hat, SUSE or Ubuntu distribution, there is a lot of software in the package, much as when you get the z/OS operating system, macOS or Windows. As such, the base image of a large Linux distribution can start at 250MB and easily grow beyond 750MB. When you add hundreds of virtualized containers, each having that size as its basic footprint, the overall system runs out of memory pretty quickly. However, if the base image can start at 18MB and run at about 50MB, then greater scale is possible. Around the time development of this application began, the Alpine Linux distribution emerged, and it met the small-size requirement. The vendor began to rebase all of their test code, and as much of the open source code as they could, on Alpine to take advantage of this reduced memory footprint. That was and is an excellent business decision on their part.
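To make the size argument concrete, here is a minimal, hypothetical service image on an Alpine base; the package and jar names are mine for illustration, not the vendor's. The same service built FROM a full-distribution base starts hundreds of megabytes larger before any application code is added.

```dockerfile
# Hypothetical minimal service image on an Alpine base (illustrative names).
# A full-distribution base such as centos:7 weighs in at several hundred MB;
# Alpine's base layer is only a few MB.
FROM alpine:3.6

# Add just the Java runtime the service needs from Alpine's package index
RUN apk add --no-cache openjdk8-jre

COPY service.jar /opt/service/service.jar
ENTRYPOINT ["java", "-jar", "/opt/service/service.jar"]
```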
Maven is a fairly complex environment for building Docker containers. It works; both the vendor and I proved that it could work. However, in addition to open source code, there are now open Docker containers that can be leveraged as-is in place of an open source binary. In order to do that with Maven, though, the Docker definition files of these open containers must be cut and pasted and then modified into the Maven script syntax. And each time a container definition changes in the open source world, the Maven scripts need to be hand-modified. So the vendor dropped Maven as the base for their container build environment and switched to using Docker build definitions directly. Again, I applaud the vendor for doing this. It simplified the development environment, gave them access to additional open source code repositories and made everything easier to manage.
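In day-to-day terms, the switch changed the build workflow roughly as sketched below. The Maven plugin goal and image names are assumptions for illustration, not the vendor's actual configuration.

```sh
# Before (hedged): each container image was produced through a Maven Docker
# plugin, with the image definition embedded in pom.xml, e.g. roughly:
mvn clean package docker:build

# After: a plain Dockerfile per container, built and tagged directly, so
# published open source images can simply be referenced in a FROM line.
docker build -t vendor/auth-service:1.2.0 .
docker push vendor/auth-service:1.2.0
```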
The unintended consequence of the vendor's change from Maven to pure Docker and from CentOS/Red Hat to Alpine was that I had to start the port all over. I'm going to save the details of that for the Good and Bad entries, as they are directly applicable to System z.
As far as Linux for x86 cloud environments are concerned, this vendor has a world-class development environment and is working to create the most reliable, secure and efficient application possible. Ultimately, those attributes must apply to System z deployment as well. I'll be covering that status in the other blog entries.
Read Part 2 here.
Read Part 3 here.
Read Part 4 here.
Jim Porell is a Rocket Principal Software Architect, focusing on new functions for System, Storage and Security products from IBM. His primary focus is the architecture of the OMEGAMON family of monitoring agents. Prior to joining Rocket, he was an independent consultant and retired IBM Distinguished Engineer. He held various roles as Chief Architect of IBM's mainframe software and led zSystems Business Development, as well as marketing of Security and Application Development for the mainframe. His last IBM role was Chief Business Architect for Federal Sales. Jim held a TS/SCI clearance for the US Government, was a member of the US Secret Service Electronic Crimes Taskforce in Chicago and co-authored several security books. He has done cybersecurity forensic work at a number of retail, financial and government agencies and created a methodology for interviewing customers to help large enterprises avoid security breaches. Jim has over 43 years of experience working with Information Technology.