Part 2 of 4: The Good
In Part 1, I provided a high-level overview of what I intended to port to Linux for System z. The original application was built for x86 systems, so all of its binaries run on x86, and the Docker containers those applications run in are x86 binaries as well. My job was to create the Linux for System z (aka S390X) binaries with as little change as possible.
I also mentioned that I was working with a start-up vendor. I had done some business work to show them the value of porting the application to System z, but they were neither skilled in System z nor able to afford one of their own. So I challenged them to let me prove this could be successful, and they took me up on it and agreed to work with me.
Vendor Development Team
While a small development organization, they still had over 25 very proficient programmers and testers. I was extremely fortunate to have their lead developer as my mentor. He and I met at the same time for an hour every day to check on progress, educate me, or diagnose any problems I had so that I could make progress the next day. Most important, he was learning about the mainframe and was as intrigued by the possibility of business success as I was, so it was a great experience for both of us. I greatly appreciate the time and effort he put in to make this a success.
Linux Community Development System for z
Where do you find a mainframe? You ask the Community Development team. Eva Yan at IBM was instrumental in getting the vendor and me approved for access to Docker containers on the mainframe. Cindy Lee at IBM and her team were fantastic at showing me where all the open source for z was available in the community, and Martha McConaghy at Marist College, the host for vendor access to the LCDS, was terrific in helping me keep the system running.
Docker is a great place to work with portable code. My development environment was an x86 Docker container environment that pointed to the S390X Docker on the LCDS system as the target deployment environment. I'm not going to spend time on the details of the setup, but suffice it to say it all works well.
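As a rough illustration of that arrangement, the sketch below points a local Docker client at a remote S390X daemon; the hostname, port, and image tag are placeholders, not the actual LCDS configuration.

```sh
# Point the local Docker CLI at the remote s390x daemon (placeholder host/port;
# in practice the connection would be secured with TLS certificates or a tunnel).
export DOCKER_HOST=tcp://lcds-guest.example.edu:2376
export DOCKER_TLS_VERIFY=1

# The build context is shipped to that daemon, so the build runs on the
# mainframe and produces s390x binaries, while the Dockerfile and source
# stay on the x86 laptop.
docker build -t myapp:s390x .
```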
Scalable Virtualization
I didn’t mention before that the vendor is on a different continent. So imagine the path from my laptop: a VPN to the vendor’s libraries, where some code is downloaded and merged with code on my desktop; Docker on my desktop puts all the parts together, ships them securely to Docker on the mainframe image, does the build, and sends the results back to me. If this process took 10-15 minutes on my laptop alone, then once you add the networks, the bulk distribution of code between systems, and the build itself, it’s going to take more time than on a single system.

A single container build was never correct the first time. My mantra, for years, has been “Next time for sure!” I’d fix what needed fixing, get a little farther the next time, repeat the mantra, and try again until I finally got a successful build. Time and performance aren’t a problem when building a single container. The problem is building 40-50 containers at once, or as I liked to call it, “The Big Bang”. Then it was hours to do the build on the mainframe, instead of an hour on x86.

You’d think that was the bad, right? It was actually good, because one call to Eva requesting more memory and processors moved me to a very competitive deployment environment. Just like my 2010 MacBook, which was under-configured for this scale of development, the initial Linux system I was given was an under-configured virtual machine. With a simple config change, within moments of my request and with literally no downtime, I was up on a larger Linux image, thanks to the magic and wonders of the underlying scalable z/VM server image.
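The “Big Bang” itself was nothing exotic, just many builds directed at the same remote daemon. A minimal sketch, with hypothetical service names and directory layout, looks something like this:

```sh
#!/bin/sh
# Rebuild every service image against the remote s390x daemon configured earlier.
# Service names and paths are hypothetical; the real application had 40-50 of them.
for svc in api auth scheduler reporting; do
    echo "Building ${svc}..."
    docker build -t "${svc}:s390x" "./services/${svc}" || {
        echo "Build failed for ${svc} -- fix it and... next time for sure!"
        exit 1
    }
done
```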
Open Source Access
The LCDS virtual images came with a Red Hat kernel as the base, with some optional software included, but that was all. I needed several dozen pieces of open source software added to my environment to build my S390X binaries. Again, I didn’t want to spend the money to buy a supported Linux distro for this Proof of Concept. I was directed to Sine Nomine Associates, and in particular to Neale Ferguson. He could not have been a better ally in this effort. First and foremost, he pointed me to libraries on their servers where I could retrieve many of the binaries that were necessary. It was such a relief to find many of the RPMs I needed on their website. As mentioned earlier, I was a newbie to this kind of porting. He spent considerable time mentoring me on both basic Linux and System z specifics to keep me moving along. Just as important, Neale was on the Docker bandwagon. He had begun building Docker containers with specific functionality. I was able to take several of his containers and embed them into the containers I was building to simplify my deployment.
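In Docker terms, “embedding” a prebuilt container usually means layering on top of its image. A sketch of that pattern, where the image name and file paths are entirely made up and simply stand in for one of those prebuilt s390x images:

```sh
# Layer the application on top of a prebuilt s390x base image.
# The image name and paths below are placeholders, not real published tags.
cat > Dockerfile <<'EOF'
FROM prebuilt/s390x-runtime:latest
COPY build/myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
EOF

docker build -t myapp:s390x .
```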
The Linux community also maintains GitHub repositories of System z-ready open source code. I bookmarked those pages and visited them often. I’ll point to links in a Bibliography in Part 4.
The real dilemma came when the vendor switched from CentOS to Alpine as the base Linux image. Alpine was very new in late 2016 and early 2017. While both are Linux distributions, they package software differently, so Docker builds for CentOS are different from those for Alpine. Because I was doing a proof of concept, it really didn’t matter whether I used CentOS or Alpine. However, the longer my porting took, the more of their code the vendor converted to Alpine, so I would have had to make “throwaway” changes to keep supporting CentOS.
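To illustrate the packaging difference, here is a rough sketch of the same dependency step written for each base; the image tags and package names are purely illustrative, not the vendor’s actual dependency list:

```sh
# CentOS installs packages with yum/rpm...
cat > Dockerfile.centos <<'EOF'
FROM centos:7
RUN yum install -y gcc make openssl-devel
EOF

# ...while Alpine uses apk (and musl libc), so the Dockerfile must change.
cat > Dockerfile.alpine <<'EOF'
FROM alpine:3.5
RUN apk add --no-cache gcc make openssl-dev
EOF

docker build -f Dockerfile.centos -t myapp:centos .
docker build -f Dockerfile.alpine -t myapp:alpine .
```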
Worse than that, there was only one person even trying Alpine on the mainframe, and that was “some college kid” doing it as a research project. How could I build an enterprise application on a system that one unpaid person was supporting? That person was Tuan Hoang, a Marist College student, and I am indebted to him. I began contacting him late in 2016. While he had the kernel ported, very few Alpine packages had been ported to S390X. He was quickly up to the task. I gave him a list of high-priority packages. Each night, I’d get an update on what he had completed; each day, I’d build some more containers off his evening’s work. It got to the point where the only packages he hadn’t done were third-party open source packages. This really got my development effort going. The best news of all came at the end of my project: Tuan had worked so hard to get his “prototype” of Alpine for System z going that the Alpine community accepted S390X as a primary target platform. All Alpine packages would be available on S390X simultaneously with their deployment on other hardware architectures. It was painful, but it was wonderful at the same time.
Good people make life easier
What I found throughout this porting effort is that there is a wonderful community of people dedicated to the support and value of System z. They were very accommodating and greatly reduced my effort.
Read Part 1 here.
Read Part 3 here.
Jim Porell is a Rocket Principal Software Architect, focusing on new functions for System, Storage and Security products from IBM. His primary focus is the architecture of the OMEGAMON family of monitoring agents. Prior to joining Rocket, he was an independent consultant and a retired IBM Distinguished Engineer. He held various roles as Chief Architect of IBM's mainframe software and led zSystems Business Development, as well as marketing of Security and Application Development for the mainframe. His last IBM role was Chief Business Architect for Federal Sales. Jim held a TS/SCI clearance for the US Government, was a member of the US Secret Service Electronic Crimes Taskforce in Chicago and co-authored several security books. He has done cybersecurity forensic work at a number of Retail, Financial and Government agencies and created a methodology for interviewing customers to avoid security breaches at large enterprises. Jim has over 43 years of experience working with Information Technology.