Legacy systems – their role – their future.

I used to run a company that specialized in providing so-called Direct Market Access (DMA) to suitably authorized Buy-Side firms, as well as to many firms on the Sell-Side.  Many of these firms (particularly the High Frequency Trading variety) would each generate transactions at a rate of some 30,000 messages per second.

If that sounds like a lot, remember that these transactions were themselves generated by processing even more data upstream (e.g. market data from various exchanges, exchange rates, social media platforms, financial and political news channels, etc.).

Obviously, the systems and safeguards necessary to prevent regulatory issues, breaches of risk management parameters and/or loss of connectivity had to be religiously enforced and ‘always on’ – tested, tested again, then refined and improved time and again.  The various system architectures we and our customers used tended to evolve over the years.

Some used so-called “state of the art” technology (think the latest, fastest, multi-core distributed server approach), whereas others relied on older yet proven workhorses such as the mainframe.  So which camp won this particular battle for supremacy?  The answer: it depends on how you account for costs.

For example, some of my customers in the “new-tech” space seemed not to have lives outside their offices.  They would be there practically every day (weekends included), desperately trying to squeeze out higher throughput and better resiliency, and to ensure that business continuity procedures would kick in the instant they were needed.

Others (mostly mainframe users) were far more sanguine, and more pragmatic, about getting the best performance out of their particular system configurations.

While the “new-tech” firms had to wait for the various stock exchanges to schedule specific days and times for testing against an exchange’s production systems, firms taking the mainframe approach could effectively load and monitor the latest versions of their software during regular production hours, without detriment to their throughput requirements.

In effect, the mainframe camp benefited from what I called a protean architecture: one that is variable, capable of taking many forms – or, more precisely, one that is versatile.  The throughput capabilities of mainframes to this day far exceed those offered by distributed x86 platforms.

Further, consider their ability to perform multiple complex analyses in parallel.  An IBM Z mainframe, for example, supports single instruction, multiple data (SIMD) processing, which lets one instruction operate on multiple data items at the same time.  When you start doing advanced modeling and analytics, IBM Z’s SIMD capabilities can reportedly deliver performance improvements of around 80%.
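To make the idea concrete, here is a minimal sketch – not IBM’s implementation, just an illustration of the SIMD principle – using Java’s incubator Vector API.  The class, method, and array names are my own hypothetical examples; on platforms where the JIT has vector support, each loop iteration maps onto hardware SIMD instructions, and elsewhere the code simply falls back to scalar execution.

```java
// A minimal SIMD sketch using Java's incubator Vector API (JDK 16+).
// Run with: java --add-modules jdk.incubator.vector SimdSketch.java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class SimdSketch {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Hypothetical workload: re-value a book of positions, value[i] = quantity[i] * price[i].
    // One vector operation handles SPECIES.length() items per iteration.
    static void revalue(float[] quantity, float[] price, float[] value) {
        int i = 0;
        int upper = SPECIES.loopBound(quantity.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector q = FloatVector.fromArray(SPECIES, quantity, i);
            FloatVector p = FloatVector.fromArray(SPECIES, price, i);
            q.mul(p).intoArray(value, i);   // one SIMD multiply across all lanes at once
        }
        for (; i < quantity.length; i++) {  // scalar tail for any leftover items
            value[i] = quantity[i] * price[i];
        }
    }
}
```

The point of the sketch is simply that one operation touches many data items per step – which is exactly why SIMD pays off when you are re-pricing or re-scoring large books in bulk.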

So whether you’re a quant fund relying primarily on algorithms, or you’re trying to gain an edge through machine learning and/or artificial intelligence, the proven speed, stability and reliability of mainframes should at least pique your interest.

To enable today’s programmers to write and use the latest technology, IBM has created a host of APIs.  Thus, if the latest and greatest mobile apps will assist you in amassing and analyzing important new data types like social media comments, why not explore how you might integrate those feeds via an API and let the mainframe do the grunt work – all the while ensuring the data is suitably encrypted and secure, yet always available (mainframes have consistently demonstrated that they suffer very little, if any, downtime).
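As a purely illustrative sketch – the endpoint, payload and field names below are hypothetical – a feed handler might push a scored social media comment over HTTPS to a REST API of the kind that can front a mainframe application, using nothing more than the standard java.net.http client that ships with JDK 11+:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FeedToMainframe {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: a REST API fronting a mainframe transaction.
        URI endpoint = URI.create("https://mainframe.example.com/api/sentiment");

        // Hypothetical payload: one social media comment, already scored for sentiment.
        String json = "{\"symbol\": \"IBM\", \"source\": \"social\", \"sentiment\": 0.72}";

        HttpClient client = HttpClient.newHttpClient(); // HTTPS keeps the feed encrypted in transit
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Mainframe replied: " + response.statusCode());
    }
}
```

Whatever API layer you already run on the mainframe side would expose that endpoint; the point is simply that the integration is an ordinary HTTPS call, not a platform rewrite.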

If you already have a mainframe, think also about how much of your intellectual property is invested in it; that clearly has value.  You know the machine performs the necessary tasks day in and day out, so why risk upsetting this by (unnecessarily) migrating to a distributed platform?

The mainframe is not itself perfect; it often relies, for example, on knowledge of the COBOL programming language.  There are of course people who are very competent with the language, but they tend to be much older than the new recruits, who have been trained on more modern languages.  Consequently, Linux on the mainframe is an approach that may be of interest; indeed, IBM has fostered a large and highly regarded series of focused user groups via its LinuxONE vehicle.

The main point I want to stress is that mainframes are still an essential component of today’s technology infrastructure.  In the financial sector, for example, over 90% of the world’s largest banks have chosen to rely on the mainframe’s proven and undoubted strengths and resilience.

Further, there is now a plethora of performance and optimization solutions that allow firms to truly fine-tune their systems, getting the biggest and most efficient “bang for their buck” and optimizing routines so that they do not have to pay prematurely for unnecessary expansion or new/replacement hardware.

To protect the IP investment you have built up over the years, to get as close as possible to zero percent downtime, and still integrate seamlessly with whatever the next “hottest thing” might be, resist the siren calls of the marketing teams and get the best out of what you already have.

If the backbone of your particular modus operandi is a mainframe, you can rest easier than the folks in the x86 camp, secure in the knowledge that your infrastructure provides the best throughput and reliability, and also takes excellent care of your cybersecurity concerns.  Who knows – with such peace of mind, you might even get to enjoy some personal and family time too!

If it isn’t broken, why fix it?

I have worked continuously in the Financial Services industry (primarily on the IT side) for over thirty years.
During this time I have worked first-hand on major industry initiatives in both the U.K. and the USA – such as TALISMAN, TAURUS, CREST, the Bank of England’s CGO, Counterparty/Client/Settlement Risk Reporting, CHAPS, Model A and B type Clearing, Intra-Day Payment Netting, Capital Gains Tax Reporting, Regulatory Reporting, Trading Interfaces (from DOT through to FIX APIs and beyond), Multi-Instrument and Multi-Currency systems, Direct Market Access and Custodian Services.
In short, I have been pretty much continuously involved with various types of FinTech for the longest time.
