Transitioning FSI Big Data Workloads to the Cloud

12 February 2021 | 4 minutes

KX recently took part in two panel discussions at the “Transform Finance: Big Tech Event” held on 3rd and 4th February, both focusing on the adoption of the cloud to leverage big data analytics.

In the first, Kathy Schneider, CMO of KX, hosted a discussion on the topic of “Transitioning FSI Big Data Workloads to the Cloud”. Participants on the panel included Brian Stephenson, Co-Founder of Pegasus Enterprise Solutions; Jamil Ahmed, Director at Solace; Mark Etherington, CTO at Crux Informatics; and Dan Seal, Chief Product Officer at KX. Their varied backgrounds, spanning infrastructure, middleware, data flow and analytics, provided some very interesting insights into the current state of cloud migration: the challenges it poses, the opportunities it provides and the developments we should expect to see.

All panelists agreed that moving to the cloud is a high priority for many of the major banking institutions, and that there is now widespread recognition across the enterprise of the advantages it offers: finance is motivated by cost reduction, technology sees opportunities for more flexible development, and operations welcomes the easier rollout and maintenance that standardizing on a new technology stack brings. Driving them all, of course, is the business desire to derive insights, at both scale and speed, in a fast-moving, data-driven world.

But while the motivation to migrate is there, so too are a number of challenges in realizing it. Several were outlined:

  • It requires a new approach to software architecture, and a new skill set, to ensure migration does not restrict itself to a “lift and shift” that realizes only part of the benefits.
  • Banks have accumulated large infrastructures and ecosystems around their operations, which makes rearchitecting them complex. This is particularly true in areas like pre-trade workflow, for example.
  • Performance is a key consideration too, and there are particular use cases, e.g. high-frequency trading (HFT), that are unlikely to transition to the cloud anytime soon. However, it is entirely possible to transition the analytics workloads and data science activities that support these use cases, and then run in a hybrid fashion across cloud and co-located infrastructure (see the sketch after this list).
  • Other barriers may include regulatory restrictions on where data can reside, or simply short-term constraints on time and resources.
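
To make that hybrid pattern concrete, here is a minimal sketch in Python. It is purely illustrative: the endpoint URLs, the JSON payload and the latency_sensitive flag are assumptions for the example, not a KX or panelist API.

```python
# Illustrative sketch only: route analytics queries between a cloud endpoint
# (elastic, cost-efficient) and a co-located endpoint (latency-critical).
# The URLs, payload format and latency_sensitive flag are invented here.

import requests  # any HTTP client would do

CLOUD_ENDPOINT = "https://analytics.example-cloud.com/query"   # research / backtesting
COLO_ENDPOINT = "https://colo.example-exchange.net/query"      # latency-sensitive path

def run_query(query: str, latency_sensitive: bool = False) -> dict:
    """Send the query to the co-lo engine when latency matters,
    otherwise to the cloud engine."""
    endpoint = COLO_ENDPOINT if latency_sensitive else CLOUD_ENDPOINT
    response = requests.post(endpoint, json={"query": query}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # A historical research query is a natural fit for the cloud side.
    print(run_query("select avg price by sym from trade where date=2021.02.03"))
```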

The upshot is that we will live in a hybrid, multi-cloud world of public providers, private data centers and on-premises computing. We should not be surprised by this: it has happened in other industries where, for a variety of reasons including performance and regulation, data and processing have migrated back and forth. Finance will follow the same trend.

This led to a discussion on the broader appetite for consuming technology in its various “as a Service” guises. The consensus was that the move to data centers accelerated openness to third-party management of critical, but non-core, activities in the bank. It began with storage and infrastructure such as middleware, and is now extending to software: not only to platforms like databases, but further up to the application level, where the use of microservices transforms the software stack and significantly improves the software development lifecycle (SDLC) for creating new functionality.
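
As a deliberately simplified illustration of that shift, a single analytics capability, say a VWAP calculation, can be packaged as an independently deployable microservice rather than another change to a monolith. The sketch below assumes Python with FastAPI; the service name and /vwap endpoint are hypothetical.

```python
# Minimal sketch of one analytics capability exposed as a microservice.
# Assumes FastAPI/uvicorn are installed; the /vwap endpoint is hypothetical.

from fastapi import FastAPI

app = FastAPI(title="vwap-service")

@app.get("/vwap")
def vwap(prices: str, sizes: str) -> dict:
    """Volume-weighted average price from comma-separated price and size lists."""
    p = [float(x) for x in prices.split(",")]
    s = [float(x) for x in sizes.split(",")]
    total = sum(s)
    return {"vwap": sum(pi * si for pi, si in zip(p, s)) / total if total else None}

# Run with:  uvicorn vwap_service:app --port 8080
# Example:   GET /vwap?prices=100.0,100.5&sizes=200,300  ->  {"vwap": 100.3}
```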

But how can vendor lock-in be avoided in these situations, not only in software but in data consumption? The consensus was that the solution lies in the adoption of open standards, although even that has its challenges. While a company strategy may unambiguously endorse open standards, it may be compromised by short-term, department-level, tactical decisions that undermine it. Governance, therefore, is crucial.

At a broader level it was suggested that, ironically, the issue may not be the absence of standards but rather the abundance of them. The industry is replete with competing standards, and data and software vendors similarly have proprietary data models for representing and consuming their functionality. Their ultimate reconciliation depends on the industry coming together to form “a coalition of the good”, but in the meantime we need capabilities that enable migration across the various providers and standards (one minimal illustration follows below).
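
One way to provide that migration capability in the meantime is a thin translation layer that maps each provider’s proprietary model onto a single canonical one, so downstream consumers never bind to a vendor-specific schema. The sketch below is a minimal illustration; both “vendor” record formats are invented for the example.

```python
# Illustrative sketch: a canonical trade record plus adapters for two invented
# vendor schemas, so analytics code never depends on a provider-specific format.

from dataclasses import dataclass

@dataclass
class CanonicalTrade:
    symbol: str
    price: float
    size: int
    venue: str

def from_vendor_a(record: dict) -> CanonicalTrade:
    # Hypothetical vendor A: flat record with short, lower-case keys.
    return CanonicalTrade(record["sym"], record["px"], record["qty"], record["mkt"])

def from_vendor_b(record: dict) -> CanonicalTrade:
    # Hypothetical vendor B: nested instrument block and different field names.
    inst = record["instrument"]
    return CanonicalTrade(inst["ticker"], record["lastPrice"], record["lastSize"], record["exchange"])

# Downstream analytics only ever see CanonicalTrade, whichever provider supplied it.
trade = from_vendor_a({"sym": "ABC", "px": 101.25, "qty": 500, "mkt": "XNYS"})
print(trade)
```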

The consensus at the end of the discussion was that the cloud is heralding a new world in terms of software, infrastructure and data.

  • Microservices will enable the retirement of older monolithic systems, replacing them with agile new capabilities that are easier to develop and maintain.
  • Data and technology will decouple so that each can change and progress independently.
  • Compute will be able to relocate dynamically as circumstances demand.

The impression left was that we are on the brink of significant, but positive, change. Click on this link to hear the full recording of the panel discussion.

To read about the other panel discussion, “How to utilise and leverage big data and analytics while adopting a new technology”, please click on this link.

Demo kdb, the fastest time-series data analytics engine in the cloud