by Jack Stapleton
The elasticity of cloud computing means that systems can be quickly scaled up and down to meet fluctuating processing demands, without having to directly provision new hardware and other resources. Auto scaling technologies make the process even more flexible by enabling servers, storage, and networking resources to be commissioned and decommissioned in an instant, without manual intervention.
Auto scaling is achieved by monitoring the load on a system and dynamically acquiring or shutting down resources to match the varying load. Its goal is cost-effectiveness: rather than deploying a fixed configuration calibrated to the maximum expected load at all times, the appropriate resources are marshalled automatically, and only for the duration needed. In the cloud's pay-as-you-consume paradigm, this ensures that costs for idle machines at weekends are not the same as those for busy machines at peak trading times.
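To make the monitor-and-react cycle described above concrete, here is a minimal Python sketch of a threshold-based scaling loop. It is purely illustrative: the functions current_load, add_instance, and remove_instance are hypothetical placeholders standing in for real metric queries and cloud provisioning calls, and the thresholds are arbitrary, not values taken from the paper.

```python
import random
import time


def current_load() -> float:
    """Hypothetical metric: fraction of cluster capacity in use (0.0-1.0)."""
    return random.random()  # stand-in for a real monitoring query


def add_instance(instances: list) -> None:
    """Placeholder for provisioning a new server via a cloud API."""
    instances.append(f"rdb-{len(instances)}")
    print(f"scaled up to {len(instances)} instances")


def remove_instance(instances: list) -> None:
    """Placeholder for decommissioning a server; always keep at least one."""
    if len(instances) > 1:
        instances.pop()
        print(f"scaled down to {len(instances)} instances")


def autoscale(scale_up_at: float = 0.8, scale_down_at: float = 0.3) -> None:
    """Poll the load metric and adjust the instance count to match demand."""
    instances = ["rdb-0"]
    for _ in range(10):  # a real controller would loop indefinitely
        load = current_load()
        if load > scale_up_at:
            add_instance(instances)
        elif load < scale_down_at:
            remove_instance(instances)
        time.sleep(1)  # polling interval


if __name__ == "__main__":
    autoscale()
```

A production autoscaler would replace the random metric with real cluster telemetry and add safeguards such as cooldown periods to avoid oscillating between scale-up and scale-down.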
This paper outlines how auto scaling can be applied to kdb+ in a real-time database cluster and includes a simulation showing how savings of up to 50% could be made by suitably scaling resources.
To read the paper in full, please click on this link.