by Enda Gildea
As data volumes continue to increase, ingesting, processing, persisting, and reporting on batch updates in a timely manner becomes a significant challenge. One solution is a batch-processing model in kdb+, whose requirements and considerations differ from those of a standard tick architecture.
In the latest of our ongoing series of kdb+ technical white papers published on the KX developer site, Senior KX Engineer Enda Gildea explains what batch processing is and outlines a framework for ingesting large volumes of batch data through kdb+ quickly and efficiently. The framework aims to optimize I/O, reduce the time and memory consumed by re-sorting, and maintain on-disk attributes.
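To illustrate the kind of step such a framework involves, here is a minimal q sketch (not taken from the whitepaper; the table name, schema, and database path are illustrative assumptions) of persisting one batch: the batch is sorted once in memory, written splayed to a date partition, and the parted attribute is reapplied on disk so queries stay fast without re-sorting persisted data.

```q
/ Illustrative batch of trade records (schema is an assumption)
batch:([] sym:`AAPL`MSFT`AAPL`IBM; time:4#.z.p; price:4?100e);

/ Sort by sym once, in memory, before persisting:
/ one sort per batch avoids re-sorting the data on disk
batch:`sym xasc batch;

/ Write the batch splayed to today's partition,
/ enumerating symbols against the database root
path:hsym `$"/db/",string[.z.d],"/trade/";
path set .Q.en[`:/db] batch;

/ Reapply the parted attribute to the sym column on disk
@[path;`sym;`p#];
```

A real ingestion framework would also need to handle multiple batches landing in the same partition (for example by merging or re-sorting per partition at end of day), which is part of what the whitepaper addresses.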
Read the full whitepaper on the KX developer site.