March 1, 2020 - Blog

kdb+: Benefits of Horizontal Scaling

Banks and financial institutions have been using kdb+ for decades to capture and analyse large and growing volumes of data, in part because it can scale both horizontally and vertically. In this blog post, we look at some of the common factors in deciding how best to scale your kdb+ solution.

Horizontal vs vertical scaling

Vertical scaling means adding more power to an existing resource, for example attaching additional or faster storage to a server, or increasing the CPU pool available to it.

By contrast, horizontal scaling means adding more processing units or servers to the solution, increasing the number of data nodes available to the application. Think of it in terms of a server stack: we add more machines along the horizontal and more resources to an individual machine along the vertical.

Eventually, a vertically scaled implementation will hit the limits of a single machine. A horizontally scalable solution, on the other hand, lets users add servers whenever necessary, providing greater overall flexibility. Given that the data in a typical kdb+ database is already split across multiple servers, most architects aim to maximise horizontal scalability at the design phase.

How to achieve effective horizontal scaling in kdb+ services

  1. Identify data nodes 

We need to identify the data that might constitute a logical data node. These nodes should be as self-contained as possible so they can be scaled independently, while together they make up the full data universe. This might mean splitting data by region, asset class, data type, instrument or business requirement.
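To make this concrete, here is a minimal sketch of a node registry in q; the node names, regions and asset classes are illustrative assumptions rather than a prescribed layout.

    / hypothetical node registry: one row per logical data node, recording
    / the slice of the data universe that node owns
    nodes:([] node:`eqUS`eqEU`fxGlobal; region:`US`EU`GLOBAL; assetClass:`equity`equity`fx);

    / which nodes serve a given asset class?
    nodesFor:{[ac] exec node from nodes where assetClass=ac};
    nodesFor[`equity]               / `eqUS`eqEU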

  2. Define the process for adding new data nodes or reconfiguring existing data nodes 

One of the main advantages of horizontal scaling is the ability to add extra capacity to the application with limited (if any) downtime. Therefore, we need to pay particular attention to how the data nodes are configured.

It should be easy to add new data nodes to the system (either as additional copies of the existing nodes or as new data completely), and it should also be possible to reconfigure the system to utilise the existing data nodes more efficiently. For example, we may want to move some of the nodes across servers to better use memory across the entire enterprise stack or add an additional node of the current trading day to satisfy demand.
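As a simple illustration, a process registry in q could track where each node runs, so that adding a copy of a node is just another row; the hosts, ports and helper function below are assumptions.

    / hypothetical process registry: where each data node currently runs
    procs:([] node:`eqUS`eqEU; host:`srv01`srv02; port:5010 5011);

    / register an additional node, e.g. a second copy of the current
    / trading day, without touching the nodes that are already running
    addNode:{[n;h;p] `procs upsert (n;h;p)};
    addNode[`eqUSrt;`srv03;5012]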

  3. Consider process initialisation and recoverability 

When a new data node starts, it should be able to access all of the data it's responsible for as efficiently as possible. This could mean pulling data from another process or loading it from a particular storage point, but the process should be seamless and require minimal manual intervention.

If there’s a problem within a data node and it crashes, the node should be able to restart and recover automatically, so maximum capacity is restored to the system.
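A minimal q sketch of such a start-up routine is shown below; the database path is an assumption, and the error trap simply logs and exits so that an external supervisor can restart the process.

    / illustrative start-up routine: load the on-disk database this node
    / owns, trapping errors so the process can be restarted cleanly
    init:{[dbPath] @[system; "l ",1_string dbPath; {-1 "failed to load db: ",x; exit 1}]};
    init[`:/data/hdb/eqUS]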

  4. Implement storage for data persistence 

How data is persisted is a crucial factor when determining the options for scalability. For optimal performance, the data nodes may need to be located on the same server as the underlying storage. In other circumstances, the data might be replicated across multiple types of storage (including network storage) to allow the availability of data to scale while prioritising critical requests to the faster storage.
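For example, a node that owns intraday trades might persist each day's data as a date partition on its local disk using the standard .Q.dpft utility; the root path and table contents below are assumptions.

    / minimal persistence sketch: write today's trades as a date partition
    / on local disk, with the parted attribute applied to sym
    trade:([] sym:`AAPL`AAPL`MSFT; time:3#.z.p; price:170.1 170.2 410.5; size:100 200 300);
    .Q.dpft[`:/data/hdb; .z.d; `sym; `trade]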

  5. Design load balancing functionality 

Any efficient horizontally scaled architecture will have some kind of load balancing module to ensure requests are routed to the available data nodes. Beyond routing, this module can be leveraged to enrich the solution architecture: it can monitor the health of the system, how many nodes are available and the load on each node, providing early warnings for capacity or other issues.

It can also act as a service mechanism, giving clients a single point of access to a multi-node cluster, or unify data from multiple nodes into a single response for the end user. Additionally, it can apply quality-of-service rules so that users are serviced by the nodes that will optimise performance, or so that specific requests are given priority in any execution queues.
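A toy round-robin gateway in q illustrates the routing part of this role; the ports, query and synchronous dispatch are simplifying assumptions, and a production gateway would typically be asynchronous and consult node health and load before choosing a target.

    / toy gateway: open handles to the data nodes and round-robin
    / synchronous requests across them
    handles:hopen each `::5010`::5011;
    nxt:0;
    route:{[qry] h:handles nxt; nxt::(nxt+1) mod count handles; h qry};
    route "select count i from trade"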

In conclusion…

While scalability within kdb+ services can appear a complicated subject, we hope this post has provided some insight into the benefits that a horizontally scalable solution can offer. Horizontal scaling won't be the correct choice in every situation, especially in latency-sensitive use cases. Still, it can provide simplicity and ease of expansion in the right circumstances, and it should be considered at the architecture design stage.