Interview with James Lupton, CTO at Blackcore Technologies: The Role of Overclocked Servers in Financial Services and Beyond

From human traders to AI, the financial services industry has long been at the forefront of technological innovation. What makes this industry so unique is that there is always something new being tested or deployed, and overclocked servers are emerging as the next big competitive edge in the lightning-fast world of high-frequency trading (HFT).

We spoke to James Lupton, CTO at Blackcore Technologies, a leading manufacturer of high-tech servers, to discuss how financial services are deploying overclocked servers, and how this may cause a ripple effect among other key industries.

How are financial services deploying overclocked servers, and how might this cause a ripple effect among other key industries?

Overclocking is the practice of taking hardware components and pushing them beyond their standard operating speeds. For example, running the CPU (Central Processing Unit) at a higher clock rate, or tuning memory parameters for the lowest latency rather than maximum bandwidth. The process is quite complex and, if performed incorrectly, can lead to overheating, server instability or hardware damage. The key goal of overclocking is to run more instructions in a smaller time period, which can translate directly into latency reductions in the financial services sector. The practice has its genesis in gaming, but we are starting to see adoption in other industries for large compute workloads, firmware generation and complex software build processes.
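As a rough illustration of what "running at a higher clock rate" means in practice, here is a minimal sketch that reports the current core clock on a Linux host. It assumes the cpufreq sysfs interface is available; the exact path and reported values vary by kernel driver and platform, and this is not specific to any Blackcore product.

```cpp
// Minimal sketch: print the current clock of cpu0 on a Linux host.
// Assumes the cpufreq sysfs interface is present; availability and
// exact paths vary by kernel driver and platform.
#include <fstream>
#include <iostream>

int main() {
    // scaling_cur_freq reports the current frequency of cpu0 in kHz.
    std::ifstream freq_file("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    if (!freq_file) {
        std::cerr << "cpufreq sysfs entry not available on this system\n";
        return 1;
    }
    long khz = 0;
    freq_file >> khz;
    std::cout << "cpu0 current clock: " << khz / 1000.0 << " MHz\n";
    return 0;
}
```

Comparing this reading against the CPU's stock base and boost frequencies is one quick way to check whether an overclock is actually in effect.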

How have human traders traditionally participated in HFT, and how is cutting-edge technology, like overclocked servers, allowing financial firms to achieve lower latency and a positive ROI for their customers?

Within financial services, not all trading strategies are latency-critical, but most are latency-sensitive. From the days of trading pits, traders would physically edge closer to the price source simply to be able to react more quickly to trades. Nowadays, low latency is achieved with co-location, low-latency wireless networking between trading venues, and high-performance networking, FPGA, and computer technology. In most cases, a combination of one or more of these can bring a competitive edge to a trading firm. Since the origin of electronic communication networks (ECNs) almost 30 years ago, and the move of matching processes into computers rather than through human interaction, there has been a race to get as close to zero latency as possible. Being first to uncover a piece of information allows a trading firm to apply its logic first and "win" that trade. Nowadays, some very rudimentary algorithms, paired with the best technology, are known to respond in double-digit nanoseconds, which gives an idea of just how competitive the race can be.
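To put "double-digit nanoseconds" in context, the sketch below times a deliberately trivial software decision loop with std::chrono. The placeholder comparison stands in for trading logic and is purely illustrative; real tick-to-trade measurement is done with hardware timestamping, not a loop like this.

```cpp
// Minimal sketch: average the cost of a trivial software "decision" in
// nanoseconds. The comparison below is a placeholder, not trading logic.
#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    constexpr int iterations = 1'000'000;
    volatile std::int64_t best_bid = 100;   // placeholder market data
    volatile std::int64_t threshold = 99;
    int decisions = 0;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        if (best_bid > threshold) {         // stand-in for a trading rule
            ++decisions;
        }
    }
    auto end = std::chrono::steady_clock::now();

    const auto total_ns =
        std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    std::cout << "average per decision: "
              << static_cast<double>(total_ns) / iterations
              << " ns over " << decisions << " decisions\n";
    return 0;
}
```

Even this near-empty loop typically lands in the low single-digit nanoseconds per iteration, which shows how little headroom a double-digit-nanosecond response budget leaves for real work.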

How can latency impact financial institutions and their customers when it comes to electronic trading?

This process of overclocking, amongst other things, can lead to a reduction in latency, allowing trading firms to react more quickly to market events and have a higher chance of a successful trade outcome. The key goal of most electronic trading firms is to optimize latency across the complete stack, or throughout the trading process. There may be a significant benefit to running software on overclocked servers, but if your server is in a data center miles away from the stock exchange, a competing firm may be able to respond more quickly than you. So, while latency is key, it cannot be considered in isolation. Physical proximity, networking technology, hardware (or firmware) design and performance, and trading logic can all have an impact on the ability to be first to respond to a market event.
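The point about physical proximity can be made with simple arithmetic. The sketch below estimates one-way propagation delay over fibre, assuming a signal speed of roughly two-thirds of the speed of light (about 200,000 km/s); the distances are illustrative rather than tied to any particular venue.

```cpp
// Minimal sketch: back-of-envelope one-way propagation delay over fibre.
// Assumes a signal speed of ~200,000 km/s (roughly two-thirds of c).
#include <iostream>

int main() {
    const double fibre_speed_km_per_s = 200000.0;
    const double distances_km[] = {0.1, 10.0, 100.0};  // in-building, cross-town, remote DC

    for (double d : distances_km) {
        const double one_way_us = d / fibre_speed_km_per_s * 1e6;  // microseconds
        std::cout << d << " km  ->  ~" << one_way_us << " us one-way\n";
    }
    return 0;
}
```

At roughly 5 microseconds per kilometre of fibre, being 100 km from the exchange costs around 500 microseconds each way, which dwarfs any nanosecond-level gains made inside the server itself.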

What are your predictions on future financial services technology and overall adoption of overclocked servers?

The race to zero is by no means over. The firms trading at "handful of nanoseconds" latencies, which typically use FPGAs or custom-built ASICs, have little room to squeeze out further latency improvements. This is driving a wave of adoption of more complex algorithms that have historically not been suited to the quick (and simple) nature of hardware-based algorithms. Hybrid algorithms are becoming more mainstream, with partial FPGA-based execution guided by more complex software-based data processing. Hardware-based technologies such as FPGAs and ASICs have been around for 20 years at this point, and much longer outside of the finance world, but there will always be a place for reliable and consistent ways to improve the software processing components of any trading application. The days of overclocking servers for latency gains being a niche practice that required specialist resources are gone; it is now mainstream and seeing much wider deployment across a larger set of use cases. I fully expect that people will still be overclocking in the trading world 10 or 20 years down the road.

