Solving modern data problems - The age of GPU acceleration

The Siliconreview
18 November, 2020

The increased adoption of GPUs in the data analytics domain is a direct response to key data challenges currently emerging across data-driven industries, including reconciling responsiveness with flexibility, and data scale with detail.

GPUs are designed to solve more than just today’s challenges – their performance capabilities will support the inevitable evolution of analytics into the next decade and beyond, allowing businesses to innovate with future-proofed sustainability.

Legacy databases are restricting data analytics capabilities

Digital transformation may have emerged in the 90s, but it remains a top trend for innovative businesses across the globe. While the majority of organisations have already replaced many manual processes with their technological counterparts, the concept of continuous digital and procedural advancement is still a prevalent, and necessary, mindset for leading players.

However, when it comes to data analytics, implementing more sophisticated techniques, improving capabilities and onboarding advanced technology is a challenge for even the most agile organisations.

Primarily, this is due to three realities:

  1. Because of the rapid adoption of digital transformation itself – i.e. the rise of data-intensive movements such as connectivity, IoT, and globalisation – there has been a massive increase in the volume and speed of data being produced.
  2. Data has been commoditised into an independent revenue stream, with more organisations wanting to either sell data or use it to gain customer and market insight. As a result, businesses are collecting more data than ever before.
  3. Most companies heavily invested in data technologies years, or even decades, before this market evolution began. Many legacy CPU systems weren’t designed to support the more sophisticated functionality that new analytics solutions are offering, such as geospatial mapping, AI workloads, in-context data streaming, and VR/AR visualisation.

These situations can have tangible and restrictive impacts on the overall performance of data analytics solutions, ultimately forcing businesses to make compromises between responsiveness, ad hoc analysis, and data volume.

In dynamic markets, organisations need to process their data and gain insights as fast as possible. To do so in a timely way, organisations have to pre-aggregate their large datasets, limit the scope of their queries, and/or reduce the volume of their datasets.

Organisations also want to perform ad hoc analysis to answer new questions as the market changes; however, this isn’t possible with pre-processed datasets. The alternative is no better: spending months building new queries into pre-defined datasets.
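The pre-aggregation trade-off described above can be sketched in a few lines of Python. This is a conceptual illustration with made-up data, not anything specific to Brytlyt: once a rollup discards a dimension, a new ad hoc question about that dimension can only be answered from the raw rows.

```python
from collections import defaultdict

# Hypothetical raw event data: (region, product, revenue).
raw_rows = [
    ("EU", "widget", 120.0),
    ("EU", "gadget", 80.0),
    ("US", "widget", 200.0),
    ("US", "gadget", 50.0),
]

# Pre-aggregation: roll up revenue by region only, discarding product detail.
rollup_by_region = defaultdict(float)
for region, _product, revenue in raw_rows:
    rollup_by_region[region] += revenue

# The rollup answers the question it was built for...
print(dict(rollup_by_region))   # {'EU': 200.0, 'US': 250.0}

# ...but a new ad hoc question ("revenue by product?") cannot be answered
# from the rollup, because the product dimension was aggregated away.
# It requires going back to the raw rows:
rollup_by_product = defaultdict(float)
for _region, product, revenue in raw_rows:
    rollup_by_product[product] += revenue
print(dict(rollup_by_product))  # {'widget': 320.0, 'gadget': 130.0}
```

A database fast enough to scan the raw rows on demand makes the rollup step unnecessary, which is the core appeal of skipping pre-processing altogether.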

What this means is that – despite the advancement of the analytics market – organisations are being held back from implementing digital transformation in this area by their underlying legacy systems.

The rise of GPU accelerated analytics solutions

Originally used for fast graphics rendering in gaming applications, GPUs have since made headway in the analytics domain. Since the early 2010s, GPU databases have been used to address the challenge of CPU systems overloaded by intense data workloads.

Deployed as clusters, GPU-based databases are optimised to process multi-billion row datasets in parallel. They can process large and evolving datasets, uncovering extreme detail in seconds. This transformation in speed, in comparison to CPUs, also unlocks agile, speed-of-thought analysis without a need for pre-processing.
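The partition-and-combine pattern behind this parallel processing can be sketched conceptually in Python. This is a hand-rolled thread-based illustration of data parallelism, not Brytlyt's GPU implementation: the dataset is split into chunks, each chunk is reduced independently, and the partial results are merged – the same shape of computation a GPU applies across thousands of cores at once.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker reduces its partition independently of the others.
    return sum(chunk)

data = list(range(1_000_000))  # stand-in for a large column of values
n_workers = 4
chunk_size = len(data) // n_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(partial_sum, chunks))

# Combine step: merge the partial results into the final answer.
total = sum(partials)
print(total)  # 499999500000
```

The key property is that no chunk depends on any other, so adding more workers (or GPU cores) shortens the wall-clock time without changing the result.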

GPU databases have gained a superior reputation in the data market with leading technology players, such as Amazon, IBM, and Dell, adopting GPU solutions to gain increased insights. Undoubtedly, the demand for GPUs will keep growing – supported by the standardisation of massive data workloads, the demand for deep learning and parallel technology, and the expansion of analytics into new and different markets.

What Brytlyt do differently

The Brytlyt accelerated analytics platform is underpinned by advanced GPU technology for ‘speed of thought’ analytics performance. Consisting of a database built on PostgreSQL, a visualisation workbench and embedded AI capabilities, the Brytlyt platform helps data-driven businesses overcome data compromises. Brytlyt can deliver insights from vast and streaming datasets, in the required context within milliseconds, and without any pre-aggregation or pre-processing of data.

Our aim is to not only help businesses harness their rapidly growing datasets and overcome the prevalent data challenges they’re facing, but also to help future-proof how businesses build analytics infrastructures so they can rise to new challenges with conviction.

By building a scalable, adaptable, and easily integrated platform – we ensure that businesses can confidently deliver the data intensive workloads they need to solve tomorrow’s problems, as well as today’s, without having to replace existing systems.


Brytlyt offers a highly efficient platform for parallel JOIN processing on GPU, allowing businesses to fully exploit the power of relational database concepts and bring context for truly meaningful, speed-of-thought analytics.
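To see why JOINs parallelise well, consider the classic hash join: a build phase hashes the smaller table on the join key, then a probe phase looks up each row of the larger table independently. The Python sketch below, using hypothetical customer and order tables, illustrates the idea only; it is not Brytlyt's actual GPU implementation.

```python
from collections import defaultdict

# Hypothetical tables: customers and orders, joined on customer_id.
customers = [(1, "Acme"), (2, "Globex"), (3, "Initech")]
orders = [(101, 1, 250.0), (102, 3, 99.0), (103, 1, 42.0)]

# Build phase: hash the smaller table on the join key.
hash_table = defaultdict(list)
for customer_id, name in customers:
    hash_table[customer_id].append(name)

# Probe phase: each order row probes the hash table independently,
# which is why this phase maps naturally onto many parallel GPU threads.
joined = []
for order_id, customer_id, amount in orders:
    for name in hash_table.get(customer_id, []):
        joined.append((order_id, name, amount))

print(joined)
# [(101, 'Acme', 250.0), (102, 'Initech', 99.0), (103, 'Acme', 42.0)]
```

Because each probe is an independent lookup, the work divides cleanly across thousands of cores, which is what makes GPU-accelerated JOINs on multi-billion row tables practical.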


Brytlyt is built on the open-source database PostgreSQL, a platform used and extended by products and organisations across the world, including Amazon Redshift, CitusDB, IBM, TIBCO, and Tableau. This allows Brytlyt to connect easily with any product that has a native PostgreSQL connector, or to integrate with Python and other similar packages through the Brytlyt platform.

Benefits of building on PostgreSQL:

  • Brytlyt integrates seamlessly with common analytics or visualisation tools to accelerate processing and unlock sophisticated analytics capabilities.
  • Brytlyt can be up and running in hours, allowing users to gain insights within seconds of implementation.
  • Users can continue to use the interface of their current solution, so there’s no need for retraining or onboarding.
  • Users will always have access to their solution’s most up-to-date capabilities and features, as we stay current with the latest version of PostgreSQL.

Embedded AI

Brytlyt has AI functionality fully embedded into our database solution, so users can run PyTorch-based AI workloads and standard PostgreSQL database workloads simultaneously.

Contact us to see how your organisation can transform with Brytlyt, or read our insights.