30 Smartest Companies of the Year 2024
A leader in designing cutting-edge processors and smart memories, powering AI, ML, and HPC innovations worldwide: Abacus Semiconductor Corporation
The Silicon Review
Abacus Semiconductor Corporation was founded by Axel Kloth, a visionary in the field of high-performance computing. With a background deeply rooted in the use of supercomputers, Kloth had long been dissatisfied with the industry’s performance promises. His journey into the world of computing had revealed a consistent gap between expected and actual performance, especially when it came to leveraging CPUs for complex computations. This gap was largely due to the necessity of programming close to the hardware, requiring expertise in languages like C, C++, or even assembly, which was not always feasible or efficient for all users. The industry’s shift to computing accelerated by General-Purpose Graphics Processing Units (GPGPUs) brought some improvements, but it introduced new challenges. GPGPUs offered a boost in performance but required a paradigm shift: workloads had to be orchestrated by CPUs and distributed through Application Programming Interfaces (APIs) such as CUDA. While this made accelerated computing more accessible, it also created dependencies on specific vendors and limited the use of potentially superior accelerators.
Kloth recognized the limitations inherent in this approach, including the lack of linear scale-out capabilities. The disparity in computational performance became more pronounced as the number of GPGPUs increased, which was particularly problematic in large data centers housing thousands of servers. This unsustainable trajectory prompted Kloth to rethink the architecture of processors and accelerators. Driven by the need for innovation, Kloth led Abacus Semiconductor Corporation back to the drawing board to develop a groundbreaking architecture. This new patent-pending technology, known as heterogeneous accelerated compute, aims to deliver a more linear performance scale-out. By utilizing a standardized interconnect for processors, accelerators, and smart multi-homed memories, this architecture promises low-latency communication and high bandwidth, potentially revolutionizing the field of high-performance computing.
In conversation with Axel Kloth, Founder, President, and CEO of Abacus Semiconductor Corporation
Q. Can you briefly explain your products?
We are excited to share that we’ve made significant progress in the design of our Server-on-a-Chip, our smart multi-homed memory, and our math accelerator. The Server-on-a-Chip is a revolutionary development in that it integrates nearly all the components required to build a server onto a single chip. It includes application processor cores, network offload processor cores and accelerators, and mass storage offload cores, as well as a variety of hardware accelerators for storage operations. Additionally, it features several DRAM controllers, which are ideal for customers who prioritize cost efficiency and prefer to use standard DDR5 DRAM DIMMs. Our scale-out interface allows seamless connectivity with other Server-on-a-Chip units, smart multi-homed memories, or accelerators. This innovative chip can support high-performance, cost-effective servers optimized for Linux, Apache, MySQL, and PHP/Perl applications, which account for approximately 80% of all internet data traffic. Furthermore, it can be used to build firewalls with network filtering, mass storage appliances, and proxy servers, delivering high-performance solutions with minimal software requirements. It also serves as an advanced I/O frontend for computationally intensive applications where both I/O and data processing are critical to meeting computational demands.
In conjunction with our math accelerator and smart multi-homed memory, it is possible to construct an AI supercomputer with a more linear performance scale-out than traditional solutions. Our smart multi-homed memory is an intelligent memory subsystem on a chip that mimics the functionality of a large memory without the performance-draining refresh requirements of current DRAM DIMMs. It allows for data sharing across processors, cores, and accelerators, maintaining cache coherency across all connected units, both locally and in conjunction with shared units. Moreover, it includes smart features that ensure data integrity and correctness, supporting specific operating system kernel functions for memory management. This functionality simplifies tasks for OS kernel programmers by streamlining the process of maintaining accurate and updated memory lists for the operating system and applications. Finally, our math accelerator is specifically designed to enhance large-scale mathematical operations, including vector, matrix, tensor math, and various mathematical transforms developed over time by mathematicians and physicists.
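To make those operations concrete, the short sketch below (our illustration in plain C99, not Abacus code, and the function name is hypothetical) shows a naive discrete Fourier transform, the kind of dense O(N²) loop nest that a math accelerator of this sort is intended to take off the general-purpose application cores.

```c
/*
 * Illustrative sketch only, not Abacus code: a naive O(N^2) discrete
 * Fourier transform in plain C99.  Transforms like this are the dense,
 * repetitive arithmetic that a dedicated math accelerator is designed
 * to execute instead of general-purpose application cores.
 */
#include <complex.h>
#include <math.h>
#include <stddef.h>

/* X[k] = sum over n of x[n] * exp(-2*pi*i*k*n/N), for k = 0 .. N-1 */
void naive_dft(const double complex *x, double complex *X, size_t N)
{
    const double pi = acos(-1.0);
    for (size_t k = 0; k < N; k++) {
        double complex sum = 0.0;
        for (size_t n = 0; n < N; n++) {
            double angle = -2.0 * pi * (double)k * (double)n / (double)N;
            sum += x[n] * cexp(I * angle);
        }
        X[k] = sum;
    }
}
```

Even this textbook version hints at why such kernels are better written once, verified, and shared than re-implemented for every new processor, a point Kloth returns to in the next answer.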
Q. What role does open source play in your development strategy, especially in the context of frameworks like OpenCL and OpenACC for your Math Processor?
We believe that the use of open source is essential. The more open-source applications and APIs exist, the easier it is for programmers to do their work. They are more efficient and effective, and they produce fewer bugs. It also democratizes access to functions that otherwise would be hard to implement. No one wants to code yet another matrix multiplication implementation on processor XYZ. The same is true for a Fourier or Laplace transform – it is hard enough to code, but even more difficult to verify that it is mathematically as correct as the data format supports. As a result, making those functions available will simplify access for everyone. We admit that CUDA has made all of these math functions available, but only to CUDA and therefore NVIDIA users. That is great for NVIDIA as it locks programmers and users in, but it is not great for those who do not have access to NVIDIA products.
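To make that point concrete, here is a minimal sketch (our illustration, assuming a compiler that implements the OpenACC standard; the function name is hypothetical and this is not Abacus or vendor code) of a dense matrix multiplication annotated with OpenACC directives. The same source can be offloaded to whichever accelerator the compiler targets, with no vendor-specific kernel code such as CUDA.

```c
/*
 * Illustrative sketch only: C = A * B for n x n row-major matrices,
 * annotated with OpenACC directives.  Any compiler that implements the
 * open OpenACC standard can offload this loop nest to its target
 * accelerator; no vendor-specific kernel (e.g. CUDA) is required.
 */
#include <stddef.h>

void matmul_openacc(const double *A, const double *B, double *C, size_t n)
{
    /* Copy inputs to the accelerator, copy the result back when done. */
    #pragma acc data copyin(A[0:n*n], B[0:n*n]) copyout(C[0:n*n])
    {
        /* Parallelize the two outer loops across the accelerator. */
        #pragma acc parallel loop collapse(2)
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < n; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * n + j];
                C[i * n + j] = sum;
            }
        }
    }
}
```

OpenCL plays a similar role at a lower level of abstraction, exposing kernels and device memory explicitly rather than through directives.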
Q. With offices in both Silicon Valley and Germany, how do you leverage the strengths of these locations to drive innovation and meet the needs of your global customers?
We are certain that the largest markets for infrastructure AI and for High Performance Compute are in the US and in Europe, and I do not think that this is going to change any time soon. There is also the question of talent. Certain talent is very hard to come by in Silicon Valley these days – it either does not exist anymore, or is very expensive. In Europe, we see pockets of talent in those areas that we need, at very reasonable levels of cost to us. That also shortens the feedback loop with customers from Europe. If they need a certain feature, they can contact our German office directly, which can translate that directly into product features, whether in hardware or in firmware.
Q. Can you provide examples of specific applications or industries that have benefited from your high-density memory subsystems and processors?
As the term may imply, we see a lot of demand in AI, particularly in Generative AI in the backend, and there very specifically in training Large Language Models, or LLMs. Another huge trend is what industry experts call digital twins. A digital twin is a representation of a system inside a computer, taking all of the interactions within that system and between the system and the outside into consideration. We have seen requirements in the petabyte range for relatively small digital twins that have a very large solution space and that interact via very many channels with the outside.
Q. What does the future hold for your company and its customers? Are exciting things on the way?
We are talking to prospective customers and have a few new tricks up our sleeves…
Meet the leader behind the success of Abacus Semiconductor Corporation
Axel Kloth is the Founder, President, and CEO of Abacus Semiconductor Corporation, bringing his expertise as a physicist and computer scientist to the helm. A serial entrepreneur, Axel has founded multiple companies, including an AI-enhanced security processor company and SSRLabs, which focused on HPC. He holds about 40 patents, covering pioneering advancements in image processing and convolutional neural network processors.
As a Venture Partner at Pegasus Tech Ventures, Axel also conducts technology due diligence, assessing the technical feasibility of startups and their products in the firm's deal pipeline.