The Silicon Review
Machine learning is a core subarea of artificial intelligence: it enables computers to learn from data without being explicitly programmed. When fed new data, such systems learn, adapt, and improve on their own. Thanks to new computing technologies, machine learning today is not like the machine learning of the past. The field grew out of pattern recognition and the theory that computers can learn to perform tasks without being programmed for them; researchers interested in artificial intelligence wanted to see whether computers could learn from data. The iterative aspect of machine learning is essential: as models are exposed to new data, they adapt independently, learning from previous computations to produce reliable, repeatable decisions and results. It is not a new science, but one that has gained fresh momentum.
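The "iterative aspect" described above can be illustrated with a minimal sketch (not Graphcore code): an online linear model that adjusts its parameters each time a new example arrives, with no task-specific rules programmed in. The model, data, and learning rate here are illustrative assumptions.

```python
def sgd_step(w, b, x, y, lr=0.05):
    """One stochastic-gradient update for a 1-D linear model y ~ w*x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# A stream of examples following the hidden rule y = 2x + 1; the model
# must discover that rule purely from the data it is fed.
stream = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0] * 200]

w, b = 0.0, 0.0
for x, y in stream:      # each new example nudges the parameters
    w, b = sgd_step(w, b, x, y)

print(round(w, 2), round(b, 2))  # w, b approach 2 and 1
```

Each pass over new data refines the same parameters, which is why exposure to more data lets the model "independently adapt" rather than requiring reprogramming.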
Graphcore is a semiconductor company that develops accelerators for AI and machine learning. Its Intelligence Processing Unit (IPU) is a massively parallel processor designed from the ground up for artificial intelligence, holding the complete machine learning model inside the processor. The IPU's unique architecture means developers can run current machine learning models orders of magnitude faster. More importantly, it lets AI researchers undertake entirely new kinds of work, not possible with current technologies, to drive the next significant breakthroughs in general machine intelligence. The company aims for its IPU technology to become the worldwide standard for artificial intelligence compute, with performance that is transformative across industries and sectors, whether you are a medical researcher, a roboticist, or building autonomous cars.
Benchmark results and products furnished by Graphcore
Natural Language Processing – BERT: The IPU delivers over 25% faster time-to-train with the BERT language model, training BERT-base in 36.3 hours on an IPU Server system with seven C2 IPU-Processor PCIe cards, each carrying two IPUs. For BERT inference, the IPU delivers more than 2x higher throughput at the lowest latency, an unprecedented speedup.
Computer Vision – EfficientNet: The Graphcore C2 IPU-Processor PCIe card achieves 15x higher throughput and 14x lower latency than a leading alternative processor. High performance at the lowest possible latency is key in many of the important use cases today, such as visual search engines and medical imaging.
The Dell DSS8440 Graphcore IPU Server: The Intelligence Processing Unit (IPU) has been designed from the ground up to support breakthroughs in machine intelligence. Its production-ready Poplar® software stack gives developers a robust, efficient, scalable, and high-performance solution that enables new AI innovations. Customers can tackle their most difficult AI workloads by accelerating more complex models and developing entirely new techniques.
Computer Vision – ResNext: The Graphcore C2 IPU-Processor PCIe card achieves 7x higher throughput at 24x lower latency compared to a leading alternative processor. High throughput at the lowest possible latency is key in many of today's important use cases. The IPU is also designed to scale: models are getting larger, and demand for AI compute is growing exponentially. High-bandwidth IPU-Links™ allow multiple IPUs to be clustered, supporting huge models. Legacy architectures struggle with non-aligned and sparse data accesses; the IPU has been designed to support complex data access efficiently and at much higher speeds, which will be critical to running gigantic, next-generation models efficiently.
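The sparse, irregular access pattern mentioned above can be illustrated with a small sketch (illustrative only, not Graphcore code). A sparse matrix-vector product reads memory at scattered, data-dependent indices rather than in a contiguous sweep, which is exactly the pattern that cache-line-oriented legacy architectures handle poorly. The CSR-like layout below is an assumption for the example.

```python
# A 4x4 matrix with most entries zero, stored row by row as
# (column_index, value) pairs -- a simplified CSR-style layout.
sparse_rows = [
    [(0, 2.0), (3, 1.0)],
    [(2, 5.0)],
    [],
    [(1, 4.0), (3, 3.0)],
]

x = [1.0, 2.0, 3.0, 4.0]

def spmv(rows, vec):
    """Sparse matrix-vector product: y[i] = sum of v * vec[j] over stored (j, v)."""
    y = []
    for row in rows:
        # each vec[j] read jumps to an arbitrary, data-dependent index
        y.append(sum((v * vec[j] for j, v in row), 0.0))
    return y

print(spmv(sparse_rows, x))  # -> [6.0, 15.0, 0.0, 20.0]
```

Only the nonzero entries are stored and touched, so the work scales with the number of nonzeros, but the `vec[j]` lookups land at unpredictable addresses, which is what the fine-grained memory system of an IPU-style architecture is meant to absorb.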
Poplar®: The Poplar SDK is a complete software stack, co-designed from scratch with the IPU, that implements Graphcore's graph toolchain in an easy-to-use, flexible software development environment. At a high level, Poplar is fully integrated with standard machine learning frameworks, so developers can port existing models quickly and get up and running out of the box with new applications in a familiar environment. Beneath these frameworks sits Poplar itself: for developers who want full control to extract maximum performance from the IPU, it enables direct IPU programming in Python and C++.
PopVision™ Analysis Tools: The PopVision™ family of analysis tools helps developers gain a deep understanding of how applications are performing and utilizing the IPU. Its user-friendly graphical interface helps developers understand their code's inner workings.
The pre-eminent leader behind the supremacy of Graphcore
Nigel Toon is the Co-Founder and Chief Executive Officer of Graphcore. Before founding Graphcore, Nigel was CEO of two VC-backed silicon companies: Picochip, which was sold to Mindspeed in 2012, and, most recently, XMOS. Graphcore was incubated for two years before being established as a separate entity in 2016. Earlier, he was co-founder of Icera, a 3G cellular modem chip company, where he led Sales and Marketing and sat on the Board of Directors. Icera was sold to NVIDIA in 2011 for $435M.
Before Icera, he was Vice President and General Manager at Altera Corporation, where he spent 13 years and was responsible for establishing and building the European business unit, which grew to over $400m in annual revenue. He was a non-executive director at Imagination Technologies PLC until its acquisition in 2017 and is the author of three patents.