
Google launches custom networking CPU with parallel computing links

siliconreview

It appears Google has quietly built an in-house processor geared toward parallel computing and networking. Evidence of the CPU, destined for internal use only, emerged today in source code patches for the LLVM C/C++ compiler that allow programmers to produce executables for the hardware. Not that you can get your hands on any. Getting the patches accepted into LLVM, though, will make life much easier for Google staff, as it eases the process of keeping up to date with the main toolchain code. Looking at the specs, the processor core, dubbed “Lanai”, is relatively simple: it’s more like a well-equipped microcontroller and is unlikely to run heavy compute workloads on its own. It could, however, serve as a building block in a massively parallel computer.

Lanai is described as a simple in-order 32-bit processor with 32 32-bit registers, including: two fixed-value registers (one probably being zero); four state registers, including the program counter, stack pointer, and frame pointer; and two registers reserved for threading support. There is no floating-point hardware, so it won’t be juggling tasks involving lots of math. Google software engineer Jacques Pienaar said the blueprints for Lanai were derived from the textbook Parallel Computer Architecture: A Hardware/Software Approach [PDF], which describes how to build machines that process huge amounts of data efficiently and simultaneously in parallel. We’ve heard that Google uses, to some degree, customized Nvidia chips for its machine-learning systems. The web giant is also toying with ARM and POWER architectures in its data centers, and poking around RISC-V, too. We’ve known for some time, therefore, that Google is exploring the world of chip design; it’s still eyebrow-raising to spot such efforts in public.
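To make that register layout concrete, here is a toy Python sketch of a register file matching the description above: 32 registers of 32 bits, integer-only, with a couple of fixed-value registers. The specific index assignments (which register is hard-wired to zero, where the PC and stack pointer live) are illustrative assumptions, not documented Lanai facts.

```python
MASK32 = 0xFFFFFFFF  # every value wraps to 32 bits

class ToyRegisterFile:
    """Toy model of a 32 x 32-bit register file, per the article's description."""

    FIXED = {0: 0, 1: 1}                   # assumed: two fixed-value registers, r0 = 0
    SPECIAL = {"pc": 2, "sp": 4, "fp": 5}  # assumed indices for the named state registers

    def __init__(self):
        self.regs = [0] * 32
        for idx, val in self.FIXED.items():
            self.regs[idx] = val

    def read(self, idx):
        return self.regs[idx]

    def write(self, idx, value):
        if idx in self.FIXED:
            return  # writes to fixed-value registers are silently ignored
        self.regs[idx] = value & MASK32  # integer-only: there is no FP register file

rf = ToyRegisterFile()
rf.write(10, 0x1_0000_0005)  # value wider than 32 bits wraps
print(rf.read(10))           # 5
rf.write(0, 123)             # ignored: r0 stays fixed at zero
print(rf.read(0))            # 0
```

A fixed zero register is a common RISC design choice: it gives the ISA a free operand for clearing registers, comparisons against zero, and synthesizing moves without extra instructions.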

“This is internal hardware for us, so there’s not a lot [of information] we can share, and you can’t really grab a version of the hardware,” said Googler Chandler Carruth. “We’re working on the backend a bunch, and it didn’t make sense to keep it walled off. Especially if there is anything that can be reused in other backends and/or if there is any common infrastructure we need, this makes it easy to test.”

Although the source code updates make no mention of a vendor, the Googlers are using Myricom’s LANai linker, suggesting the Lanai we’ve glimpsed today is a custom spin of Myricom’s high-end network controllers of the same name. In 2013, Myricom’s assets were bought by Massachusetts-based CSPi, which builds hardware for hyper-scale cloud providers, and hyper-converged compute and storage hardware for data centers. Google’s Lanai could well be a heavily customized programmable network controller based on Myricom’s designs. Its purpose would be to build intelligence into the fabric of the internet giant’s data centers, perhaps to weave a complex software-defined network for its server warehouses.