
May Edition 2022

Lightelligence – Building Optical Chips That Empower the Next Generation of High-Performance Computing Tasks


Today, quantum computing holds out hope for a new technological leap, but there is another option on which many are pinning their hopes: optical computing, which replaces electronics (electrons) with light (photons). The end of Moore's law is a natural consequence of physics: to pack more transistors into the same space, they have to be shrunk down, which increases their speed while simultaneously reducing their energy consumption. The miniaturization of silicon transistors has succeeded in breaking the 7-nanometre barrier, once considered the limit, but this reduction cannot continue indefinitely. And although more powerful systems can always be built by adding more transistors, doing so slows processing and increases the heat the chips generate. Hence the promise of optical computing: photons move at the speed of light, faster than electrons in a wire. Optical technology is also no newcomer to our lives: the vast global traffic on today's information highways travels over fibre optic channels.

Lightelligence is one such company developing a new computing paradigm, one that leverages the speed, power, and efficiency of light to drive the next generation of innovations. Driven by an open and transparent culture of disruptive thinkers and imaginative doers, Lightelligence is leading the way in developing workable photonic solutions that promise orders-of-magnitude gains in computing speed and a positive impact on everyday life. With a focus on real-world solutions and a collaborative, global perspective, Lightelligence is transforming cutting-edge thinking into world-changing breakthroughs. Thanks to a team of more than 150 technical experts across the globe, Lightelligence remains the world's only company to demonstrate fully integrated optical computing systems working at speed.

Lightelligence: Solving Optical Computing Problems at the Speed of Light

Integrated Photonics: Artificial neural networks have dramatically improved performance on many machine-learning tasks, including speech and image recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. The company has proposed a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks.
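The core idea behind such fully optical neural networks, as described in the company's published research, is that any weight matrix can be factored (via singular value decomposition) into two unitary matrices and a diagonal one, each of which can be realized by passive photonic elements. The sketch below illustrates the mathematical equivalence only; the matrix sizes and variable names are illustrative, not taken from any Lightelligence design.

```python
import numpy as np

# Hedged sketch: factor a weight matrix M with an SVD, M = U @ diag(s) @ Vh.
# The unitaries U and Vh correspond to interferometer meshes and diag(s) to
# attenuation/amplification stages in a photonic implementation.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))          # toy 4x4 weight matrix

U, s, Vh = np.linalg.svd(M)              # U, Vh unitary; s singular values

x = rng.standard_normal(4)               # toy input signal
y_optical = U @ (np.diag(s) @ (Vh @ x))  # three passive optical stages
y_direct = M @ x                         # ordinary electronic matrix product

# The staged "optical" computation reproduces the direct matrix product.
assert np.allclose(y_optical, y_direct)
```

The point of the factorization is that the linear part of each layer reduces to operations light can perform natively, leaving only the nonlinearity to other hardware.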

Gated Orthogonal Recurrent Units: On Learning to Forget: The company has developed a novel recurrent neural network (RNN) model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in their memory. It achieves this by extending unitary RNNs with a gating mechanism. The model outperforms LSTMs, GRUs, and unitary RNNs on several long-term dependency benchmark tasks. The team empirically shows both that orthogonal/unitary RNNs lack the ability to forget and that GORU can simultaneously remember long-term dependencies while forgetting irrelevant information, an ability that plays an important role in recurrent neural networks. They provide competitive results, along with an analysis of the model, on many natural sequential tasks, including bAbI question answering, TIMIT speech spectrum prediction, Penn TreeBank, and synthetic tasks that involve long-term dependencies, such as algorithmic, parenthesis, denoising, and copying tasks.
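The combination described above can be sketched in a few lines: an orthogonal transition matrix preserves the norm of the hidden state (so old information survives), while a learned gate interpolates between the old state and a new candidate (so the cell can forget). This is a simplified toy illustration of the gating idea, not the paper's exact GORU equations; all weights and shapes here are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal transition
Wg = 0.1 * rng.standard_normal((n, n))            # toy gate weights
Wx = 0.1 * rng.standard_normal((n, n))            # toy input weights

def gated_orthogonal_step(h, x):
    g = sigmoid(Wg @ h)                  # forget gate, elementwise in (0, 1)
    candidate = np.tanh(Q @ h + Wx @ x)  # norm-preserving recurrence + input
    return g * candidate + (1 - g) * h   # gate blends new info with old state

h = np.zeros(n)
for _ in range(100):                     # run many steps on random inputs
    h = gated_orthogonal_step(h, rng.standard_normal(n))

assert np.all(np.isfinite(h))            # state stays bounded over time
```

Because the candidate is bounded by tanh and the gate is a convex combination, the hidden state cannot blow up, while the orthogonal matrix avoids the contraction that makes ordinary RNNs forget everything.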

Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs: Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data. This approach appears particularly promising for recurrent neural networks (RNNs).

Deep Learning Algorithm: Building on this approach, the company has presented a new architecture for implementing Efficient Unitary Neural Networks (EUNNs).
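Why unitary matrices help can be seen directly: repeated multiplication by a unitary (orthogonal, in the real case) matrix preserves the norm of a vector exactly, whereas a general matrix shrinks or amplifies it, which is the mechanism behind vanishing and exploding gradients. The numbers below are a toy demonstration, not results from any EUNN paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # orthogonal (unitary)
A = 1.1 / np.sqrt(n) * rng.standard_normal((n, n))  # general, slightly expansive

v0 = rng.standard_normal(n)
v_unitary, v_general = v0.copy(), v0.copy()
for _ in range(200):                  # simulate 200 recurrent time steps
    v_unitary = Q @ v_unitary         # norm preserved at every step
    v_general = A @ v_general         # norm drifts with A's spectrum

# The unitary product keeps the signal at its original scale exactly
# (up to floating-point error); the general product does not.
assert np.isclose(np.linalg.norm(v_unitary), np.linalg.norm(v0))
```

An EUNN parameterizes such unitary matrices so they stay exactly unitary during training while remaining cheap to apply, giving gradients that neither vanish nor explode across long sequences.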

Dr. Yichen Shen | Founder and CEO

Yichen received his PhD in Physics from MIT in 2016, where his research focused on nanophotonics and artificial intelligence. He has published 40 peer-reviewed journal papers and filed 20 US patents, including first-authored papers in Science, Nature Photonics, and ICML. Yichen has been honored on the Forbes 30 Under 30 and MIT Technology Review 35 Innovators Under 35 lists.

“By processing information with light, our chips offer ultra-high speed, low latency, and low power consumption, representing orders of magnitude improvement over traditional electronic architectures.”
