
50 Fastest Growing Companies of the Year 2023

Latent AI, Inc. – A dedicated edge MLOps platform for delivering optimized and secured models more quickly

[Photo: Jags Kandasamy, CEO, Latent AI, Inc.]

There's a new wave of automation being enabled by the combination of machine learning and smart devices. ML-enabled devices will have a profound impact on our daily lives, from smart fridges to cashierless checkout and self-driving cars. With the complexity of use cases and the number of devices increasing, we'll have to adopt new strategies to deploy those ML capabilities to users and to manage them. As more organizations adopt ML, the need for model management and operations has grown drastically, giving birth to MLOps. Alongside this is the surge in the Internet of Things: according to Statista, global IoT spending is projected to reach 1.1 trillion US dollars by 2023, and the number of active IoT-connected devices is projected to reach 30.9 billion by 2025.

Latent AI, Inc. is an early-stage venture spinout of SRI International, dedicated to building solutions that enable the adaptive edge to transform AI processing. Latent AI is well funded by industry-leading investors, with support from Fortune 500 clients. The Latent AI Efficient Inference Platform (LEIP™) brings AI to the edge by optimizing for compute, energy, and memory without requiring changes to existing AI/ML infrastructure and frameworks. Latent AI believes in a vibrant and sustainable future driven by the power of AI and the promise of edge computing. Its mission is to deliver on the vast potential of edge AI with solutions that are efficient, practical, and useful. Latent AI helps a variety of federal and commercial organizations get the most from their edge AI with an automated edge MLOps pipeline that creates ultra-efficient, compressed, and secured edge models at scale while removing maintenance and configuration concerns.

Latent AI Efficient Inference Platform (LEIP)

There are few things that can increase an organization's bottom line like an edge AI implementation that delivers on its potential. Imagine a tractor equipped with AI computer vision that must limit its speed to 20 mph in order to recognize what is a weed and what is a crop. If the inference speed for weed detection can be increased by only 10%, the real-world gains can be far larger: the tractor can double its speed to 40 mph, doubling its production while reducing overall fuel use.
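The underlying arithmetic is simple: inference latency bounds how fast the vehicle can move while still inspecting every strip of ground. Here is a minimal back-of-envelope sketch using illustrative numbers of my own, not Latent AI's (the coverage-per-frame figure is an assumption); system-level effects, such as crossing a usable-speed threshold, are what turn a small latency win into the larger gains described above:

```python
# Back-of-envelope: inference latency bounds ground speed.
# All numbers are illustrative assumptions, not Latent AI figures.

MPS_TO_MPH = 2.23694  # meters per second -> miles per hour

def max_speed_mph(inference_latency_s: float,
                  coverage_per_frame_m: float = 0.5) -> float:
    """Fastest ground speed at which consecutive inferences still
    inspect every strip of ground the camera passes over."""
    return (coverage_per_frame_m / inference_latency_s) * MPS_TO_MPH

baseline = max_speed_mph(0.056)        # ~18 fps detection pipeline -> ~20 mph
faster = max_speed_mph(0.056 / 1.10)   # same pipeline, 10% faster inference
print(f"baseline: {baseline:.1f} mph, with 10% speedup: {faster:.1f} mph")
```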

Edge AI is so promising because it can enable organizations to make exponential gains from incremental improvements. The problem is that most organizations are so lured by the promise of AI that they rush past how littered the landscape is with failed projects. According to Gartner, only 53% of projects make it to production. In fact, a full 85% of AI projects fail to meet their initial business goals. Those numbers reveal exactly how challenging edge AI implementations are. Because edge AI requires highly optimized models to work efficiently, ML engineers have to go through a frustrating process that includes:

  • Overcoming resource constraints and making trade-offs between model accuracy, inference speed, size, and memory; power and cost concerns only add to the optimization challenge (see the quantization sketch after this list).
  • Tackling the idiosyncrasies of different hardware, compilers, and development frameworks. In fact, two devices from the same family can routinely return entirely different results when running the same optimization algorithms on the same model.
  • Diagnosing and resolving issues caused by a continuously changing data and model landscape as real-world feedback comes in.
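As a concrete illustration of the first trade-off, here is a minimal sketch using PyTorch's stock post-training dynamic quantization (generic tooling, not Latent AI's) to shrink a toy model to int8 weights. The model is illustrative; the size reduction it prints is the quantity that gets traded against any accuracy drop measured on a validation set:

```python
import io
import torch
import torch.nn as nn

# A small example network standing in for a real edge model.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    """Serialized size of the model's parameters."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32 size: {size_bytes(model):,} bytes")
print(f"int8 size: {size_bytes(quantized):,} bytes")
# Accuracy must be re-measured after quantizing; the size win here
# is what gets weighed against any accuracy loss.
```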

Latent AI created the Latent AI Efficient Inference Platform (LEIP) to empower AI developers with adaptive, on-premise software tools that can compile, compress, and deploy AI models for any hardware target, framework, or OS. While LEIP provides ML engineers with everything they need for success, the process still requires configuration to get the best results. Quantization algorithms (symmetric vs. asymmetric, per-tensor vs. per-channel, etc.), data layouts (channel-first NCHW vs. channel-last NHWC), and compiler optimizations (graph partitioning, operator fusion, math kernel selection) all have to be configured, and additional techniques like Tensor Splitting and Bias Correction often have to be added. To take the stress out of edge AI deployments, Latent AI created LEIP Recipes: pre-configured assets combined with a set of instructions to follow to arrive at an optimized model. Each LEIP Recipe tackles a type of problem, such as object detection or classification, and is configured for a specific model and hardware target. It comes with all the model optimization settings pre-configured, including quantization, structured pruning, and throttling, and lets ML engineers leverage models pre-optimized for low power, inference speed, size, and memory.
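To make that configuration surface concrete, here is a hypothetical sketch of the kinds of settings a recipe pins down. The structure and field names are illustrative inventions, not LEIP's actual recipe schema:

```python
# Hypothetical recipe structure; every field name here is illustrative,
# not the actual LEIP Recipe format.
recipe = {
    "task": "object-detection",
    "model": "yolov5s",                # example model family
    "target": {"device": "jetson-nano", "os": "linux-aarch64"},
    "quantization": {
        "scheme": "asymmetric",        # vs. "symmetric"
        "granularity": "per-channel",  # vs. "per-tensor"
        "dtype": "int8",
    },
    "data_layout": "NHWC",             # channel-last; vs. channel-first "NCHW"
    "compiler": {
        "graph_partitioning": True,
        "operator_fusion": True,
        "math_kernels": "auto",
    },
    "extras": ["tensor-splitting", "bias-correction"],
}
```

The point of a recipe is that every one of these knobs arrives pre-set and pre-qualified for the named model and hardware target.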

It gives everybody from developers to IT the same tool for moving neural networks from development to device simply, reliably, and securely, with a dedicated and principled edge MLOps workflow. LEIP is a Software Development Kit (SDK) that can optimize and secure neural network runtimes for specific hardware targets. It creates ultra-efficient and portable models optimized for compute, energy, and memory without requiring any changes to existing AI/ML infrastructure and frameworks. Customizable templates called Recipes are pre-qualified for a wide range of hardware targets and enable rapid prototyping of new models. LEIP compresses conventional AI models below 8-bit integer precision without losing accuracy, creating up to a 10x reduction in size and a 3x improvement in inference speed. With Latent AI's Adaptive AI technology, the model can self-regulate its computational needs, only firing the parts of the neural network necessary to get the job done.
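"Only firing the parts of the neural network necessary" belongs to the family of conditional computation. As a generic illustration of that idea (not Latent AI's implementation), here is a minimal early-exit sketch in PyTorch, where an input that a cheap head classifies confidently skips the expensive tail of the network; the confidence threshold is an assumed parameter:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy conditional-computation model: confident early predictions
    skip the expensive tail of the network."""
    def __init__(self, threshold: float = 0.9):
        super().__init__()
        self.stem = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.early_head = nn.Linear(32, 10)           # cheap exit
        self.tail = nn.Sequential(nn.Linear(32, 32), nn.ReLU())
        self.final_head = nn.Linear(32, 10)           # full-compute exit
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Batch of one for simplicity.
        h = self.stem(x)
        early = self.early_head(h)
        # If the cheap head is already confident, stop here.
        if early.softmax(-1).max() >= self.threshold:
            return early
        return self.final_head(self.tail(h))

net = EarlyExitNet()
print(net(torch.randn(1, 64)).shape)  # torch.Size([1, 10])
```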

With LEIP, ML engineers develop optimized neural network runtimes for heterogeneous low-power hardware (CPU/GPU/DSP). The Latent AI Runtime Engine (LRE) offers a modular micro-service software stack for secured inference processing, with support for CI/CD and asset tracking across the entire model life cycle. Latent AI takes the hard work out of design and deployment by exploring thousands of candidate recipes that are pre-configured for your hardware. With hundreds of possible target models and hardware combinations already in place, LEIP Recipes make it easy to find the best optimized configuration that meets your design requirements.
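Conceptually, that recipe exploration is a constrained search over candidate configurations. A minimal sketch of the idea, with made-up candidates and metrics rather than LEIP's actual catalog:

```python
# Toy recipe search: pick the fastest candidate that fits the deployment
# budget. Candidate names and metrics are made up for illustration.
candidates = [
    {"name": "int8-per-channel", "latency_ms": 12.0, "size_mb": 6.1, "accuracy": 0.91},
    {"name": "int8-per-tensor",  "latency_ms": 10.5, "size_mb": 5.9, "accuracy": 0.88},
    {"name": "fp16-fused",       "latency_ms": 18.0, "size_mb": 12.4, "accuracy": 0.93},
]

def best_recipe(cands, max_size_mb, min_accuracy):
    """Fastest candidate satisfying the size and accuracy constraints."""
    feasible = [c for c in cands
                if c["size_mb"] <= max_size_mb and c["accuracy"] >= min_accuracy]
    return min(feasible, key=lambda c: c["latency_ms"]) if feasible else None

print(best_recipe(candidates, max_size_mb=8.0, min_accuracy=0.90))
# -> the int8-per-channel candidate: the only one meeting both constraints
```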

Jags Kandasamy is a Co-Founder and the CEO of Latent AI, Inc.

"Our mission is to enable the vast potential of AI that is efficient, practical and useful. We reduce the time to market with a Robust, Repeatable, and Reproducible workflow for edge AI."
