The Silicon Review
“ORBAI is jumping over the current generation of deep learning technologies and starting with a more capable and flexible architecture and tools that enable advanced artificial intelligence and AGI.”
Founded in 2018, Silicon Valley-based ORBAI is working on a new AI architecture that progresses and evolves over time. In the software platform’s early stages, company founder and CEO Brent Oster says we can expect to see much better vision and speech systems than those of present-day AI products and applications. From there, the software will learn to talk and interact, becoming a human mimic able to perform real jobs. The software’s final stage will extend to artificial general intelligence (AGI), which includes sensory capabilities, cognition, vocational skills and the ability to communicate on par with humans. In the next 5-10 years, Brent and his team of developers believe AIs built on this foundation will evolve to the point where they can understand or learn almost any intellectual task a human being can, and do it better.
Artificial intelligence (AI) using deep learning (DL) has very significant limitations and does not produce the AI we see in movies and on TV: holograms, robots and other interfaces that can talk to us conversationally and interact like we do. The virtual assistants from Apple, Google, Amazon and others do a fine job of ingesting a command, translating speech to text, looking the question up on the internet and then translating text back to speech with answers. But their current hardware and software will never let them go beyond what they were initially designed to do, such as speaking conversationally or learning as they interact with you.
“Today’s AI, ML and deep learning (DL) neural networks are a faint shadow of the biological neural networks in our brains and are more like fitting a statistical model to data than modelling any true cognition or learning,” said Brent.
Once a deep neural network is fitted to a dataset, it can do some very useful inference within the domain of that dataset to produce useful outputs, but it cannot go very far outside that domain or learn while deployed.
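To make that limitation concrete, here is a minimal, hypothetical sketch (ordinary least squares in plain Python, not any vendor’s code): a straight line fitted to y = x² on the training domain [0, 1] stays roughly right inside that domain, but its predictions are wildly wrong far outside it.

```python
# Illustrative only: a simple "statistical model" fitted to one domain
# interpolates there but extrapolates badly outside it.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

xs = [i / 10 for i in range(11)]   # training domain: [0, 1]
ys = [x * x for x in xs]           # the true function is y = x^2
a, b = fit_line(xs, ys)

inside_err = abs((a * 0.5 + b) - 0.25)     # query inside the domain
outside_err = abs((a * 10 + b) - 100.0)    # query far outside it
```

Inside the training domain the line is off by about 0.1; at x = 10 it is off by roughly 90, a failure mode no amount of fitting to the original dataset can repair.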
“Each deep neural network (DNN) implementation is very narrow and static. Even AIs using multiple DNNs in combination are full of gaps in capability and what they can process. They will never result in an artificial intelligence anything like us or able to work and interact with us effectively,” Brent continued.
Building more advanced AI that can learn and evolve to AGI would be impossible starting with today’s deep learning technology, so ORBAI is developing a new foundation using Spiking Neural Networks (SNNs). SNNs are artificial neural networks that more closely mimic natural neural networks such as the human brain. ORBAI’s approach adds feedback loops to train the sensory cortices, plus an artificial genome that, for the first time, lets genetic algorithms evolve large SNNs. This evolution is necessary to produce anything more complex than science demos and test networks; evolution is the process that formed our brains over the past billion years and is the most powerful force in nature.
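As a rough, self-contained illustration of the two ingredients named here (spiking neurons and evolution over a parameter genome), the sketch below simulates a single leaky integrate-and-fire (LIF) neuron and runs a toy genetic algorithm that mutates a (tau, gain) “genome” toward a target firing rate. All function names, parameters and values are illustrative assumptions, not ORBAI’s implementation.

```python
import random

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron; returns the time steps of spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_rest         # ...and reset the membrane potential
    return spikes

def firing_rate(genome, steps=200):
    """Fitness helper: spikes per step for a (tau, gain) 'genome'."""
    tau, gain = genome
    return len(simulate_lif([gain] * steps, tau=tau)) / steps

def evolve(target_rate=0.05, pop_size=20, generations=30, seed=0):
    """Toy genetic algorithm: keep the best half, mutate it, repeat."""
    rng = random.Random(seed)
    pop = [(rng.uniform(5.0, 50.0), rng.uniform(0.01, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: abs(firing_rate(g) - target_rate))
        parents = pop[:pop_size // 2]          # elitist selection
        children = [(max(1.0, tau + rng.gauss(0, 2.0)),
                     max(0.001, gain + rng.gauss(0, 0.02)))
                    for tau, gain in parents]  # Gaussian mutation
        pop = parents + children
    pop.sort(key=lambda g: abs(firing_rate(g) - target_rate))
    return pop[0]

best = evolve()
```

Real SNN research evolves network topology and connection weights, not just two scalars, but the loop is the same in spirit: encode parameters as a genome, score each candidate, keep and mutate the fittest.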
“The capabilities of this AI include being able to do tasks we would expect of a real AI, such as taking in inputs, performing real thinking and planning, and combining them with internal memory and human-like cognition to produce useful outputs for a wide variety of situations. It will also be able to learn from its interactions with its environment like we do: by observation, experience and practice,” Brent added.
People might say that setting out to build an AGI is too ambitious a goal for an AI startup, but for an AI company, not having that goal guarantees obsolescence in 8 to 10 years if and when someone else does build one. Brent sums it up: “It is really the only goal that makes sense in AI now. It is an arms race, winner takes all, and ORBAI plans to be the winner by jumping over the current generation of deep learning technologies and starting with a more capable and flexible architecture and tools that can get there.”
To date, ORBAI has filed three provisional patents: one for controlling 3D characters; one for methods for developing advanced artificial intelligence using synthetic neural architectures; and one for an apparatus and method for utilizing a parameter genome characterizing neural network connections as a building block to construct a neural network with feedforward and feedback paths.
Using the technology the company has developed, ORBAI engineers are currently working on three real-world AGI proof-of-concept products: one for robotics, one for healthcare organizations and one for the legal community.
For healthcare organizations, ORBAI is developing a medical AI featuring a 3D-generated doctor named Dr. Ada who can interview patients, take vitals, make a diagnosis, read lab reports, and more. While the ability to perform at the level of an actual doctor is still about four years out, a mobile version that simply interviews patients, takes vitals and makes diagnoses will arrive in early 2020.
For the legal community, the company is developing an AI application for mobile devices that will be able to assist with document preparation and court filings.
“You just talk to Justine Falcon – legal AI – on your mobile device or computer and she interviews you, asks the right questions and gets all the information needed to compose a legal document and file it for you. This will be ready in early 2020,” said Brent.
By 2021, Justine Falcon is expected to go to work for lawyers, able to research and encode case files for easy search and to help with litigation, including tracking case files, predicting what opposing counsel will do (based on ingested case files), helping plan litigation strategies, and writing and filing legal documents.
ORBAI is also developing an adaptive autopilot AI that could be deployed in unmanned aerial systems (UAS) and autonomous land vehicles, with networked, adaptive command AI learning to control autonomous fleets for applications in robotics, drones and self-driving cars.
What these products have in common is that they require an artificial brain that can learn to process inputs from vision, hearing and sensors, understand human language, and learn while deployed and interacting with people, spreading their knowledge to each other. They require cognition: the ability to think, make life-or-death decisions, and learn to do it better each time.
“Lastly, all of ORBAI’s products will have the ability to control a body, whether it’s the body of a 3D character on your phone, the body of a robot or vehicle on land or a drone in the air,” Brent concluded.
ORBAI will also license its technology to third parties developing vehicles, robots, drones, toys, consumer electronics and homes, allowing them, as Brent suggests: “To create products that truly perceive the world around them and truly become artificially intelligent.”
ORBAI aspires to create a very bright future and has demonstrated its technology at a number of industry trade shows, including Augmented World Expo, TechCrunch Disrupt, NVIDIA’s GPU Technology Conference (GTC) and Singularity University’s Global Summit. Because this new technology is so much more powerful and flexible and can reach so many applications and markets, Brent believes it can easily enable products and services worth over a trillion dollars by 2025.
“All of ORBAI’s products will have the ability to control a body, whether it’s the body of a 3D character on your phone, the body of a robot or vehicle on land or a drone in the air.”