
Microsoft and Facebook have announced Open Neural Network Exchange


Microsoft and Facebook have announced a new open source project aimed at creating a shared representation for neural networks across different programming frameworks. Called the Open Neural Network Exchange (ONNX), the project will make it possible to share models across Microsoft's Cognitive Toolkit, PyTorch, and Caffe2.

Microsoft, which last week announced a partnership with Amazon to let their respective AI-powered virtual assistants talk to each other, said this week it has joined forces with Facebook to launch the open source AI resource repository.

The system works by tracing how a neural network generated using one of these frameworks executes at runtime and then using that information to create a generic computation graph that can be moved around. That’s possible because each of those frameworks produces a very similar end result when it comes to computation, even though the higher level representation is different.
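To make the tracing idea concrete, here is a minimal, self-contained sketch (not ONNX's actual implementation) of trace-based export: the network runs once on proxy values that record every operation instead of computing it, and the recorded nodes form a framework-neutral computation graph. The `Tracer`, `Proxy`, and `tiny_net` names are illustrative, not part of any real API.

```python
class Node:
    """One recorded operation in the exported graph."""
    def __init__(self, op, inputs):
        self.op = op          # operator name, e.g. "Add" or "Mul"
        self.inputs = inputs  # upstream nodes or input names


class Tracer:
    """Collects nodes in the order the network executes them."""
    def __init__(self):
        self.graph = []

    def proxy(self, name):
        return Proxy(self, name)


class Proxy:
    """Stands in for a tensor; arithmetic records nodes rather than computing."""
    def __init__(self, tracer, node):
        self.tracer = tracer
        self.node = node

    def _record(self, op, other):
        node = Node(op, [self.node, getattr(other, "node", other)])
        self.tracer.graph.append(node)
        return Proxy(self.tracer, node)

    def __add__(self, other):
        return self._record("Add", other)

    def __mul__(self, other):
        return self._record("Mul", other)


def tiny_net(x, w, b):
    # What the framework would normally execute eagerly.
    return x * w + b


tracer = Tracer()
out = tiny_net(tracer.proxy("x"), tracer.proxy("w"), tracer.proxy("b"))
ops = [n.op for n in tracer.graph]
print(ops)  # ['Mul', 'Add']
```

Because frameworks with different high-level APIs bottom out in the same primitive operations at runtime, a trace like this captures a common representation that another framework can replay.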

Right now, the biggest issue with ONNX is that it isn’t compatible with some other popular machine learning frameworks, including TensorFlow, which originated at Google, and Apache MXNet, which is Amazon’s preferred machine learning framework.

"People experimenting with new models, and particularly those in research, want maximum flexibility and expressiveness in writing neural networks - ranging from dynamic neural networks to supporting gradients of gradients, while keeping a bread-and-butter ConvNet performance", developers at Facebook wrote.

"Researchers also want to iterate rapidly, which means that they need excellent tooling for interactive development and debugging. PyTorch has been built to push the limits of research frameworks, to unlock researchers from the constraints of a platform and allow them to express their ideas easier than before."