Inference Engine

The FWDNXT Inference Engine delivers the highest utilization of any machine-learning and deep-neural-network processor.

Direct deployment from your framework to your application

Our software takes trained neural network files from PyTorch, Caffe, and TensorFlow,
and compiles them directly for our accelerator, with no programming required.
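The flow above has three steps: load a trained model exported from a framework, compile it for the accelerator, and run inference. The sketch below is purely illustrative; none of these function names are the actual FWDNXT SDK API, and the "compilation" is a stand-in that only mirrors the shape of the workflow.

```python
# Illustrative sketch of the deploy flow: trained model -> compile -> run.
# All names here are hypothetical, not the real FWDNXT SDK.

def load_trained_model(path):
    """Stand-in for reading a trained network exported from PyTorch/Caffe/TF."""
    return {"layers": ["conv", "relu", "fc"], "weights": path}

def compile_for_accelerator(model):
    """Stand-in for the compiler: map each layer to an accelerator instruction."""
    return [f"EXEC_{layer.upper()}" for layer in model["layers"]]

def run_inference(program, inputs):
    """Stand-in for executing the compiled program on the Inference Engine."""
    return {"instructions": len(program), "batch": len(inputs)}

model = load_trained_model("model.onnx")
program = compile_for_accelerator(model)
result = run_inference(program, inputs=[0.1, 0.2, 0.3])
print(result)  # {'instructions': 3, 'batch': 3}
```

The point of the three-step split is the middle step: the user never writes accelerator code by hand, since the compiler produces it from the trained model file.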


From IoT to mobile, automotive, and servers, all the way to data centers


FWDNXT: complete deep learning solutions

FWDNXT Inference Engine product lineup:

Inference Engine

The Inference Engine scales from IoT and edge devices all the way to high-performance workstations and servers.

Optimized Compiler

The FWDNXT Inference Engine compiler can handle any neural network model. See our SDK brief and our recent paper on the compiler.

Contact us!

The FWDNXT Inference Engine and its software are available in FPGA devices, as an IP core, or as an SoC. Contact us for pricing!

Core team

These are the faces behind the FWDNXT magic:


Abhishek Chaurasia

Lead Machine Intelligence

Marko Vitez

Software Engineer

André Chang

Lead Compiler & Founder

Aliasger Zaidy

Lead Architect & Founder

Jim D. Johnston

General Counsel & Financial Advisor

Eugenio Culurciello

Team Leader & Founder

Milind Kulkarni

Advisor, Compilers


Our mission is to propel machine intelligence to the next level.

If you want your devices to be smarter, talk to us!