
CEVA Unveils NeuPro Family of AI Processors

CEVA, a licensor of signal processing platforms and artificial intelligence processors for devices, unveils NeuPro, an artificial intelligence (AI) processor family for deep learning inference.

NeuPro builds on CEVA’s position and experience in deep neural networks for computer vision applications. This new family of dedicated AI processors offers a step up in performance, ranging from 2 Tera Ops Per Second (TOPS) for the entry-level processor to 12.5 TOPS for the most advanced configuration.

The NeuPro processor line extends the use of AI beyond machine vision to new edge-based applications including natural language processing, real-time translation, authentication, workflow management and other learning-based applications that make devices smarter and reduce human involvement.

NeuPro is a specialized AI processor family for deep learning inference at the edge. NeuPro extends the use of AI beyond machine vision to new edge-based applications. Image courtesy of CEVA, Inc.

“It’s abundantly clear that AI applications are trending toward processing at the edge, rather than relying on services from the cloud,” Ilan Yona, vice president and general manager of the Vision Business Unit at CEVA, comments. “The computational power required, along with the low power constraints of edge processing, calls for specialized processors rather than using CPUs, GPUs or DSPs. We designed the NeuPro processors to reduce the high barriers-to-entry into the AI space in terms of both architecture and software.”

The NeuPro architecture is composed of a combination of hardware- and software-based engines coupled for a complete, scalable and expandable AI solution. Optimizations for power, performance and area (PPA) are achieved using a precise mix of hardware, software and configurable performance options for each application tier.

The NeuPro family comprises four AI processors offering different levels of parallel processing:

  • NP500 is the smallest processor, including 512 MAC units and targeting IoT, wearables and cameras.
  • NP1000 includes 1024 MAC units and targets mid-range smartphones, ADAS, industrial applications and AR/VR headsets.
  • NP2000 includes 2048 MAC units and targets high-end smartphones, surveillance, robots and drones.
  • NP4000 includes 4096 MAC units for high-performance edge processing in enterprise surveillance and autonomous driving.

Each processor consists of the NeuPro engine and the NeuPro VPU. The NeuPro engine includes hardwired implementations of neural network layers, including convolutional, fully connected, pooling and activation layers. The NeuPro VPU is a cost-efficient programmable vector DSP that handles the CDNN software and provides software-based support for new advances in AI workloads. NeuPro supports both 8- and 16-bit neural networks, and the MAC units achieve better than 90% utilization.
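The quoted TOPS figures follow from the MAC counts by simple arithmetic: each multiply-accumulate counts as two operations per clock cycle. A minimal sketch of that calculation, assuming an illustrative clock frequency (CEVA does not state the clock speeds behind its 2 to 12.5 TOPS figures, so the numbers below are approximate):

```python
def peak_tops(mac_units, clock_ghz=1.5, ops_per_mac=2):
    """Theoretical peak Tera Ops Per Second for a MAC array.

    Assumes each MAC counts as 2 ops (multiply + accumulate);
    clock_ghz is an illustrative assumption, not a CEVA-published spec.
    """
    return mac_units * ops_per_mac * clock_ghz / 1000.0

# MAC counts per the NeuPro family lineup above
for name, macs in [("NP500", 512), ("NP1000", 1024),
                   ("NP2000", 2048), ("NP4000", 4096)]:
    print(f"{name}: {peak_tops(macs):.2f} TOPS at an assumed 1.5 GHz")
```

At an assumed 1.5 GHz, the NP4000's 4096 MACs yield roughly 12.3 TOPS, in line with the 12.5 TOPS quoted for the top configuration; the entry-level figure implies a somewhat higher clock assumption.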

The NeuPro family, coupled with CDNN, CEVA’s neural network software framework, provides a deep learning solution for developers to generate and port their proprietary neural networks to the processor.

In conjunction with the NeuPro processor line, CEVA will also offer the NeuPro hardware engine as a Convolutional Neural Network (CNN) accelerator.

NeuPro will be available for licensing to select customers in the second quarter of 2018 and for general licensing in the third quarter of 2018.

For more information, visit CEVA.

Sources: Press materials received from the company.


About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering. Press releases can be sent to them via [email protected].
