
Neural Processing Unit (NPU) IP



Discover a deep learning accelerator that delivers highly efficient inference computation and unmatched compute density.




Features a highly optimized network model compiler that reduces DRAM traffic from intermediate activation data through grouped layer partitioning and scheduling. ENLIGHT is easy to customize to different core sizes and performance points for customers' target market applications, and it achieves significant efficiencies in size, power, performance, and DRAM bandwidth based on the industry's first adoption of 4-/8-bit mixed quantization.


Performs the key operations of deep neural networks, such as convolution, pooling, and non-linear activation functions, for edge computing environments. This NPU IP surpasses alternative solutions, delivering unmatched compute density and energy efficiency with an excellent balance of power, performance, and area (PPA).

Hardware Key Advantages

Mixed Precision (4-/8-bit) Computation

  • Higher efficiency in PPA (power, performance, and area) and DRAM bandwidth

Deep Neural Networks (DNN)-optimized Vector Engine

  • Better adaptation to future DNN changes

Scale-out with Multi-core

  • Even higher performance by parallel processing of DNN layers

Modern DNN Algorithm Support

  • Depth-wise convolution, feature pyramid network (FPN), swish/mish activation, etc. 


Software Key Advantages

High-level Inter-layer Optimization

  • Grouped layer partitioning and scheduling to reduce DRAM traffic from intermediate activation data
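The DRAM-traffic saving from grouped layer partitioning can be illustrated with simple arithmetic. The layer sizes below are hypothetical, not ENLIGHT measurements; the sketch only shows why keeping intermediate activations on-chip pays off:

```python
# Hypothetical intermediate activation tensors (bytes) between four chained layers.
activation_bytes = [2_000_000, 1_500_000, 1_000_000]

# Layer-by-layer scheduling: each intermediate is written to DRAM by one layer
# and read back by the next, so it crosses the DRAM bus twice.
per_layer_traffic = sum(2 * a for a in activation_bytes)

# Grouped scheduling: the layers run as one group and intermediates stay in
# on-chip memory, so this traffic disappears entirely (the group's own input
# and output still use DRAM in both schemes).
grouped_traffic = 0

print(per_layer_traffic - grouped_traffic)  # bytes of DRAM traffic saved
```

The saving grows with the number and size of intermediate tensors, which is why inter-layer optimization is done at the compiler level rather than per layer.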


DNN-layers Parallelization

  • Efficiently utilize multi-core resources for higher performance

  • Optimize data movements among cores

Aggressive Quantization

  • Maximize use of 4-bit computation capability

ENLIGHT Toolkit Overview

NN Converter

  • Converts a network file into the internal network format (.enlight)

  • Supports ONNX (PyTorch), TF-Lite, and CFG (Darknet)


NN Quantizer

  • Generates a quantized network: floating point to 4-/8-bit integer

  • Supports per-layer quantization of activations and per-channel quantization of weights
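A minimal sketch of the two quantization granularities named above, using a simple symmetric scheme. The function names and scheme are illustrative assumptions, not the ENLIGHT Quantizer's actual algorithm:

```python
import numpy as np

def quantize_weights_per_channel(w, bits=8):
    # One scale per output channel (row), so small-valued channels keep precision.
    qmax = 2 ** (bits - 1) - 1                   # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(w).max(axis=1) / qmax
    scale = np.where(scale == 0.0, 1.0, scale)   # guard all-zero channels
    q = np.clip(np.round(w / scale[:, None]), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def quantize_activations_per_layer(x, bits=8):
    # A single scale for the whole activation tensor.
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.abs(x).max()) / qmax, 1e-12)
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

# Toy 2-channel weight matrix: the second channel is two orders of
# magnitude smaller than the first.
w = np.array([[0.5, -1.0],
              [0.02, 0.04]])
q, scale = quantize_weights_per_channel(w, bits=4)
```

With a single per-tensor scale, both weights in the second row would round to 0 at 4 bits; per-channel scales let that row use the full integer range, which is what makes aggressive 4-bit quantization workable.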


NN Simulator

  • Evaluates both the full-precision and the quantized network

  • Estimates accuracy loss due to quantization
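One common way a simulator can estimate quantization accuracy loss is to run the same input through the float path and a "fake-quantized" path (quantize, then immediately dequantize) and compare the outputs. The one-layer toy "network" below is an illustrative assumption, not an ENLIGHT model:

```python
import numpy as np

def fake_quantize(x, bits=8):
    # Symmetric quantize-then-dequantize: keeps float type but adds
    # the rounding error that integer inference would introduce.
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.abs(x).max()) / qmax, 1e-12)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 16))   # toy layer weights
x = rng.standard_normal(16)         # toy input activation

y_float = w @ x
y_quant = fake_quantize(w, bits=8) @ fake_quantize(x, bits=8)

# Relative output error as a cheap proxy for end-to-end accuracy loss.
rel_err = np.linalg.norm(y_quant - y_float) / np.linalg.norm(y_float)
```

Running the same comparison with `bits=4` on layers the quantizer marked as 4-bit shows whether the mixed-precision assignment stays within the accuracy budget.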

NN Compiler

  • Generates NPU control code for the target architecture and network


ENLIGHT Toolkit Applications

  • Person, Vehicle, Bike, and Traffic Sign Detection

  • Parking Lot Vehicle Location Detection & Recognition

  • License Plate Detection & Recognition

  • Detection, Tracking, and Action Recognition for Surveillance



ENLIGHT is available to all eligible companies and is delivered with the following items:

  • RTL design for synthesis

  • User guide

  • Integration guide
