
OPENEDGES Unveils ENLIGHT Pro: A High-Performance NPU IP Quadrupling its Previous Generation’s Performance

Seoul, South Korea, April 16th, 2024 --- OPENEDGES Technology, Inc. (OPENEDGES, KOSDAQ: 394280), a total memory subsystem IP provider, is thrilled to announce the launch of ENLIGHT Pro. This state-of-the-art inference neural processing unit (NPU) IP delivers four times the performance of its previous generation, ENLIGHT (also known as ENLIGHT Classic), making it an ideal solution for high-performance edge devices such as automotive systems and cameras. ENLIGHT Pro is meticulously engineered for flexibility, scalability, and configurability, improving overall efficiency within a compact footprint.


ENLIGHT Pro supports transformer models, a key requirement in modern artificial intelligence (AI) applications, particularly Large Language Models (LLMs). LLMs, trained on extensive datasets using deep learning techniques, are instrumental in tasks such as text recognition and generation. The automotive industry is expected to adopt LLMs to offer instant, personalized, and accurate responses to customers' inquiries.


Steven Kang, an NPU engineer at OPENEDGES Technology, focuses on his tasks in the office

ENLIGHT Pro sets itself apart by achieving 4,096 MACs/cycle for 8-bit integers, quadrupling the throughput of its predecessor, and operating at up to 1.0 GHz on a 14nm process node. It offers performance ranging from 8 TOPS (Tera Operations Per Second) to hundreds of TOPS, optimized for flexibility and scalability. ENLIGHT Pro supports tensor shape transformation operations, including slicing, splitting, and transposing, as well as a wide variety of data types --- 8-, 16-, and 32-bit integer and 16- and 32-bit floating point (FP) --- to ensure flexibility across computational tasks. The vector processor achieves 64 MACs/cycle for 16-bit floating point and includes a 32x2 KB vector register file (VRF). Additionally, single-core, dual-core, and quad-core configurations are available, with scalable task mappings such as multiple models, data parallelism, and tensor parallelism.
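As a rough sanity check, the headline 8 TOPS figure follows directly from the quoted MAC count and clock speed, assuming the conventional accounting of two operations (one multiply plus one accumulate) per MAC:

```python
# Back-of-envelope check of the single-core int8 throughput figure.
# Assumes the conventional 2 ops per MAC (multiply + accumulate).
macs_per_cycle = 4096      # int8 MACs per cycle, per core
clock_hz = 1.0e9           # up to 1.0 GHz on a 14nm node
ops_per_mac = 2            # one multiply + one add

tops = macs_per_cycle * ops_per_mac * clock_hz / 1e12
print(f"{tops:.1f} TOPS per core")  # → 8.2 TOPS per core
```

The "hundreds of TOPS" upper end then comes from multi-core configurations and the scalable task mappings described above.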


ENLIGHT Pro incorporates a RISC-V CPU with vector extension and custom instructions, including support for Softmax and local storage access, enhancing its overall flexibility. It comes with a software toolkit that supports widely used network formats such as ONNX (PyTorch), TFLite (TensorFlow), and CFG (Darknet). The ENLIGHT SDK streamlines the conversion of floating-point networks to integer networks and generates NPU commands and network parameters via its network compiler. Notably, ENLIGHT Pro had already secured a customer at launch.
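The internals of the ENLIGHT SDK are not public, so the snippet below is only a generic illustration of the float-to-integer conversion step such a toolchain performs --- symmetric per-tensor int8 quantization --- and not OPENEDGES' actual implementation:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Illustrative symmetric per-tensor quantization: float -> (int8, scale).

    This is a textbook scheme, not the ENLIGHT SDK's algorithm.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the quantized tensor."""
    return q.astype(np.float32) * scale

# Example: quantize a tiny weight tensor and check the round-trip error.
w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)          # close to w, within one quantization step
```

Running the full network in int8 rather than FP32 is what lets the NPU's integer MAC arrays do the heavy lifting while keeping memory traffic low.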


Renowned for its excellence in advanced memory subsystem IP solutions, including NoC, DDR controller, and DDR PHY, OPENEDGES has established a significant milestone in the global market, with its IP licensed to over 60 SoC products worldwide. In NPU design, achieving high throughput and power efficiency has always been paramount. However, the evolving landscape of NPU performance now identifies memory bandwidth as a crucial bottleneck, extending beyond raw AI processing capability. As datasets grow increasingly vast, the efficiency of processing large volumes of data critically depends on the available memory bandwidth. ENLIGHT Pro is highly optimized in conjunction with OPENEDGES' memory subsystem IPs, resulting in a tightly integrated solution that enables SoCs to achieve exceptionally high bandwidth efficiency.


Jake Choi, NPU Team Leader, who developed the ENLIGHT series of NPU IP

“OPENEDGES is grateful for the outstanding team behind the development of ENLIGHT Pro. Their dedication and commitment have accelerated its launch,” said Jake Choi, NPU team leader at OPENEDGES. “OPENEDGES is actively pursuing ISO 26262 Automotive Safety Integrity Level (ASIL) B and higher compliance levels with our memory subsystem IPs, building on our recent attainment of ISO 9001:2015 certification.”

