Monday, June 16, 2025

AI Co-Processor For Next-Gen Edge Intelligence

The AI co-processor boosts NPU performance with over 30% area savings, 20% lower power use, and support for evolving AI workloads across devices.


The Cadence Tensilica NeuroEdge 130 AI Co-Processor is a new class of processor designed to complement any NPU and enable end-to-end execution of the latest agentic and physical AI networks.

Cadence has introduced the Cadence Tensilica NeuroEdge 130 AI Co-Processor (AICP), a processor designed to work alongside any neural processing unit (NPU). It supports end-to-end processing of modern agentic and physical AI models across automotive, consumer, industrial, and mobile system-on-chips (SoCs). Built on the well-established Tensilica Vision DSP architecture, the NeuroEdge 130 AICP offers over 30% area savings and more than 20% reduction in dynamic power and energy, all without reducing performance. It uses the same software tools, AI compilers, and frameworks as its predecessors, helping speed up development. Several customers are already testing the new processor, and interest is growing.

The Tensilica NeuroEdge 130 AICP features an extensible design that integrates smoothly with in-house NPUs, Cadence Neo NPUs, and third-party NPU IP, handling offloaded tasks with higher performance and efficiency than earlier application-specific solutions. Building on the power, performance, and area (PPA) strengths of Tensilica DSPs, it delivers the area and power savings noted above while matching Tensilica Vision DSP performance on AI tasks. Key features include:

  • VLIW-based SIMD architecture with configurable options for high performance and low power
  • Acts as a control processor, sending instructions and commands to the NPU
  • Optimized instruction set runs non-NPU-friendly tasks like ReLU, sigmoid, and tanh
  • Offers programmability and flexibility to support end-to-end execution of current and future AI workloads
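For context, the activation functions named above (ReLU, sigmoid, tanh) are element-wise, non-MAC operations, which is why they map poorly onto an NPU's multiply-accumulate arrays. A minimal Python sketch of their standard mathematical definitions, purely illustrative and not Cadence's implementation:

```python
import math

# Reference definitions of the element-wise activation functions the
# article cites as co-processor offload candidates. Hypothetical
# illustration only; a DSP would run vectorized fixed- or
# floating-point versions of these.

def relu(x: float) -> float:
    """Rectified linear unit: clamps negative inputs to zero."""
    return max(0.0, x)

def sigmoid(x: float) -> float:
    """Logistic sigmoid: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    """Hyperbolic tangent: squashes any input into the range (-1, 1)."""
    return math.tanh(x)
```

Unlike convolution or matrix-multiply layers, none of these involve a multiply-accumulate across many operands, so a programmable SIMD co-processor can execute them without stalling the NPU pipeline.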

“With the rapid proliferation of AI processing in physical AI applications such as autonomous vehicles, robotics, drones, industrial automation and healthcare, NPUs are assuming a more critical role,” said Karl Freund, founder and principal analyst of Cambrian AI Research. “Today, NPUs handle the bulk of the computationally intensive AI/ML workloads, but a large number of non-MAC layers include pre- and post-processing tasks that are better offloaded to specialized processors. However, current CPU, GPU and DSP solutions involve tradeoffs, and the industry needs a low-power, high-performance solution that is optimized for co-processing and allows future proofing for rapidly evolving AI processing needs.”

Nidhi Agarwal
Nidhi Agarwal is a Senior Technology Journalist at EFY with a deep interest in embedded systems, development boards and IoT cloud solutions.
