Demystifying Custom Extensions in RISC‑V SoC Design
Talk of configuring a base processor or adding custom extensions to resolve hardware‑software design tradeoffs in a system‑on‑chip (SoC) has long been a cornerstone of the RISC‑V community’s value proposition. Recent articles—such as A guide to accelerating applications with just‑right RISC‑V custom instructions—highlight the tangible benefits of extending the open‑source ISA.
Today’s AI and machine‑learning SoCs are constrained by power, performance and die area, making ISA extensions an often overlooked but powerful lever for meeting these demands.
A recent Quantum Leap Solutions webinar, titled RISC‑V Flexibility – The Power of Custom Extensions, aimed to demystify how the RISC‑V ISA can be tailored for modern SoC design. The session attracted more than 100 attendees and featured a panel of industry experts.
The discussion focused on how a modified ISA can accelerate emerging applications and how designers evaluate extensions for the RISC‑V architecture.
Moderated by Mike Ingster, founder and president of Quantum Leap Solutions, the panelists were:
- John Min, Director of FAE at Andes Technology
- Larry Lapides, Vice President of Sales at Imperas Software
- Taek Nam, Senior Application Engineer at Mentor
Mike Ingster: Why is RISC‑V a better option for AI and ML applications than other CPU architectures?
John Min: RISC‑V was conceived as AI became mainstream, and its vector instruction set was designed with AI and ML workloads in mind.
Larry Lapides: In an AI system with a multiprocessor array of heterogeneous cores, an architecture we’ve seen implemented on multiple occasions, a designer can combine scalar and vector cores. RISC‑V lets you strip unnecessary features while keeping vector instructions, delivering superior price‑performance‑area (PPA).
Ingster: Can RISC‑V cores with vector extensions also be extended with custom instructions?
Min: Yes. A typical example is a vector core that incorporates custom extensions to add extra buffers, preventing stalls due to data starvation.
Lapides: Both Imperas and Andes have customers who have implemented such custom functions to enhance performance.
Ingster: In a multiprocessor array, do all CPUs need the same configuration, or can they have different custom instructions?
Lapides: Heterogeneous processors are common. Custom instructions can vary across cores, and even the base instruction set can differ—one core may support vector instructions while another remains scalar, allowing interleaved execution.
Min: In a multiprocessor array, both the CPUs and their simulators can be heterogeneous, enabling efficient multicore simulation.
Ingster: Do you see a server‑class RISC‑V processor on the market?
Min: The trajectory is clear: higher frequency, wider pipelines, out‑of‑order execution, and massively parallel multicores. These elements are already appearing in next‑generation RISC‑V cores for AI enterprise workloads.
Lapides: Historically, RISC‑V started with 32‑ and 64‑bit proof‑of‑concept SoCs. About two years ago we saw a surge of security‑focused and IoT SoCs. Today we’re seeing application‑class processors: 64‑bit RISC‑V cores running full operating systems, with early out‑of‑order pipelines. The first server‑class chips are expected within the next two to three years.
Ingster: Can custom extensions provide device security and lifecycle management?
Min: Absolutely. A custom instruction can perform runtime checks to enforce software compatibility, or a dedicated memory port can isolate critical data, enhancing security and lifecycle control.
Taek Nam: Designers can also leverage Tessent embedded analytics IP to embed security and lifecycle management directly into SoCs with custom extensions.
Ingster: Why is the RISC‑V approach to custom instructions different from that of configurable‑processor vendors such as ARC or Tensilica?
Min: RISC‑V’s formal extension mechanism allows custom instructions to be formally incorporated into the standard, providing a clear path to broader adoption.
Ingster: How can a developer share custom instructions with customers or partners?
Min: Three options exist: keep it proprietary, submit it to RISC‑V International for inclusion (e.g., Andes’ DSP extensions that became P‑extensions), or share the intrinsics and header files while protecting the implementation under an NDA.
Lapides: A custom extension model can be offered as open‑source or binary, or as an instruction‑accurate processor model, facilitating early pre‑silicon software development.
Ingster: Will custom extensions lead to compatibility issues within the RISC‑V Community?
Min: Proprietary extensions remain isolated; if an extension gains widespread use, it can be proposed for inclusion in the base ISA, preventing fragmentation.
Ingster: Is the current version of the vector extension supported by compilers?
Min: The RISC‑V vector extension, version 0.8, is stable and supported by GCC and LLVM. Upstream compiler support will be finalized once the specification itself is ratified.
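To illustrate what that compiler support looks like in practice, ordinary scalar loops like the one below can be auto‑vectorized into RVV code when GCC or LLVM targets a vector‑enabled core (the `-march=rv64gcv` flag and function name here are illustrative):

```c
#include <stddef.h>

/* Scalar saxpy (y = a*x + y). Built with a vector-enabled target,
   e.g. gcc -O3 -march=rv64gcv, GCC and LLVM can strip-mine this
   loop into RVV vector instructions with no source changes. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The same source compiles unchanged for a scalar core, which is part of the appeal: the vector extension is an optimization target, not a separate programming model.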
Ingster: How are custom extensions supported in the software tool chain?
Lapides: Designers embed extensions into their RISC‑V models, generating instruction‑set simulators for software development. GCC and LLVM can be extended to support new instructions, and designers may provide macro implementations.
Ingster: How is the compiler made aware of a newly created custom instruction?
Lapides: The software team must implement the instruction in the compiler’s backend.
Min: Alternatively, treat the instruction like a library: provide a header file and an optional proprietary implementation, minimizing development effort while still gaining performance.
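A minimal sketch of that library‑style packaging, assuming a hypothetical instruction on the RISC‑V custom‑0 opcode (the encoding, intrinsic name, and fallback are illustrative, not any vendor’s actual extension):

```c
/* custom_mul.h — hypothetical vendor header exposing a custom
   multiply instruction as an intrinsic. The RTL implementation can
   stay under NDA; partners only need this header. */
#ifndef CUSTOM_MUL_H
#define CUSTOM_MUL_H

#include <stdint.h>

static inline uint32_t custom_mul(uint32_t a, uint32_t b)
{
#if defined(__riscv)
    uint32_t r;
    /* .insn r opcode, funct3, funct7, rd, rs1, rs2 — emits the raw
       encoding on the custom-0 major opcode (0x0B) without any
       compiler back-end changes. Encoding is illustrative. */
    __asm__ (".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
             : "=r"(r) : "r"(a), "r"(b));
    return r;
#else
    /* Portable fallback so partners can build and test software
       without the custom core. */
    return a * b;
#endif
}

#endif /* CUSTOM_MUL_H */
```

Because the intrinsic is emitted via the assembler’s `.insn` directive, no compiler back‑end work is needed; a full back‑end port only becomes worthwhile once the instruction is heavily used.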
Ingster: What OSs are supported on RISC‑V, and is there support for Android?
Lapides: Linux, Zephyr RTOS, and other real‑time operating systems are primary targets. Android has not been officially verified on RISC‑V, but porting efforts are underway. The RISC‑V International website lists all supported OSes.
Ingster: How are custom extensions verified?
Lapides: Verification involves adding extensions to RTL and the processor model. Instruction‑set simulators enable early software validation.
Nam: A recent customer used our embedded analytics IP to trace and quantify performance gains from custom instructions, confirming the benefits through data‑driven validation.
Ingster: How much area is required to add the embedded analytics IP?
Nam: Typically around 1% of the die area, depending on configuration. Each design is evaluated to optimize area while delivering maximum value.
These highlights are drawn from the full panel discussion, hosted by Quantum Leap Solutions.