Efficient Stimuli Generation using Reinforcement Learning in Design Verification
Authors: Deepak Narayan Gadde, Thomas Nalapat, Aman Kumar, Djones Lettnin, Wolfgang Kunz, Sebastian Simon
Year: 2024
Source:
https://arxiv.org/abs/2405.19815
TLDR:
The paper proposes using Reinforcement Learning (RL) to generate efficient stimuli that maximize code coverage of the Design Under Verification (DUV). A metamodeling framework automatically creates a generic SystemVerilog testbench for a given DUV, together with a configurable RL environment that supports multiple RL policies and maps RL actions onto the DUV's inputs. The authors compare their approach with related work, emphasizing that the framework is design-agnostic and configurable. Results show that RL-guided verification reaches the threshold coverage with fewer stimuli than conventional random simulation, with the PPO-based RL agent usually outperforming the DQN and A2C agents. The paper also describes how the RL environment is integrated into the simulation environment, maps the RL core components onto the verification flow, and analyzes several RL models and reward schemes, highlighting the potential of automation methodologies that combine metamodeling and RL for design verification.
Abstract
The increasing design complexity of System-on-Chips (SoCs) has led to significant verification challenges, particularly in meeting coverage targets in a timely manner. At present, coverage closure is heavily dependent on constrained random and coverage driven verification methodologies, where the randomized stimuli are bounded to verify certain scenarios and to reach coverage goals. This process is exhaustive and consumes a lot of project time. In this paper, a novel methodology is proposed to generate efficient stimuli with the help of Reinforcement Learning (RL) to reach the maximum code coverage of the Design Under Verification (DUV). Additionally, an automated framework is created using metamodeling to generate a SystemVerilog testbench and an RL environment for any given design. The proposed approach is applied to various designs, and the produced results prove that the RL agent provides effective stimuli to achieve code coverage faster in comparison with baseline random simulations. Furthermore, various RL agents and reward schemes are analyzed in our work.
Method
The methodology couples a metamodeling framework with an RL loop. The metamodeling framework generates a generic SystemVerilog testbench for any given DUV, while a configurable RL environment exposes the DUV's inputs as RL actions and its coverage state to the agent. The framework is design-agnostic and configurable in terms of learning policy, reward scheme, and target coverage type, and the paper evaluates several RL agents (DQN, A2C, PPO) and reward schemes to streamline the verification process. Results show that RL-guided verification needs fewer stimuli than conventional random simulation to reach the threshold coverage, with the PPO-based agent usually performing best.
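The RL loop described above can be sketched as a Gym-style environment in which each action is a stimulus for the DUV and the reward is the number of newly hit coverage bins. This is a minimal toy illustration, not the paper's implementation: the "simulator" here is a stand-in (`action % NUM_BINS` hits one bin), whereas the actual framework drives a SystemVerilog testbench in a real simulator; all names and sizes below are hypothetical.

```python
class ToyCoverageEnv:
    """Gym-style environment sketch for coverage-driven stimuli generation.

    Observation: the current coverage bitmap of the toy DUV model.
    Action:      an integer stimulus value.
    Reward:      count of coverage bins newly hit by this stimulus
                 (an incremental-coverage reward scheme).
    """

    NUM_BINS = 16  # hypothetical number of code-coverage bins

    def __init__(self):
        self.hit = [False] * self.NUM_BINS

    def reset(self):
        # Start a fresh episode with no coverage collected.
        self.hit = [False] * self.NUM_BINS
        return tuple(self.hit)

    def step(self, action):
        # Toy DUV stand-in: each stimulus value exercises exactly one bin.
        bin_idx = action % self.NUM_BINS
        reward = 0 if self.hit[bin_idx] else 1
        self.hit[bin_idx] = True
        done = all(self.hit)  # episode ends at 100% coverage
        return tuple(self.hit), reward, done, {}


env = ToyCoverageEnv()
env.reset()
_, r_first, _, _ = env.step(3)   # first hit of bin 3 -> reward 1
_, r_repeat, _, _ = env.step(3)  # repeated stimulus  -> reward 0
```

Because repeated stimuli earn no reward, an agent trained against this interface is pushed toward inputs that exercise still-uncovered code, which is the core idea of the coverage-based reward schemes analyzed in the paper.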
Main Finding
The main finding is that a design-agnostic, RL-driven framework can streamline verification: it automates the setup of the RL environment and generates a testbench customized to the design being verified. The results confirm that RL-guided verification reaches the threshold coverage with fewer stimuli than conventional random simulation, with the PPO-based RL agent usually outperforming the DQN and A2C agents. This demonstrates the potential of automation methodologies that combine metamodeling and RL for design verification.
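The headline result, fewer stimuli than random simulation to reach a coverage threshold, can be illustrated with a toy coupon-collector-style model. This is a hedged sketch, not the paper's experiment: the bin count, the 90% threshold, and the idealized "guided" policy (which stands in for what a trained RL agent converges toward) are all assumptions made for illustration.

```python
import random

NUM_BINS = 32    # hypothetical coverage bins in a toy DUV model
THRESHOLD = 0.9  # target: 90% of bins hit


def stimuli_to_threshold(pick_stimulus, seed=0):
    """Count how many stimuli a policy needs to reach THRESHOLD coverage.

    pick_stimulus(hit, rng) returns the next stimulus value given the
    current coverage bitmap.
    """
    rng = random.Random(seed)
    hit = [False] * NUM_BINS
    count = 0
    while sum(hit) < THRESHOLD * NUM_BINS:
        stim = pick_stimulus(hit, rng)
        hit[stim % NUM_BINS] = True  # toy simulator: one stimulus, one bin
        count += 1
    return count


def random_policy(hit, rng):
    # Baseline: unconstrained random stimuli, oblivious to coverage.
    return rng.randrange(NUM_BINS)


def guided_policy(hit, rng):
    # Idealized coverage-guided policy: always target an unhit bin.
    unhit = [i for i, h in enumerate(hit) if not h]
    return rng.choice(unhit)


rand_n = stimuli_to_threshold(random_policy)
guided_n = stimuli_to_threshold(guided_policy)
# guided_n hits a new bin every step; rand_n wastes stimuli on repeats.
```

The guided policy needs exactly one stimulus per new bin, while the random baseline increasingly redraws already-covered bins as coverage rises, which is the same qualitative gap the paper reports between RL-guided and random simulation.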
Conclusion
The paper concludes with a design-agnostic framework that leverages RL for design verification, automating the setup of the RL environment and the generation of a testbench customized to the design being verified. Across the evaluated designs, RL-guided verification reached the threshold coverage with fewer stimuli than conventional random simulation, and the PPO-based agent generally outperformed DQN and A2C.
Keywords
Reinforcement Learning, Design Verification, Coverage, Metamodeling, SystemVerilog, Simulation, Stimuli Generation, Code Coverage, RL Agents, Reward Schemes, Testbench Creation, RL Environment, Constrained Random Verification, Coverage Closure, Semiconductor Technology, Verification Methodologies, OpenAI Gym, RL Core Components, Test Selection, Unsupervised Learning, Functional Coverage Improvement, Simulation Speedup, Deep Neural Network, Decision Tree, Actor-Critic Model, FPGA Verification, ASIC Verification, SoC Verification, Integrated Circuit Design, Machine Learning, Artificial Neural Network, Design Automation, Verification Closure, Functional Verification, RTL Design, Design IP, FSM Coverage, JTAG TAP, ALU, CORDIC, RISC-V, FIR, FIFO, XML, VHDL, Verilog, Python, Client-Server Application, Direct Programming Interface (DPI).