In-Depth Analysis of SoC Chip Design Verification

Source: Automotive Electronics and Software. Author: Zheng Wei.

Introduction

The functional safety of chips was once a very niche field, of interest to only a few chip and system developers in the automotive, industrial, aerospace, and similar markets. With the rise of new applications in the automotive industry over the past few years, this situation has changed dramatically. Many other industries are also seeing rapid growth in electronic content, for which functional safety is a major prerequisite. This article discusses SoC chip design verification, verification plans and strategies, and verification methods. It defines important terms used in functional simulation, functional coverage, code coverage, and design verification, and covers FPGA verification and its role in SoC verification.

01 The Importance of Verification

Because development and manufacturing costs are so high, first-time success is the primary requirement for SoC design. For decades, efforts have been made to improve the effectiveness and efficiency of SoC verification; the most telling metric of verification effectiveness is how often SoC designs pass field testing on first silicon. The growing complexity of SoC designs demands effective verification methods to improve these statistics. The need for effective ASIC/SoC verification became acute around the year 2000, prompting the development of various innovative verification methods, although there is still much room for improvement.

SoC verification is the process of confirming the functional correctness of an SoC design. A typical SoC design cycle (from specification to tape-out) runs from six months to three years, depending on the technology, the system complexity, and the availability of building blocks for the design. Manufacturing, packaging, ATE testing, and obtaining engineering samples for field validation (when chips are delivered to customers for product testing) usually take another six months or so. An SoC can therefore go to production only after its engineering samples have been validated in the predetermined product use-case scenarios; if that validation succeeds, the design is considered for mass production, and this constitutes design success. A failure at any step of the development cycle can have an outsized impact on schedule, sometimes forcing metal-layer respins or full design revisions. Another driver for first-time success is the manufacturing cost of nanometer-scale technology: the typical manufacturing cost of a 36 mm² chip in a 40 nm CMOS process is approximately $800,000 to $1,000,000. The high non-recurring engineering (NRE) and manufacturing costs incurred in developing engineering samples are amortized over mass production, so if multiple tape-outs are needed to reach working engineering samples, the NRE can make the product commercially unviable. Hence, achieving first-time success is absolutely essential in system-on-chip development.

The feasibility of SoC design verification depends on identifying a set of "most common use-case scenarios" during the pre-silicon phase. This is a complex and challenging task, as the number of possible use-case scenarios is effectively unbounded. Consider a smartphone SoC whose primary function is making phone calls: phones are also used for messaging, shopping, health tracking, banking, and infotainment, so the number of application scenarios to test in validating the SoC's assumptions is vast. During the development cycle, the cost of debugging an SoC issue grows roughly tenfold as the design moves from one phase to the next: verifying a function at the design (RTL) phase costs about one-tenth of verifying the same function on silicon, which in turn costs about one-tenth of finding the problem at the customer site. This is because designers have far better controllability, observability, and tool support during the design phase than at later stages of development. Therefore, in the pre-silicon phase, a set of key scenarios closely resembling real application use cases is identified and targeted to build confidence in first-time SoC success. An SoC is assembled by integrating different types of design modules or IP cores (soft and hard cores), which further complicates verification. The SoC design process also applies a series of transformations, from RTL to netlist to layout, which is finally converted to mask data. As the design passes through these transformations, it must be verified that the design intent is preserved at every level until completion.

In summary, the reasons why verification is so important for SoC design are as follows:

1. Manufacturing costs are extremely high, so first-time success is mandatory; multiple iterations may render the product commercially unviable.

2. As the design progresses through the development cycle, the cost of finding a bug grows roughly tenfold at each stage, so early verification raises confidence in getting the SoC right on the first attempt.

3. SoC design applies a series of EDA-tool transformations to the design database, and each transformation must be verified to have preserved the design intent.

02 Verification Plans and Strategies

For the first manufactured version of an SoC to work as expected, it is important to adopt many verification methods during the design phase. The techniques used include simulation-based verification, formal verification, timing verification, FPGA prototyping, and hardware emulation. Simulation was once the only technique used; however, as systems have grown increasingly complex, every available method must be applied to verify SoC designs.

It is difficult to define the conditions under which design verification is complete, because simulating every scenario of an SoC design is practically impossible. Consider a design containing a single flip-flop with two states: four test patterns suffice to test it. Now consider the ARM Cortex-M4 core, which in a 65 nm technology has roughly 65K gates with multiple inputs and outputs. Simplifying by assuming every gate has only two states, testing the core would require about 65,000 × 4 = 260,000 patterns. Simulating all of them (setting aside the problems of controlling and observing each gate from the primary I/Os, deriving a test pattern for each, and so on) is practically impossible across the different design stages.

At the system level, identifying all scenarios is equally challenging. This may be because the use-case scenarios themselves cannot be predicted and visualized, or because realizing them requires more models or modules in the environment: a complete software stack, a hardware platform to which the entire design database can be ported, or the compute infrastructure for simulation. Therefore, the achievable scenarios are defined as a verification test environment plus a set of test cases, which together form the scope of pre-silicon verification. This can be approached in several ways:

  • Top-down approach

  • Bottom-up approach

  • Platform-level verification

  • System-level or transaction-level verification (TLV)

Top-Down Approach

In this approach, verification starts at the highest level of the interface hierarchy and proceeds down through each lower level until the smallest leaf-level design elements are verified. Traditionally, this approach has served as the verification plan when the SoC design has only one or two levels of hierarchy.

Bottom-Up Approach

This is the most commonly used method in design verification. It starts by verifying the smaller design blocks, which is simple and practical: in block-level simulations, it is easier to trace signals back and forth through a small design, so errors are easier to find, debug, and fix. Once the individual modules are verified, they are integrated to form the chip's top-level module, which is then verified with a separate top-level test setup. For example, in an SoC consisting of a UART core, a USB core, and a protocol bus interface core, the cores are first verified individually and then together at the chip top level.

Platform-Level Verification

If the design is based on a standard, such as a USB device core, it is best verified on a standard platform populated with standard peer devices (such as a USB host). Similarly, an SPI core can be verified on a platform with an SPI master device. This also flushes out interoperability issues.

System-Level or Transaction-Level Verification

If the SoC is protocol-based, it is verified by monitoring its responses to transactions, using standard verification IP (VIP) cores to build the verification setup. For example, a Wi-Fi device core is verified in an environment with a WLAN access point by observing the transactions between the two; the WLAN access point core is a pre-verified standard reference VIP. This also demonstrates the interoperability of the core before manufacturing.

03 Verification Plans

A verification plan is a document that describes how the SoC design will be verified and the criteria for tape-out. It explains how the verification of each function of the SoC design is planned, lists the verification goals at the module level and at the hierarchical top level, and identifies the necessary tools, such as simulators, waveform viewers, and verification scripts. It explicitly states the coverage targets that define successful verification and serve as the completion criteria for design tape-out.

Different verification coverages related to SoC design include:

  • Functional Coverage
  • Code Coverage
  • Finite State Machine (FSM) Coverage

Functional coverage quantifies how many of the design's functions have been exercised by writing test cases and setting up simulations that produce the correct design response. Some tools can measure functional coverage from the test cases together with a list of functions (features). Since identifying the functions and entering them into these tools is a manual task, the feature list itself may be incomplete, which undermines the meaning of a high coverage number.

Another commonly used metric is "code coverage," which measures how many RTL statements in the design database are exercised by the simulated test cases. It also helps identify redundant code in the design database and aids code cleanup.

Tools used for code coverage can also report which states of the design's finite state machines the test cases cover. This is a very important measure, as it prompts adding test cases until all state transitions are covered. In some companies, these coverage metrics are used not only to assess verification status but also to evaluate the performance of verification engineers.

The verification plan document lists the criteria for verification completeness in terms of coverage. Any remaining coverage gap is closed with other verification techniques, such as FPGA-based verification, emulation techniques, and testing the SoC design on development boards. For example, if simulation achieves 98% functional coverage, the remaining 2% is reached by porting the design to an FPGA and testing the relevant functionality on the FPGA board, or by any other appropriate technique. These methods may require additional circuitry on the board and in the FPGA to adapt them for verifying SoC functionality, and may also require running software or interfacing systems.

The main design details included in the verification plan are as follows:

1. Definition of first-time success for SoC design

2. Key application scenarios for the SoC, and the requirements they impose on the test environment to be developed for SoC testing

3. Development plan for the functional verification environment and required resources

4. List of functional features to be verified at the module level and design level

5. Main verification strategies for block and top-level design

6. Testing platform modules at the RTL level of the design:

  • Bus Functional Model (BFM) and Bus Monitor

  • Signal Monitor

  • Verification Reference Model

7. FPGA-level verification details:

  • Requirements for SoC verification on FPGA boards

  • Additional modules required by the FPGA verification platform

  • Required software modules, and the software development and debugging platforms to be developed for them

8. Required verification tools and processes

9. Requirements for block-level simulation environments

10. Regression testing environment and regression testing plan

11. Clear criteria for determining verification completion, such as target coverage, number of regression test vectors, and expectations for gate-level simulation strategies

Design resources include verification engineers and their skills, hardware development boards, FPGA boards, software requirements, EDA tool environments (workstations and servers), simulators (and the number of licenses), and the design infrastructure for verification. Strategies for verifying VLSI SoCs vary with the complexity of the SoC design and its use cases.

Ideally, use-case scenarios are simulated or tested on an RTL test bench, an FPGA prototype, a development-board setup, or any combination of these; with these resources, the SoC design is verified to build high confidence in predicting its success. The verification strategy also defines the split between sub-module-level verification and top-level chip verification on FPGA boards.

04 Functional Verification

The goal of functional verification is to confirm the expected functionality of the SoC design in its functional scenarios and application scenarios. A use-case scenario can map to one or more functional test scenarios. For example, verifying the addition function of a module might involve three test cases: the first verifies the input operands, the second verifies the output result for those inputs, and the third checks the adder's carry operation. An SoC contains multiple blocks of different functionality that interconnect and/or share buses, with several blocks interacting on a bus or a block operating according to a standard protocol. Functional verification of the SoC therefore includes simulating (a) block-to-block interfaces, (b) bus contention, and (c) protocol compliance.
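As an illustration, here is a minimal SystemVerilog sketch of the three adder test cases described above. The adder module and its port names (a, b, sum, carry) are hypothetical, introduced only for this example.

`timescale 1ns/1ps
module tb_adder;
  logic [7:0] a, b;
  logic [7:0] sum;
  logic       carry;

  adder dut (.a(a), .b(b), .sum(sum), .carry(carry));  // hypothetical DUT

  initial begin
    // Test 1: drive known input operands
    a = 8'h12; b = 8'h34; #10;
    // Test 2: check the output result for those inputs (0x12 + 0x34 = 0x46)
    if ({carry, sum} !== 9'h046) $error("sum mismatch");
    // Test 3: check the carry operation (0xFF + 0x01 overflows)
    a = 8'hFF; b = 8'h01; #10;
    if (carry !== 1'b1) $error("carry not generated");
    $finish;
  end
endmodule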

05 Verification Methods

There are three types of design verification methods: black-box, white-box, and gray-box verification. SoC designs are verified by adopting different combinations of these methods.

Black-Box Verification

This is a verification method in which the internal details of the design implementation are not exposed to verification. Verification is accomplished by accessing only the exposed interface signals, without access to internal states or signals, making it implementation-independent. Verification is thus blind to the internal implementation details and internal state of the design or system. This method is best suited to discovering specification-interpretation issues, such as byte-order (endianness) mismatches, protocol misunderstandings, and interoperability problems.

White-Box Verification

In this method, the test bench can access the internal states, signals, and interfaces of the design. Debugging any design issue becomes much easier, since the test bench can trace signals and their drivers against expectations. The method is best suited to exercising low-level, implementation-specific scenarios and design corners where problems are likely, such as FIFO pointer wrap-around and counter overflow. Assertions are the natural fit for checking internal design behavior here. White-box verification is fully complementary to black-box verification.
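The FIFO corner cases mentioned above lend themselves to simple white-box assertions. The following sketch assumes hypothetical internal signal names (clk, wr_en, rd_en, full, empty); a real FIFO's names will differ.

module fifo_whitebox_checks (input logic clk, wr_en, rd_en, full, empty);
  // Writing while full would wrap the write pointer past the read pointer
  assert property (@(posedge clk) !(wr_en && full))
    else $error("FIFO overflow: write while full");
  // Reading while empty would wrap the read pointer (underflow)
  assert property (@(posedge clk) !(rd_en && empty))
    else $error("FIFO underflow: read while empty");
endmodule

Such a checker can be attached without modifying the design, for example with SystemVerilog's bind construct: bind fifo fifo_whitebox_checks u_chk (.clk, .wr_en, .rd_en, .full, .empty);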

Gray-Box Verification

This method lies between the black-box and white-box techniques. The test environment verifies the system at the interface level, checking the top-level I/Os, and accesses the internal design for testing and debugging where needed (for example, at design corners). Typically, the first round of testing uses the black-box method and measures functional coverage; where coverage must be improved, additional scenarios are tested with the white-box method.

06 Verification Design

As SoC design methodologies move toward the system or architecture level, transaction-level verification of system functionality across subsystems becomes crucial. SoC designs mainly integrate pre-designed, pre-verified IP cores, so verification of the internal IP resembles black-box verification. Moreover, complex SoC designs are trending toward verification-friendly design, in which internal states and key signals are latched and can be read by software through the master interface, making it possible to pinpoint the root cause of issues. This is useful in black-box and gray-box verification. Functional verification is performed differently in different environments. At the RTL level, test benches and a set of test cases are developed and run on a simulator to see whether the SoC behaves as expected; functional correctness is checked by observing the waveforms of the inputs and outputs at the interface or module/block level.

In FPGA-based hardware verification, the RTL of the design under test is ported to an FPGA on a board; limited software is run, real stimuli are fed into the SoC inputs, and the outputs are observed in the development environment.

In the development environment, a sub-module-based development platform is designed to closely resemble the final SoC interface and is verified with some more complex software.

An RTL-level test environment, or test bench, models the environment in which the SoC is most likely to be used. All such environments are built to apply stimuli as close as possible to real-world inputs. A typical RTL test bench is a closed system, as it represents a complete environment, including input stimuli driven through behavioral bus functional models (BFMs) and output checking.

Peripheral Modules

These modules are supporting modules required for completing SoC verification in the application environment. They are verification IPs or IPs for peripheral functionalities, such as external memory, data converters, and real-time sensor models.

Input Stimuli and Bus Functional Models (BFM)

Input stimuli represent the signals fed into the SoC from the external world in the application scenarios being verified. These can be system-level signals, such as clocks from reference oscillators, reset signals, sensor inputs, or data from external modules or verification IPs. Stimulus generation can be automated (for example, feeding the reference clock into the PLL module, which then generates the configured system clock frequency required by the SoC) or semi-automated, with manual triggers or conditions. Stimuli are driven into the SoC design through its interfaces, respecting the design's timing requirements, via bus functional models (BFMs).

Output BFM

The output BFM captures the SoC's response on its output interface when specific stimuli are applied. Responses are either written to files for comparison with expected outputs or checked against expectations in real time. This module can be a checker with file-comparison capability, or a waveform-database generator that lets designers view, in a graphical viewer, how the SoC design responds to the specific input conditions of a scenario and judge its correctness.

Continuous Monitors

These are additional checkpoints in the environment that indicate the SoC is operating normally. For example, in a timer SoC that generates a 1-second clock, it is easy to continuously monitor the 1 ms signal that must tick continuously for the 1-second clock to be produced. In the test environment, the test blocks are highly modular, and results are checked automatically with pass/fail decisions, which makes the setup automation-friendly. The test environment can also analyze the functional, code, and FSM coverage of the design.
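For the timer example above, a continuous monitor might look like the following sketch. The signal name tick_1ms and the 100 MHz system clock are assumptions for illustration.

`timescale 1ns/1ps
module tick_monitor (input logic clk, reset_n, tick_1ms);
  int unsigned cycles_since_tick;
  always @(posedge clk or negedge reset_n) begin
    if (!reset_n)        cycles_since_tick <= 0;
    else if (tick_1ms)   cycles_since_tick <= 0;
    else begin
      cycles_since_tick <= cycles_since_tick + 1;
      // At an assumed 100 MHz, 1 ms corresponds to 100,000 cycles;
      // flag an error if the tick is late or missing
      if (cycles_since_tick > 100_000)
        $error("1 ms tick missing at %0t", $time);
    end
  end
endmodule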

A brief description of the testing environment modules is as follows.

SoC DUT

The SoC DUT is the SoC design under test.

Design and Verification Assertions

The design and the verification test environment may contain assertions to increase the effectiveness of verification. Assertions are statements that check the timing relationships of synchronous signals in the design to ensure modules operate correctly. Design assertions, where supported, are tracked by the test bench checker module to see whether they have fired and to evaluate correctness. For example, consider a piece of logic whose function is to check whether received packets are correct, with received packets qualified by a packet_valid signal. Clearly, whenever the packet_correct or packet_error signal is generated, packet_valid should be high. Here it makes sense to write design assertions checking that packet_error or packet_correct is never asserted without packet_valid, and that packet_correct and packet_error never occur simultaneously. If such an assertion fires, the design is defective.
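A SystemVerilog assertion (SVA) sketch of these checks, using the signal names from the text, might read:

module packet_assertions (input logic clk, packet_valid,
                          packet_correct, packet_error);
  // A verdict (correct or error) must only appear while packet_valid is high
  assert property (@(posedge clk)
                   (packet_correct || packet_error) |-> packet_valid)
    else $error("packet verdict without packet_valid");
  // The two verdicts must never fire in the same cycle
  assert property (@(posedge clk) !(packet_correct && packet_error))
    else $error("packet_correct and packet_error asserted together");
endmodule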

Similar assertions can be written at the transaction level of the DUT to track the correctness of the design.

Clock/Reset Module

The clock/reset module generates the required clock and reset signals according to the SoC design requirements.
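A minimal sketch of such a module, assuming a 100 MHz clock and an active-low reset released after ten cycles (both assumptions for illustration):

`timescale 1ns/1ps
module clk_rst_gen (output logic clk, output logic reset_n);
  initial begin
    clk = 1'b0;
    forever #5 clk = ~clk;     // 10 ns period -> 100 MHz
  end
  initial begin
    reset_n = 1'b0;            // assert reset at time zero
    repeat (10) @(posedge clk);
    reset_n = 1'b1;            // release after 10 clock cycles
  end
endmodule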

Configuration

This module sets up the DUT and the test bench for the configuration needed to test the DUT.

Stimulus Generator

This module generates the input stimuli in the test bench, typically producing signals in the order and sequence the SoC functionality requires. For complex designs, the stimulus generator can itself be a complex verification IP.

Bus Functional Model (BFM)

The bus functional model drives stimuli into the SoC DUT according to the interface specification. There are as many BFMs as there are bus interfaces: if the SoC design supports UART, USB, and PCI Express interfaces, there should be corresponding BFMs to manage transactions compliant with each of these protocols.
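As a sketch, a UART-style BFM might expose a task that drives one byte on the serial line with standard 8N1 framing. The module and port names are illustrative, not taken from any particular VIP.

`timescale 1ns/1ps
module uart_bfm #(parameter integer BIT_PERIOD = 8680)  // ~115200 baud, in ns
                 (output logic txd);
  initial txd = 1'b1;                   // line idles high
  task automatic send_byte(input byte data);
    txd = 1'b0;  #BIT_PERIOD;           // start bit
    for (int i = 0; i < 8; i++) begin   // data bits, LSB first
      txd = data[i];  #BIT_PERIOD;
    end
    txd = 1'b1;  #BIT_PERIOD;           // stop bit
  endtask
endmodule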

Mailbox

These are communication mechanisms in a SystemVerilog test bench that allow messages to be exchanged between processes. A process wishing to communicate with another sends messages to the mailbox, which stores them temporarily in a system-defined memory object and then delivers them to the receiving process. A mailbox can be created with a bounded or unbounded queue size. A bounded mailbox becomes full when it reaches its defined maximum number of messages; a process attempting to put a message into a full mailbox is suspended until space becomes available in the mailbox queue. Essentially, the mailbox is a technique for synchronizing different processes. The receiving process can be a checker, as illustrated below: once the mailbox holds a predefined set of messages, they can trigger the checker to verify their contents and determine correctness.
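A minimal sketch of a bounded mailbox synchronizing a stimulus process with a checker process (the transaction type and message counts are arbitrary choices for this example):

module tb_mailbox_demo;
  mailbox #(int) mb = new(4);          // bounded: at most 4 messages

  initial begin : producer
    for (int i = 0; i < 8; i++) begin
      mb.put(i);                       // suspends while the queue is full
      $display("put %0d", i);
    end
  end

  initial begin : consumer_checker
    int msg;
    repeat (8) begin
      mb.get(msg);                     // suspends while the queue is empty
      $display("checked %0d", msg);    // a real checker would compare msg
    end                                // against an expected value
    $finish;
  end
endmodule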

Checker

The checker consolidates the results of all the checking processes, comparing DUT responses with expectations and collecting assertion and monitor outcomes, to decide the pass/fail status of each test scenario.

Test Program Interface (TPI)

This is the user interface that accepts user input as parameters and compilation options to trigger test scenarios and run simulations. The TPI supports many commands with optional parameters to execute the test-scenario simulations one after another and produce merged results; this is called regression testing.

07 Verification Examples

In this section, we present the simulation of a simple decade counter design to better illustrate the verification process.

The functionality of the decade counter: on each valid clock edge, it counts 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and then wraps back to 0. The design also requires an output signal to be generated whenever the count reaches 5.
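A plausible Verilog sketch of decade-counter.v consistent with this description (the port names clock, reset_n, count_out, and out_5 are taken from the signals discussed below; the implementation details are assumptions):

module decade_counter (
  input  wire       clock,
  input  wire       reset_n,    // active-low asynchronous reset
  output reg  [3:0] count_out,
  output wire       out_5       // asserted when the count reaches 5
);
  always @(posedge clock or negedge reset_n) begin
    if (!reset_n)
      count_out <= 4'd0;
    else if (count_out == 4'd9)
      count_out <= 4'd0;        // wrap from 9 back to 0
    else
      count_out <= count_out + 4'd1;
  end
  assign out_5 = (count_out == 4'd5);
endmodule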

The design file is saved as decade-counter.v and the test bench file as tb_dcounter.v (.v denotes a Verilog file). A simulator is used to simulate the design. Most commonly used simulators are cycle-based: they sample signals and compute the design's response every clock cycle. The simulator first analyzes and elaborates the RTL code before simulating the design.

During simulation, error and warning messages are logged to the terminal; any errors or warnings must be corrected in the design file. For the example module, the simulation should terminate successfully with no warnings or errors. Inspecting the current working directory shows that the simulation run has generated several output files, including command log files and a waveform dump named decade_counter.vcd. The .vcd file can be opened in a waveform viewer to observe the logic-state changes of the input and output signals and internal nets. For details on running simulations and using waveform viewers, refer to the respective user manuals. The design behavior is verified by observing the signals clock, reset_n, count_out, and out_5.

The verification process extends to designs of any complexity, as the next example demonstrates: verifying a self-synchronizing descrambler using a scrambler design as verification IP on the test bench. Consider a self-synchronizing scrambler with polynomial g(x) = 1 + x^13 + x^33. In communication links, a scrambler is used to break up long runs of 0s or 1s in the input data and keep the line DC-balanced. The same polynomial is used at the transmitter to scramble the data and at the receiver to descramble it, recovering the original data. The defining attribute of a self-synchronizing descrambler is that it needs no initialization vector to achieve synchronization.

The scrambler and descrambler are synchronized when their LFSRs hold the same pattern, so that when the scrambled data is fed into the descrambler, it reproduces the data originally fed into the scrambler.

The module under test is the descrambler. To test whether the descrambler synchronizes with the scrambler, the descrambler's LFSR is reset to an arbitrary initial value. Random patterns are fed into the scrambler, and the scrambled output serves as the stimulus to the descrambler input. Verification checks whether the descrambler begins decoding the input data correctly after some point in time. Note that the test bench has no ports, since it is a standalone, self-contained test environment.
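A sketch of the scrambler/descrambler pair for g(x) = 1 + x^13 + x^33 follows; the LFSR taps sit at bits 12 and 32, and the port names are illustrative. The scrambler shifts its own output into the LFSR, while the descrambler shifts in the received bit, which is why the descrambler synchronizes after 33 bits regardless of its reset value.

module scrambler (
  input  wire clock, reset_n, enable,
  input  wire data_in,
  output wire data_out
);
  reg [32:0] lfsr;
  assign data_out = data_in ^ lfsr[12] ^ lfsr[32];
  always @(posedge clock or negedge reset_n) begin
    if (!reset_n)     lfsr <= 33'h1;               // arbitrary seed
    else if (enable)  lfsr <= {lfsr[31:0], data_out};
  end
endmodule

module descrambler (
  input  wire clock, reset_n, enable,
  input  wire data_in,                 // scrambled stream
  output wire data_out                 // recovered data
);
  reg [32:0] lfsr;
  assign data_out = data_in ^ lfsr[12] ^ lfsr[32];
  always @(posedge clock or negedge reset_n) begin
    if (!reset_n)     lfsr <= 33'h0;   // deliberately different seed
    else if (enable)  lfsr <= {lfsr[31:0], data_in};
  end
endmodule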

The test bench consists of the following parts:

1. The first part of the test bench is stimulus generation, including clock, reset, enable, and data generation.

2. The second part is the scrambler module, which serves as standard verification IP.

3. The third part is the instantiation of the module under test.

4. The fourth part is the output reader and waveform dump for debugging and user verification.

A typical SoC test bench has multiple on-chip clock (OCC) generation modules and standard PLLs, the required VIPs, and control state machines that enable each of these modules in the various test scenarios. The output reader and waveform dump can be complex modules that automatically verify functional correctness according to the SoC verification requirements.

08 Verification Tools

There are many verification tools available for functional verification of SoC designs. They include:

1. Simulators

2. Coverage tools

Of the tools listed above, simulators are essential for RTL functional verification. A simulator is a tool for studying design behavior in the most likely use-case scenarios by applying test vectors from the test bench. It is software that can examine the state of the SoC design and its outputs over a required duration as user-supplied stimuli, called test vectors, are applied. There are different types of simulators: cycle-based simulators, event-based simulators, and circuit simulators. The SoC design being simulated is called the device under test (DUT). Using commands in the test bench, the simulator monitors and writes out internal logic levels, signal states, and inputs/outputs during simulation; the waveform output file is then opened in a graphical debugging environment with a waveform viewer. Different simulators are used depending on the type of SoC design. Cycle-based and event-based simulators are digital simulators, and most digital simulation uses cycle-based simulators, which sample the logic state of the design once per clock cycle, at the active clock edge, which is why they are also called cycle-accurate simulators. Simulator time steps are on the order of picoseconds or nanoseconds, allowing users to virtually simulate the concurrent behavior of hardware. For most SoC verification, cycle-based simulators are 10-100 times faster than event-based simulators. Because the design is only evaluated at clock boundaries, verification with cycle-based simulators must be complemented by static timing analysis (STA).

Event-based simulators evaluate the design whenever a logic change occurs on any net in the circuit. They are also called time-accurate simulators and are suitable for verifying small circuits. They offer a good debugging environment and need no separate timing analysis, since the design is functionally verified for every event on every node. However, event-based simulators require large compute resources: the number of nets in today's SoC designs has surged, producing an enormous number of logic transitions per simulation. Monitoring so many transitions and evaluating all their combinations is practically impossible, and debugging failures in such designs is very challenging.

Today’s SoC designs include analog modules that also need to be verified. Analog modules are verified separately using analog simulators. Analog simulators use mathematical models to represent the analog functionality of designs. Very few analog and mixed-signal simulators are available.

Simulators are generally very slow and not automated; they require designers to understand the design well and to use the tools as aids for analysis. Detailed verification of analog modules is therefore performed separately, followed by mixed-signal simulation to verify the integration in practice. Another important tool in the verification flow is the coverage analyzer. Coverage metrics provide insight into the quality and completeness of verification of the design database. There are three types of coverage: functional coverage, code coverage, and state-machine coverage. Functional coverage is obtained by analyzing the test cases run on the SoC design database against the design's functional feature list. Code coverage is extracted while running simulations to track which lines of design code are exercised. State-machine coverage reports which state transitions in the design's FSMs the test cases trigger during simulation runs. All of these metrics help verification engineers maximize coverage and meet the design verification goals.

Beyond checking the basic syntax and semantics of the HDL, lint tools check SoC designs at the RTL level against rule sets defined for different objectives. A lint tool is a static RTL code checker: it compiles the design and pre-qualifies it for simulation, synthesis, and DFT. Typical lint objectives are basic compilation of the RTL for simulation, synthesizability, and testability, with standard rules defined for each target; each rule set can be customized or extended for SoC-specific objectives. When run on design files, these tools write log files with detailed analyses of the design against the defined rules, issuing warnings and errors according to the severity of the violations. nLint and HAL are two well-known lint tools used in design centers.

09 Verification Languages

Compared with design languages, the languages used to model test benches and test cases are more permissive and flexible. The main reason is the need to create more randomness in test cases, which do not need to be synthesizable. Verilog, one of the oldest HDLs, is also used as a verification language. As design description rises to higher levels of abstraction, verification languages such as SystemVerilog, Vera, and SystemC have become the primary hardware verification languages (HVLs) at those levels. They support classes, object-oriented programming, class extension, and temporal properties, which make it easy to define system-level or transaction-level test functionality. Among them, SystemVerilog is increasingly popular for its powerful assertion constructs, a major verification feature, and it also provides constructs designed to keep synthesis and simulation results consistent. Simulation tools support these language constructs and can interpret results and analyze test coverage. They also support interfaces such as the Direct Programming Interface (DPI) to high-level software languages like C++ and Java, enabling graphical user interfaces (GUIs) and making the verification environment more generic and effective at abstraction levels up to the system level. More details can be found in the respective language books. Modern simulators are also tolerant of common designer mistakes, flagging them as warnings rather than failing outright.

10 Automation Scripts

Use-case scenarios for SoCs are created through complex test cases with random stimuli, since real-world scenarios are random. With random stimuli, the response becomes unpredictable, so such tests are judged by predicting the final outcome or state, by analyzing the statistics and stability of the system, and at predictable intermediate states. This requires keeping the input randomness consistent with the constantly changing system state, and to manage that correspondence, test cases are automated. Automation means that test expectations, such as data integrity, states, and the hand-off to the next random scenario, are controlled and evaluated automatically. This is achieved with scripting languages, the most common being Perl, Tcl, PHP, and so on. Scripting languages are programming languages for special runtime environments that automate tasks a user would otherwise perform one by one. EDA tools understand these scripts and can integrate them into the test setup. Automation is also used for large-scale data-integrity checks, statistical analysis, and batch-running test cases to reach the expected functional coverage. Test scripts are interpreted rather than compiled.

11 Design Verification

Design verification ensures design quality by discovering potential errors in the system design and architecture. This can only be achieved by thoroughly simulating all functionality of the system while carefully examining any possible erroneous behavior, which demands the most time, attention, and comprehensive knowledge of the design's use cases. For a complex design, this becomes very challenging and requires the design itself to be verifiable. Designers must fully understand the functional implementation of the design: if they identify the key design corners and states, verification can be targeted to monitor and check them. Some scenarios may require very long simulation runs to reach design limits, and verification engineers may not be aware of this. A simple example is overflow of a 32-bit counter clocked once per second: this takes prohibitively long to simulate, even though it can occur in actual hardware. If the designer provides a way to preload the counter, verifying the 32-bit overflow becomes feasible. Such design accommodations allow the design to be verified in more scenarios, so designers must identify the key design corners that can be verified. Furthermore, non-functional characteristics of the system, such as scalability, extensibility, and flexibility, require additional design support for verification. Examples include memory address expansion, writing to registers or memories in non-default modes via software access, and alternative configurations provided to work around potential misinterpretation issues (e.g., little-endian versus big-endian).

12 Assertions in Verification

Implementing assertions in a design requires a conscious decision to look at the design process differently: assertions are additional design and verification statements that monitor the correctness of a specific part of the design. Assertions demonstrably reduce debugging time and effort. They act as early warnings during simulation, flagging potential issues that would otherwise surface as test failures, or worse, go undetected. Assertions on module interfaces quickly identify invalid behavior caused by improper use of behavioral models or of the design itself (invalid register settings, invalid operating modes, and so on); such failures may indicate problems in the test bench, helping verification engineers fix the test bench as well as the design. Design assertions help locate the root cause of a failure by pointing at the incorrect behavior behind it. For instance, constrained-random simulation may hit design corners, such as FIFO overflow or underflow, that directed tests usually miss; simple assertions on the FIFO interface signals can detect simultaneous read and write, more reads than writes, and so on. This identifies the real root cause of a failing scenario without lengthy debug sessions. A further advantage of assertions is that they preserve design intent beyond the original design and verification owners, making designs and verification test benches reusable.

13 Verification Reuse and Verification IP

Just like reusable design modules, verification modules can be reused across generations of SoC designs. Since multiple interface protocol modules are part of an SoC, for example several USB, SPI, and UART cores, the corresponding test modules can be reused within the test bench. The bus functional models (BFMs) and interface cores in the test bench can even be reused to verify multiple SoCs with the same functionality. This also helps address shrinking time-to-market windows and the design-productivity gap. As SoC functionality grows more complex, the many integrated cores must comply with numerous standards and interoperate; over the past decades, reference models have been developed to check compliance with these standard specifications. They are known as verification IPs (VIPs): pre-verified or certified against the standard or protocol specification, and available for license or purchase from IP developers. VIPs are integrated into the test environment as standard IP, and the SoC is tested against them to demonstrate compliance and interoperability. Reuse of verification IP is common practice in SoC verification.

14 Universal Verification Methodology (UVM)

The Universal Verification Methodology (UVM) is an industry-standard verification methodology for defining, reusing, and improving verification environments while reducing verification cost. It provides application programming interfaces (APIs) to the components of its base class library (BCL), making verification environments reusable and tool-independent. UVM-based environments are flexible enough for creating varied tests, analyzing coverage, and enabling reuse. UVM standardization improves interoperability, reduces the cost of re-licensing and rewriting IP for each new SoC design or verification tool, and makes verification components easier to reuse. Overall, UVM standardization reduces verification costs across the industry and improves design quality. Importantly, it is implemented in SystemVerilog, the most widely used language in complex SoC design verification.
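For orientation, a minimal UVM test skeleton might look as follows; the test name and info message are placeholders, and a real test would additionally build an environment and start sequences.

import uvm_pkg::*;
`include "uvm_macros.svh"

class my_test extends uvm_test;
  `uvm_component_utils(my_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    phase.raise_objection(this);      // keep the simulation alive
    `uvm_info("MY_TEST", "stimulus would run here", UVM_LOW)
    phase.drop_objection(this);       // allow the test to end
  endtask
endclass

// Typically started from the top module with: run_test("my_test");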

15 Defects and Debugging

Bugs are defects in the system, and the quality of an SoC design depends directly on the defects or errors hidden within it. As mentioned earlier, the cost of testing at each higher design or development stage (RTL, physical design, layout, chip, circuit board, system, fielded system) is at least ten times that of the stage below, so it is wise to detect defects at the earliest possible stage. Bugs are unwanted states or behaviors in specific scenarios; they can be intermittent or permanent, and they arise for various reasons. The main one is that designers cannot always interpret requirements as intended (see the famous tree-swing cartoon on requirements interpretation), compounded by numerous implicit, unstated requirements. Defects may also slip through because of how verification engineers interpret system requirements and how completely they can create test cases for the full set of use-case scenarios. In addition, human errors and tool errors during design transformations can introduce problems. It is crucial to formally document and manage bugs during design, development, and field deployment so that they are fixed and do not recur.
