Radiant / Third-Party Simulation Tools: General Guidance on Library Compilation Methods and IP Generation

Description:

This FAQ complements Application Note FPGA-AN-02084 by offering practical guidance for using Synopsys VCS and Cadence Xcelium in Lattice FPGA simulations. It expands on the original note with real-world examples involving Lattice IPs and focuses on key stages like library compilation, elaboration, and functional simulation. Designed to address common issues and share best practices, the FAQ helps teams streamline workflows, enhance verification coverage, and ensure smooth integration of third-party tools into their simulation environments.

Method 1: Library compilation using cmpl_libs

Although VCS and Xcelium are compatible with Lattice designs, they are not part of Lattice’s officially recommended simulation environment, which is Questasim Lattice Edition. Many users still opt for VCS or Xcelium due to existing workflows or project-specific needs. Lattice Radiant’s flexible architecture supports integration with various industry-standard simulators, enabling users to continue using their preferred tools while benefiting from Radiant’s features. As detailed in Application Note FPGA-AN-02084, simulation libraries can be compiled using the cmpl_libs TCL command, either via: the Radiant GUI’s TCL Console, or the Radiant Command-Line Interface (radiantc).
Users must specify the simulator, device family, and software version to ensure proper library generation. For full instructions, refer to Radiant Help > Running cmpl_libs.tcl from the Command Line.

cmpl_libs* options (values for each simulator):

  -sim_path
      VCS:     <vcs_installation>/synopsys/VCS/<vcs_version>/bin
      Xcelium: <xcelium_installation>/<xcelium_version>/Linux/tools.lnx86/bin

  -sim_vendor
      VCS:     synopsys
      Xcelium: cadence

  -device** (same list for both simulators)
      {iCE40UP|lifcl|lfcpnx|lfd2nx|lfd2nx1|lfd2nx2|lfmxo5|lfmxo5t|lfmxo5t1|lav_ate_es|lav_atg_es|lav_atx_es|lav_ate_b|ln2_mh|ln2_ct|lav_ate|lav_atg|lav_atx|ut24c|ut24cp|all (default)}

  -64***
      -64 (same for both simulators)

  -target_path
      <user_defined> (same for both simulators)

      
Notes:
○ No Environment Variable Configuration Required*: Compiling simulation libraries with the cmpl_libs TCL command does not require manually setting environment variables. The tool internally handles the mapping and organization of the generated library files, streamlining setup and reducing the risk of misconfiguration.
○ Device List May Vary**: The list of supported devices available for library compilation may differ depending on the specific version of Lattice Radiant and the release cycle of Lattice FPGA families. As Lattice continues to introduce new devices and update existing ones, always refer to the latest Radiant documentation or device support list to confirm compatibility with your target FPGA.
○ -64 option***: This option instructs the simulator to run in 64-bit mode, which allows it to handle larger memory spaces and datasets than 32-bit mode. It is useful for simulations involving large designs or extensive testbenches, improving performance and stability.
○ -lang option: The -lang option from the original application note is no longer applicable in recent Radiant versions and may be ignored during execution.
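As a concrete illustration, the options above can be assembled into a single cmpl_libs invocation. The sketch below only builds and prints the command string; the VCS installation path, version, and output directory are hypothetical placeholders, not values from this FAQ.

```shell
# Sketch only: assemble a cmpl_libs call for VCS and a Nexus (lifcl) device.
# SIM_PATH and TARGET are assumed placeholder locations.
SIM_PATH="/tools/synopsys/VCS/T-2022.06/bin"   # assumed VCS bin directory
TARGET="./radiant_sim_libs"                    # user-defined output directory
CMD="cmpl_libs -sim_path $SIM_PATH -sim_vendor synopsys -device lifcl -64 -target_path $TARGET"
# Paste the printed command into the Radiant TCL Console, or run it via radiantc.
echo "$CMD"
```

For Xcelium, the same line would use -sim_vendor cadence and the Xcelium bin directory instead.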

After invoking the cmpl_libs command, the output libraries and a setup file specific to the target device and target simulator are produced. These files can be used in one of the two approaches below:

      a. VCS: using synopsys_sim.setup:  The synopsys_sim.setup file is a key configuration resource for Synopsys VCS simulations, centralizing library and design unit definitions to simplify simulation management. It helps avoid the need to manually specify each file on the command line, making it ideal for handling complex projects. When generated via cmpl_libs, VCS automatically detects this file in the working directory, allowing for cleaner and more efficient simulation commands. For detailed usage, refer to Radiant Help > Performing Simulation with Synopsys VCS.

      b. Xcelium: using -reflib option: The -reflib option in the xrun command specifies a read-only reference library containing pre-compiled design units like modules and IP blocks. This improves efficiency and avoids re-compilation, especially in large or collaborative projects. Typically used with libraries generated by cmpl_libs, it ensures safe access during elaboration without risk of modification. For detailed instructions, refer to Radiant Help > Performing Simulation with Cadence Xcelium.
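A minimal sketch of the two hookup styles, assuming the libraries were already compiled with cmpl_libs; the testbench file, library directory, and device name below are hypothetical placeholders. The script only assembles and prints the commands:

```shell
# Sketch: the two library-hookup styles after cmpl_libs has run.
# tb_top.sv, ./radiant_sim_libs, and lifcl are assumed example names.

# a. VCS: run from the directory containing synopsys_sim.setup;
#    VCS detects the file automatically, so no library switch is needed.
VCS_CMD="vcs -full64 -sverilog tb_top.sv -top tb_top"

# b. Xcelium: point -reflib at the compiled device library explicitly.
XRUN_CMD="xrun -64bit -sv -reflib ./radiant_sim_libs/lifcl tb_top.sv -top tb_top"

echo "$VCS_CMD"
echo "$XRUN_CMD"
```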

Method 2: Manual Setting of FPGA libraries for VCS and Xcelium

While cmpl_libs automates simulation library compilation, designers can instead reference the libraries shipped in Radiant’s cae_library/simulation directory, reusing the existing source files rather than maintaining compiled copies. However, this method may lead to slower simulations because the library sources are recompiled on every run. To use this manual approach, the FOUNDRY environment variable must be set to point to the Radiant installation path:
  1. $ setenv FOUNDRY <radiant_installation>/<radiant_version>/ispfpga

This ensures proper access to Lattice-specific IPs and models. For setup details, refer to Radiant Help > Setting Up the Environment to Run Command Line.
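The setenv line above is csh/tcsh syntax; for bash or sh users, the equivalent export is sketched below. The Radiant installation path is an assumed example, not a required location:

```shell
# Sketch: FOUNDRY must point at the ispfpga directory inside the Radiant
# installation. The path below is a hypothetical example location.
export FOUNDRY="/opt/lscc/radiant/2024.2/ispfpga"
echo "FOUNDRY=$FOUNDRY"
```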

VCS:

vcs \
+incdir+<radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/ \
+incdir+<radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/ \
+incdir+<radiant_installation>/<radiant_version>/ip/pmi/ \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/<device_architecture>.f \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/<uaplatform | applatform>.f \
-f <radiant_installation>/<radiant_version>/ip/pmi/pmi.f \

Xcelium:

xrun \
-incdir <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/ \
-incdir <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/ \
-incdir <radiant_installation>/<radiant_version>/ip/pmi/ \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/<device_architecture>.f \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/<uaplatform | applatform>.f \
-f <radiant_installation>/<radiant_version>/ip/pmi/pmi.f \

Note:

  1. uaplatform = Nexus Devices ; applatform = Avant and Mach-N2 devices
  2. -error=noMPD controls the missing port declaration message in VCS; the Xcelium equivalent is -nowarn UIMPTD

Guidance on Server IP installation and IP Generation

Lattice IP cores are categorized into Foundation IPs and Server IPs: Foundation IPs are built into Radiant and located in the ip/ directory. They’re license-free and ideal for quick prototyping and general use. Server IPs require a license and offer advanced features suited for production-level designs. For details on IP generation and management, refer to Radiant Help > IP Generation.

      Syntax for the IP Generation (ipgen) command:

      Syntax for generating Foundation IPs:
  1. $ ipgen -o <user_defined_path> -ip <radiant_installation_path>/lscc/radiant/<version>/ip/<architecture>/<ip_name> -name <user_defined_name> -a <device_family> -p <target_device> -t <device_package> -sp <speed_grade> -op <operating_condition>
      Syntax for generating Server IPs from IP repository:
  1. $ ip_catalog_list -server
  2. $ ip_catalog_install -vlnv latticesemi.com:ip:<ip_name>:<version>
  3. $ ipgen -o <user_defined_path> -ip <user_defined_path>/RadiantIPLocal/latticesemi.com_ip_<ip_name>_<version> -name <user_defined_name> -a <device_family> -p <target_device> -t <device_package> -sp <speed_grade> -op <operating_condition>
Note: If the user has a pre-defined set of parameters that is valid for the IP configuration, they may use -cfg <configuration_name>.cfg to customize the IP.
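To make the syntax concrete, the sketch below assembles a hypothetical Foundation-IP ipgen call; the output path, IP name (fifo_dc), instance name, and Radiant path are placeholders chosen for illustration, and the angle-bracket fields are left as in the syntax above. The script only builds and prints the command string:

```shell
# Sketch: a filled-in ipgen command for a Foundation IP. OUT, IP, and the
# instance name my_fifo are hypothetical; <...> fields remain placeholders.
OUT="./my_ip_out"
IP="/opt/lscc/radiant/2024.2/ip/<architecture>/fifo_dc"   # assumed IP path
CMD="ipgen -o $OUT -ip $IP -name my_fifo -a <device_family> -p <target_device> -t <device_package> -sp <speed_grade> -op <operating_condition>"
echo "$CMD"
```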

Creation of Input File list and structure for invoking 'vcs' and 'xrun' commands

Beyond library setup via cmpl_libs or manual referencing, additional command-line options are critical for building complete and error-free simulation commands in VCS or Xcelium. These switches configure the simulation environment to meet specific design needs.
Key options include flags for include paths, macros, library linking, time resolution, debugging, and waveform generation. Proper use ensures smooth elaboration and execution, especially for complex IPs or third-party tool integration.
Mastering these options helps designers fine-tune simulations, resolve issues efficiently, and ensure accurate DUT-testbench interaction.
Guidance on command options and switches for VCS and Xcelium

Structure for Invoking VCS or XRUN commands:
            VCS Syntax:      vcs [options] <source_files> -o [output_executable]
            XRUN Syntax:  xrun [options] <source_files> <output_files>

Option                                           VCS                                 Xcelium
64-bit compilation                               -full64                             -64bit
SystemVerilog support                            -sverilog                           -sv
Debug access                                     -debug_access+all                   -access +rwc
Timescale                                        -timescale=1ns/1ps                  -timescale 1ns/1ps
Library extensions (.v, .vh, .vhl, .sv, .svh)    +libext+.v+.vh+.vhl+.sv+.svh        -libext .v,.vh,.vhl,.sv,.svh
Work library                                     -work <library_name>                -worklib <library_name>
Output executable                                -o <path>/<name>_simv               (not applicable)
Log file                                         -l <logfile>                        -l <logfile>
Graphical user interface                         $ simv -gui (only after VCS runs)   -gui


General structure for invoking the vcs or xrun commands, including command options and switches. Each invocation follows the same order: command, compile options, elaboration/library options, output, and logs.

VCS:

vcs \
-full64 -sverilog -debug_access+all -timescale=1ns/1ps +libext+.v+.sv \
-o <user_defined>/simv \
-l <user_defined>/output.log \

<Method 1: copy the synopsys_sim.setup created by cmpl_libs into the simulation directory; no additional library switches are needed.>

<Method 2: reference the Radiant libraries directly; modify this part depending on the method you use:>
+incdir+<radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/ \
+incdir+<radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/ \
+incdir+<radiant_installation>/<radiant_version>/ip/pmi/ \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/<device_architecture>.f \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/<uaplatform | applatform>.f \
-f <radiant_installation>/<radiant_version>/ip/pmi/pmi.f \
<Method 2: end>

<header_files> \
<design_files> \
<testbench_files> \
-top <name_of_top_file>

Xcelium:

xrun \
-64bit -sv -timescale 1ns/1ps -libext .v,.sv \
-l <user_defined>/output.log \
-gui \

<Method 1: reference the library compiled by cmpl_libs:>
-reflib <target_path>/<device> \
<Method 1: end>

<Method 2: reference the Radiant libraries directly; modify this part depending on the method you use:>
-incdir <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/ \
-incdir <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/ \
-incdir <radiant_installation>/<radiant_version>/ip/pmi/ \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<device_architecture>/<device_architecture>.f \
-f <radiant_installation>/<radiant_version>/cae_library/simulation/verilog/<uaplatform | applatform>/<uaplatform | applatform>.f \
-f <radiant_installation>/<radiant_version>/ip/pmi/pmi.f \
<Method 2: end>

<header_files> \
<design_files> \
<testbench_files> \
-top <name_of_top_file>
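Putting the structure together, a minimal Method 1 invocation for each simulator might look like the sketch below; design.v, tb_top.sv, the log names, and the library directory are hypothetical placeholders, and for VCS the synopsys_sim.setup file is assumed to sit in the current directory. The script only assembles and prints the commands:

```shell
# Sketch: minimal end-to-end Method 1 commands following the structure above.
# All file names and paths are assumed examples.
VCS_CMD="vcs -full64 -sverilog -debug_access+all -timescale=1ns/1ps -o ./simv -l ./vcs.log design.v tb_top.sv -top tb_top"
XRUN_CMD="xrun -64bit -sv -timescale 1ns/1ps -l ./xrun.log -reflib ./radiant_sim_libs/lifcl design.v tb_top.sv -top tb_top"

echo "$VCS_CMD"
echo "$XRUN_CMD"
```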


Common Errors and Pitfalls

• Error: Unsupported option -64! - This issue typically occurs with cmpl_libs versions earlier than R2025.1. For Xcelium, if xrun is executed without the -64bit flag, then cmpl_libs should also omit the -64 option; conversely, if xrun includes -64bit, then cmpl_libs must also use -64. Starting with R2025.1, it is advisable to always include the -64 option. Moreover, this flag is required for Avant and other newer Lattice FPGA families.
• Error: Unresolved modules …/ip/pmi/pmi_ram_dq.v, 104 - This typically happens when PMI source files are added without compiling the entire pmi/ directory. To resolve this, include the pmi directory using +incdir (VCS) or -incdir (Xcelium), and also use the -y option so the simulator searches the directory for unresolved modules.
• Error: Source file cannot be opened - This usually indicates that the file is either missing or the specified path is incorrect.
• Error: Identifier not declared - This is often triggered when using the -v2k_generate switch, which enforces stricter scoping rules.
• Error: Cross-referenced module error in tb_top.v - This is commonly due to the absence of a GSR instantiation in the top-level module. For Avant devices, it is recommended to verify and include a GSRA instance in the top module.
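The -y fix for the unresolved-PMI-modules error above can be sketched as a small set of switches; the Radiant installation path is an assumed placeholder. The script only builds and prints the flags:

```shell
# Sketch: VCS switches that let the simulator search the pmi directory for
# unresolved modules. PMI_DIR is a hypothetical Radiant install path.
# Xcelium uses the analogous -incdir / -y / -libext forms.
PMI_DIR="/opt/lscc/radiant/2024.2/ip/pmi"
PMI_FLAGS="+incdir+$PMI_DIR -y $PMI_DIR +libext+.v"
echo "$PMI_FLAGS"
```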

For further insights into frequent issues with third-party simulation tools, refer to FPGA-AN-02084 > Common Pitfalls.