Bringup and Verification

zwabbit edited this page Sep 15, 2014 · 17 revisions

MIAOW was developed using Synopsys VCS, so the majority of the infrastructure for running tests and analysis is built around VCS. When building or running the tests, it is assumed that the user has sourced the environment script that comes with a VCS install. The build also uses Perl for template expansion of certain files. Building MIAOW has been tested on Linux systems only.

#### Building MIAOW

Prerequisites:

  • Perl
  • Synopsys VCS simulator

The following steps build MIAOW and its associated testbench:

  1. Clone the MIAOW repo to your local machine. In the steps below, $TOP_DIR refers to the top-level directory of the MIAOW checkout.
  2. Go to the testbench folder of the repo:
    cd $TOP_DIR/src/verilog/tb
  3. Source the environment script that comes with your VCS install.
    Ex: source setup_synopsys (setup_synopsys is the environment script for VCS)
  4. make clean - removes the artifacts of any previous build.
  5. make build - generates Verilog files from the .vp templates in $TOP_DIR/src/verilog/rtl/<module> and creates a build directory with the Verilog files under each <module>.
  6. make compile - compiles all the Verilog files and produces a VCS executable, ./simv, in the tb directory. If no file named simv is generated, the compilation failed; check the log for RTL errors.

Running VCS simulations (unit tests and benchmarks):

The unit tests focus on verifying the functional correctness of the implemented Southern Islands instructions. The benchmarks include standard AMD APP OpenCL kernels and some Rodinia suite workloads (only 5 are supported as of now). The golden traces for these unit tests and benchmarks are generated beforehand using a functional emulator from Multi2Sim (www.multi2sim.org). A detailed explanation of how to generate the instruction traces, configuration files, and instruction and memory dumps for unit tests and benchmarks will be released in future documentation. For now, the golden traces for the unit tests and benchmarks are already included in the MIAOW release.

  • The steps here assume that the instruction traces and configuration files have already been generated and are available at:
    $TOP_DIR/src/sw/{benchmarks, miaow_unit_tests}.
  • It is also assumed that you have already compiled the Verilog files and that the VCS executable simv exists.

The following steps walk you through running unit tests and benchmarks on MIAOW:

  1. cd $TOP_DIR/src/verilog/tb

  2. run.pl in the tb folder is the script that runs tests and benchmarks on the MIAOW GPU. Invoke it as follows to select a test from a test group and write the results to an OUT_DIR:

    run.pl -r <TEST_GROUP> -t <TEST_NAME> -o <OUT_DIR>

    where:
    TEST_GROUP - the test group under the $TOP_DIR/src/sw/ hierarchy:
    0 - benchmarks
    1 - miaow_unit_tests
    2 - rodinia

    TEST_NAME - a regular expression that matches the test name. To run a single test, give its exact name; to run all the tests, give *.
    Ex:
    "Binary" executes the BinarySearch benchmark
    "test_00" executes all the unit tests whose names start with test_00
    [See the $TOP_DIR/src/sw/{benchmarks, miaow_unit_tests} folders for the full list of test names]

    OUT_DIR - the name of the directory where the results will be written. After the test finishes, the results are in the $TOP_DIR/src/verilog/tb/results/<OUT_DIR> folder.

Example: run.pl -r 0 -t BinarySearch -o bs_1
Runs the BinarySearch benchmark and writes the results to the $TOP_DIR/src/verilog/tb/results/bs_1 folder.

Use run.pl -h for details about other options, including -w for dumping waveforms.
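The invocation and a pass/fail check can be wrapped in a small shell function. run_and_check below is hypothetical (not part of MIAOW) and assumes that summary.txt records PASSED or FAILED per test, as the results layout describes.

```shell
# Hypothetical wrapper around run.pl; run from $TOP_DIR/src/verilog/tb.
# Assumes results/<OUT_DIR>/summary.txt reports PASSED/FAILED per test.
run_and_check() {
    group=$1; name=$2; out=$3
    ./run.pl -r "$group" -t "$name" -o "$out" || return 1
    grep -E 'PASSED|FAILED' "results/$out/summary.txt"
}

# Examples:
#   run_and_check 0 BinarySearch bs_1   # one benchmark
#   run_and_check 1 test_00 unit_00     # unit tests matching test_00
```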

The contents of $TOP_DIR/src/verilog/tb/results/<OUT_DIR> are explained below:

  • The trace comparison summary, which reports whether each test PASSED or FAILED, can be found at: results/<OUT_DIR>/summary.txt
  • Test run log can be found at: results/<OUT_DIR>/<test_name>/run.log
  • Generated trace with list of instructions can be found at: results/<OUT_DIR>/<test_name>/tracemon.out
  • Reference (Golden) trace with list of instructions can be found at: results/<OUT_DIR>/<test_name>/<test_name>_trace
  • If there are multiple kernels in a test, traces are generated in a per-kernel folder, i.e. results/<OUT_DIR>/<test_name>/kernel_X/tracemon_X_X_X.out,
    where kernel_X refers to the kernel number and the three X's in tracemon_X_X_X refer to the kernel, workgroup, and wavefront numbers respectively.
  • Generated trace with only opcodes can be found at: results/<OUT_DIR>/<test_name>/kernel_X/<test_name>_trace_X_X_X.verilog
  • Reference (golden) trace with only opcodes can be found at: results/<OUT_DIR>/<test_name>/kernel_X/<test_name>_trace_X_X_X.gold
  • To diff the two and find where a test diverges, run:
    diff <test_name>_trace_X_X_X.gold <test_name>_trace_X_X_X.verilog
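The per-kernel diff step can be automated with a small helper. diff_traces below is hypothetical (not shipped with MIAOW); it simply walks every .gold/.verilog pair under one test's result directory and reports mismatches.

```shell
# Hypothetical helper: diff every golden/generated opcode-trace pair
# under one test's result directory (results/<OUT_DIR>/<test_name>).
diff_traces() {
    dir=$1
    status=0
    for gold in "$dir"/kernel_*/*.gold; do
        [ -e "$gold" ] || continue        # no kernel folders: nothing to do
        gen=${gold%.gold}.verilog         # matching generated trace
        if ! diff -q "$gold" "$gen" >/dev/null 2>&1; then
            echo "MISMATCH: $gold"
            status=1
        fi
    done
    return $status
}

# Usage: diff_traces results/bs_1/BinarySearch
```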

Opening waveforms using the VCS DVE for debugging:

  1. Make sure the VCS dve executable is in your $PATH.
  2. cd $TOP_DIR/src/verilog/tb/results/<OUT_DIR>/<test_name>
  3. make browse
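The DVE steps above can be wrapped with a guard so a missing dve fails loudly instead of inside make. browse_waves is a hypothetical helper; OUT_DIR and TEST_NAME are the placeholders used throughout this page.

```shell
# Hypothetical helper wrapping the DVE steps above; assumes $TOP_DIR is
# set and the test's results (with a dumped waveform) already exist.
browse_waves() {
    command -v dve >/dev/null 2>&1 || { echo "dve not in PATH" >&2; return 1; }
    ( cd "$TOP_DIR/src/verilog/tb/results/$1/$2" && make browse )
}

# Usage: browse_waves bs_1 BinarySearch
```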

### Synthesis - TO BE UPDATED SOON
To perform hardware analysis such as area and power, a separate set of scripts is needed.

### Generating Multi2Sim traces - TO BE UPDATED SOON

This section will describe the steps to generate golden reference traces using the Multi2Sim simulator.
