##Bringup and Verification
MIAOW was developed using Synopsys VCS 2012; as such, the majority of the infrastructure for running tests and analysis is built around VCS. When building or running the tests, it is assumed that the user has sourced the environment script that comes with a VCS install. The build also makes use of Perl for template expansion of certain files. Building MIAOW has been tested on Linux systems only.
If you encounter difficulties building MIAOW with a different version of VCS or if you wish to develop build infrastructure for a different toolchain, we would welcome collaboration and patches.
####Files
Very briefly, the following maps a few of the top-level directories and Verilog files to the architecture diagram of the compute unit. For a lower-level and more detailed explanation of the architecture, please refer to the MIAOW Architecture Whitepaper.
The compute_unit directory provides the wrapper for everything seen in the diagram save for the instruction buffer, LDS, and GPU memory. While there is an instr_buffer directory, it is primarily for simulation purposes and is one of the modules that would need technology specific replacements. The wavepool directory is effectively self-explanatory in what it maps to. The Decode and Schedule stage is composed of the decode, issue, and exec directories. Decode and issue are more or less self-explanatory. The exec module is used to generate masks to indicate which of the ALUs in the vector ALU will actually have a thread running on them.
The ALUs themselves are broken down even further. The control logic is located in their respective folders: salu for the scalar ALU, and simd and simf for the two vector ALU variants, one supporting integer and the other supporting floating-point operations. The actual ALU implementation shared by all of them, however, is located in the alu directory. The ALU module is a parameterized implementation that changes depending on what the instantiating statement sets.
The remaining memory resources, the scalar and vector register files and the load store unit, are implemented in the respective sgpr, vgpr, and lsu directories.
Of the remaining directories, the following is a quick rundown.
- common - Shared logic used by several modules.
- fpga - FPGA specific logic.
- memory - Software based memory implementation used for simulation purposes.
- rfa - Register access mediator.
- tracemon - Testing harness used to generate instruction traces.
####Building MIAOW
Dependencies involved:
- PERL
- Synopsys VCS Simulator
The following are step-by-step instructions for building MIAOW and running various unit tests and benchmarks.
- Clone MIAOW to a local directory. We will reference it as `$TOP_DIR` in the following steps.
- Go to the testbench folder: `cd $TOP_DIR/src/verilog/tb`
- Source the environment script of a VCS install. Ex: `source setup_synopsys` (setup_synopsys is the environment script for VCS)
- `make clean` - Ensures a clean working directory.
- `make build` - Creates verilog files from the .vp files in `$TOP_DIR/src/verilog/rtl/<module>`. Creates a build directory with verilog files under each `<module>`.
- `make compile` - Compiles all the verilog files and creates a VCS executable ./simv in the tb directory. If there is no file named simv, there was an error in the compilation process. Check the build logs to determine the cause and resolve it before trying again.
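The build steps above can be condensed into a small shell sketch. This assumes VCS is installed, its environment script has already been sourced, and `$TOP_DIR` points at the clone; the `check_simv` and `build_miaow` helper names are ours, not part of the MIAOW repository.

```shell
# Sketch of the build flow described above (assumes a sourced VCS
# environment and $TOP_DIR set). check_simv and build_miaow are
# hypothetical helpers, not shipped with MIAOW.
check_simv() {
    # 'make compile' should leave an executable simv in the given
    # directory; its absence means compilation failed.
    [ -x "$1/simv" ]
}

build_miaow() {
    cd "$TOP_DIR/src/verilog/tb" || return 1
    make clean      # ensure a clean working directory
    make build      # expand the .vp templates into verilog files
    make compile    # compile everything into ./simv
    check_simv "$TOP_DIR/src/verilog/tb" || {
        echo "no simv produced; check the compile logs" >&2
        return 1
    }
}
```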
MIAOW comes with a collection of unit tests and benchmarks. The unit tests verify functional correctness and conformance with the Southern Islands instructions. The benchmarks are a collection of programs drawn from the AMD APP SDK as well as the Rodinia suite. Included with the benchmarks and unit tests are instruction traces and memory dumps generated by Multi2Sim. These traces are the references used to verify the functional correctness of MIAOW. How to generate these reference traces is detailed in the Extension and Verification walkthrough.
Please note that only the unit tests are included with the MIAOW repository. The benchmarks are a separate download due to their sheer size, located here: Benchmarks. To run the benchmarks, put the contents of the zip into a benchmarks folder under the `$TOP_DIR/src/sw` directory.
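Since the benchmarks are a separate download, it is easy to end up with the wrong directory layout; a quick existence check like the one below can catch that before a run. The `benchmarks_installed` helper name is ours, and it only verifies that the directory described above exists.

```shell
# Verify the benchmark zip was unpacked to the expected location,
# $TOP_DIR/src/sw/benchmarks. benchmarks_installed is a hypothetical
# convenience helper, not part of the MIAOW repository.
benchmarks_installed() {
    [ -d "$1/src/sw/benchmarks" ]
}

# Example: benchmarks_installed "$TOP_DIR" || echo "unpack the benchmarks first"
```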
- The steps here assume that the instruction traces and configuration files are already generated and are available at `$TOP_DIR/src/sw/{benchmarks, miaow_unit_tests}`.
- It is also assumed that you have already compiled the verilog files and have a vcs executable (simv) ready.

The following steps walk you through running unit tests and benchmarks on MIAOW:
- `cd $TOP_DIR/src/verilog/tb`
- The run.pl script in the tb directory runs tests and benchmarks on the MIAOW GPU. Use it to select tests from a specified category and write the results to an OUT_DIR:

`run.pl -r <TEST_GROUP> -t <TEST_NAME> -o <OUT_DIR>`

where:
- TEST_GROUP - the test group present under the `$TOP_DIR/src/sw/` hierarchy:
  - 0 - benchmarks
  - 1 - miaow_unit_tests
  - 2 - rodinia
- TEST_NAME - a regular expression that matches the test name. To run a single test, type its name; to run every test, type *.
  Ex: "Binary" executes the BinarySearch benchmark; "test_00" executes all unit tests whose names start with test_00. [See the `$TOP_DIR/src/sw/{benchmarks, miaow_unit_tests}` directories for other test names.]
- OUT_DIR - the name of the directory where the results will be written. After the test finishes, OUT_DIR will be found at `$TOP_DIR/src/verilog/tb/results/<OUT_DIR>`.

Example: `run.pl -r 0 -t BinarySearch -o bs_1` - runs BinarySearch and dumps the results in the `$TOP_DIR/src/verilog/tb/results/bs_1` folder.

Use `run.pl -h` for details about other options, including -w for dumping waveforms.
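Once a run finishes, the quickest sanity check is to grep the summary file for failures. The snippet below is a convenience sketch: it assumes only that summary.txt under the results directory labels tests with the PASSED/FAILED words, and `summary_failed` is our own helper name, not part of run.pl.

```shell
# Quick pass/fail check over a run.pl results directory. Assumes
# summary.txt labels each test PASSED or FAILED; summary_failed is a
# hypothetical helper, not part of the MIAOW tooling.
summary_failed() {
    grep -q 'FAILED' "$1"
}

# Example, after: ./run.pl -r 1 -t 'test_00' -o unit_00
#   summary_failed results/unit_00/summary.txt && echo "at least one test failed"
```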
The content of `$TOP_DIR/src/verilog/tb/results/<OUT_DIR>` is explained below:
- Trace comparison summary, showing whether each test FAILED or PASSED: `results/<OUT_DIR>/summary.txt`
- Test run log: `results/<OUT_DIR>/<test_name>/run.log`
- Generated trace with the list of instructions: `results/<OUT_DIR>/<test_name>/tracemon.out`
- Reference (golden) trace with the list of instructions: `results/<OUT_DIR>/<test_name>/<test_name>_trace`
- If there are multiple kernels in a test, traces are generated in each kernel folder, i.e. `results/<OUT_DIR>/<test_name>/kernel_X/tracemon_X_X_X.out`, where kernel_X refers to the kernel number and the three X's in tracemon_X_X_X refer to the kernel number, workgroup number, and wavefront number respectively.
- Generated trace with only opcodes: `results/<OUT_DIR>/<test_name>/kernel_X/<test_name>_trace_X_X_X.verilog`
- Reference (golden) trace with only opcodes: `results/<OUT_DIR>/<test_name>/kernel_X/<test_name>_trace_X_X_X.gold`
- To diff the two and find where a test is failing: `diff <test_name>_trace_X_X_X.gold <test_name>_trace_X_X_X.verilog`
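To make the gold-vs-generated comparison concrete, here is a toy diff. The trace contents below are fabricated stand-ins, not real tracemon output, and the filenames merely mimic the naming pattern above.

```shell
# Toy illustration of comparing a golden trace against a generated one.
# The instruction lines are made-up stand-ins for real trace output.
printf 's_mov_b32\ns_add_u32\nv_mov_b32\n' > demo_trace_0_0_0.gold
printf 's_mov_b32\ns_sub_u32\nv_mov_b32\n' > demo_trace_0_0_0.verilog

# diff exits non-zero and shows the first divergent instruction, which
# is typically where debugging should start:
diff demo_trace_0_0_0.gold demo_trace_0_0_0.verilog || true
```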
Opening waveforms using VCS DVE for debugging:
Running benchmarks and tests produces a variety of outputs, including instruction traces and waveforms. To view a waveform using DVE, follow the steps below:
- Make sure `vcs` and `dve` are in your $PATH.
- `cd $TOP_DIR/src/verilog/tb/results/<OUT_DIR>/<test_name>`
- `make browse`
###Synthesis - TO BE UPDATED SOON (By End October 2014)
To perform hardware analysis such as area and power, a separate set of scripts is needed.