Running Tests

Steven Herbst edited this page Oct 7, 2020 · 4 revisions

Here's how to run regression tests on the Stanford infrastructure:

  1. Install the "Miniconda Linux 64-bit" installer if you don't have it already (link here)
  2. Create and activate a conda environment for dragonphy:
conda create -n dragonphy python=3.7
conda activate dragonphy
  3. Upgrade pip to get rid of an annoying warning:
pip install -U pip
  4. Set up the CAD environment (assumes that you're using bash; steps copied from .buildkite/pipeline.yml):
source /cad/modules/tcl/init/bash
module load base xcelium dc_shell
export DW_TAP=/cad/synopsys/syn/L-2016.03-SP5-5/dw/sim_ver/DW_tap.v
export BUILD_VIEW=cpu
  5. Clone the dragonphy source if you haven't already:
git clone https://github.com/StanfordVLSI/dragonphy2.git
cd dragonphy2
  6. Install dependencies (steps copied from regress.sh). If you already have Genesis2 or DaVE installed elsewhere, that's fine; just make sure they're up to date and that the environment variables shown below point to your installations. These environment variables can be defined in .bashrc so that you don't have to repeat this step in each new terminal session.
# install Genesis2
git clone https://github.com/StanfordVLSI/Genesis2.git
export GENESIS_HOME=`realpath Genesis2/Genesis2Tools`
export PERL5LIB="$GENESIS_HOME/PerlLibs/ExtrasForOldPerlDistributions"
export PATH="$GENESIS_HOME/bin:$PATH"
export PATH="$GENESIS_HOME/gui/bin:$PATH"
/bin/rm -rf $GENESIS_HOME/PerlLibs/ExtrasForOldPerlDistributions/Compress

# install DaVE
git clone --single-branch --branch pwl_cos https://github.com/StanfordVLSI/DaVE.git
export mLINGUA_DIR=`realpath DaVE/mLingua`
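If you do persist these variables in .bashrc, the exports above can be collected into one block. A minimal sketch, assuming both repos were cloned under a single directory (DRAGONPHY_DEPS is a hypothetical variable introduced here for convenience; adjust the paths to your setup):

```shell
# Hypothetical ~/.bashrc snippet; DRAGONPHY_DEPS is an assumed location --
# point it at the directory where you cloned Genesis2 and DaVE.
export DRAGONPHY_DEPS="${DRAGONPHY_DEPS:-$HOME/dragonphy-deps}"
export GENESIS_HOME="$DRAGONPHY_DEPS/Genesis2/Genesis2Tools"
export PERL5LIB="$GENESIS_HOME/PerlLibs/ExtrasForOldPerlDistributions"
export PATH="$GENESIS_HOME/bin:$GENESIS_HOME/gui/bin:$PATH"
export mLINGUA_DIR="$DRAGONPHY_DEPS/DaVE/mLingua"
```

With this in place, a fresh terminal picks up the variables automatically and the one-time clone/cleanup commands above only need to be run once.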
  7. Install the DragonPHY package itself:
pip install -e .
  8. Build the chip design (things like the JTAG register map...):
python make.py --view cpu
  9. Install dependencies for testing:
pip install pytest
  10. Run all tests:
pytest tests/other_tests tests/cpu_block_tests tests/cpu_system_tests -s
  11. If you want to run a specific test, you can do something like this:
pytest tests/new_tests/GLITCH -s
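Beyond pointing pytest at a directory, pytest's standard selection flags can narrow things down further. A quick self-contained demo using a throwaway test file (the /tmp path and test names are made up for illustration; in the repo you'd target a directory like tests/new_tests/GLITCH instead):

```shell
# Create a throwaway test file to demonstrate pytest's selection flags.
mkdir -p /tmp/pytest_selection_demo
cat > /tmp/pytest_selection_demo/test_demo.py <<'EOF'
def test_alpha():
    assert 1 + 1 == 2

def test_beta():
    assert "y" in "xyz"
EOF

# List the tests without running them:
pytest /tmp/pytest_selection_demo --collect-only -q

# Run only tests whose names match an expression (-k), with output capture
# disabled (-s) and stopping at the first failure (-x):
pytest /tmp/pytest_selection_demo -k "alpha" -s -x
```

The -s flag used throughout this page disables pytest's output capture, so simulator logs stream to the terminal as the test runs.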