TW Experimentation is a library for designing experiments, checking data, running statistical tests, and making decisions.
The experimentation library can help you with:
- Sample Size Calculator: estimate how many observations you need to run an experiment with the given parameters (a generic illustration of this calculation follows this list).
- Integrity checks + Evaluation: inspect key plots to understand your data better, then run the statistical evaluation. The library automatically detects whether a metric is binary or continuous, applies the appropriate method, and applies multiple hypothesis correction if needed.
- Evaluation (Bayesian A/B testing): apply Bayesian evaluation to your dataset.
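As a rough illustration of what a sample size calculation involves (this sketch uses statsmodels directly and is not the tw-experimentation API; the baseline and uplift values are made up), you could estimate the observations needed per variant to detect a given uplift in a conversion rate:

```python
# Minimal sketch of a sample size estimate, NOT the tw-experimentation API:
# observations per variant needed to detect an uplift with a two-sample z-test.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical inputs: baseline conversion 10%, expected uplift to 12%.
effect_size = proportion_effectsize(0.10, 0.12)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_variant))
```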
With TW Experimentation you can design your experiments, choose a sample size, and evaluate the results.
Various statistical tests (frequentist and Bayesian) are available for the metrics you provide, either through Jupyter notebooks or through a Streamlit app with a user-friendly interface. Using this repo you can:
- Run frequentist evaluation
- Run Bayesian evaluation
- Apply multiple hypothesis correction (a generic sketch of this idea follows the list)
- Remove outliers
- Check different plots
- Run segmentation to evaluate specific segments, and find unusual segments with our other tool, wise-pizza
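To build intuition for the multiple hypothesis correction step, here is a generic sketch using statsmodels (not the library's internal implementation), with made-up p-values for three metrics:

```python
# Sketch of multiple hypothesis correction with Benjamini-Hochberg;
# this is not the tw-experimentation internals, only an illustration.
from statsmodels.stats.multitest import multipletests

p_values = [0.01, 0.04, 0.20]  # hypothetical raw p-values, one per metric
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(reject, p_adjusted)
```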
- You can easily install the package using pip:
pip install tw-experimentation
After that, all of the functionality is available.
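To check that the installation worked, you can try importing the package from Python (the module name tw_experimentation matches the package folder referenced later in this README):

```python
# Smoke test: the import should succeed once the package is installed.
import tw_experimentation
print(tw_experimentation.__name__)
```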
To run the Streamlit app, open a terminal and run:
run_tw_experimentation_streamlit
- Alternatively, you can install the package via the dependency manager Poetry after cloning the repository (or downloading it as a zip).
To do so, clone the repository by running
git clone git@github.com:transferwise/tw-experimentation.git
from a terminal. To set up Poetry, run
make set-up-poetry-mac
for Mac (or Linux), and
make set-up-poetry-windows
for Windows. Then run
make run-streamlit-poetry
from the root of the package folder.
- Alternative: TW Experimentation requires the libraries listed in the .yml file. To install these requirements, make sure you have the package manager Anaconda installed, then run the following commands in a terminal:
conda env create -n <my_env> -f envs/environment.yml
conda activate <my_env>
If you are using Windows, please follow these additional steps:
- Pick a jaxlib-0.3.7 wheel from https://whls.blob.core.windows.net/unstable/index.html and install it manually (pip install <wheel_url>)
- Install jax==0.3.7
If you have any problems with jax on Mac, please do the following:
pip uninstall jax jaxlib
conda install -c conda-forge jaxlib
conda install -c conda-forge jax
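To confirm that jax is working after these steps, you can print its version from Python:

```python
# Verify the jax installation by importing it and printing the version.
import jax
print(jax.__version__)
```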
Make sure you have followed the installation instructions.
You can use the Jupyter notebooks 1_pre_experiment.ipynb or 2_integrity_checks + evaluation.ipynb for experiment design and evaluation.
The tw_experimentation package can be used for many tasks, for example analyzing experiment results:
# Load the experiment data and build the data model for analysis.
# Note: the import path for ExperimentDataset may differ between versions;
# check the package documentation if this import fails.
import pandas as pd

from tw_experimentation.utils import ExperimentDataset

df = pd.read_csv('experiment.csv')

ed = ExperimentDataset(
    data=df,  # experiment data as a pandas DataFrame
    variant="T",  # column holding the variant assignment
    targets=['conversion', 'revenue'],  # outcome metrics to evaluate
    date='trigger_dates',  # column holding the trigger/assignment date
    pre_experiment_cols=None,  # optional pre-experiment columns
    n_variants=2,  # number of variants in the experiment
)
ed.preprocess_dataset(remove_outliers=True)
This code builds the data model for experiment analysis. You can then run a frequentist evaluation:
# FrequentistEvaluation must be imported from tw_experimentation first;
# check the package documentation for the exact module path.
evaluation = FrequentistEvaluation(ed)  # pass the ExperimentDataset created above
evaluation.start()
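If you just want to try the workflow above without your own experiment.csv, a small synthetic dataframe in the same layout can be built with pandas and numpy (the column names below simply mirror the example above; the values are made up):

```python
# Build a hypothetical dataframe in the layout used in the example above:
# a variant assignment column, two target metrics, and a trigger date column.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "T": rng.integers(0, 2, size=n),              # 0 = control, 1 = treatment
    "conversion": rng.binomial(1, 0.1, size=n),   # binary target
    "revenue": rng.exponential(scale=20.0, size=n),  # continuous target
    "trigger_dates": pd.Timestamp("2023-01-01")
    + pd.to_timedelta(rng.integers(0, 30, size=n), unit="D"),
})
print(df.head())
```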
To run the Streamlit app after installing via pip (see the installation section above), open a terminal and run:
run_tw_experimentation_streamlit
Alternatively, open a terminal, navigate to the repository, and then to the folder ./tw_experimentation/streamlit.
Now run the command streamlit run Main.py and the app should open in your browser.
For testing, you can load the test dataset from data/test_data.csv. You should either set the fields as shown in the screenshot below, or click on the "Load configuration from json" section and load the whole configuration from the file data/test_config.json.
Tip on navigation:
- ls - show files in the current directory
- pwd - print the current directory path
- cd - change directory, e.g. cd ./tw_experimentation/streamlit
We use PyTest for testing. If you want to contribute code, make sure that the tests in tests/ run without errors.
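For example, assuming pytest is installed in your environment, you can run the test suite from the repository root with:
pytest tests/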