
learning-to-bound

Implementation of the paper "Improving Variable Orderings of Approximate Decision Diagrams using Reinforcement Learning".

A challenge in the design of scalable algorithms is the efficient computation of tight optimization bounds. Decision diagrams (DDs) provide a novel mechanism for obtaining flexible, high-quality bounds. This work presents a generic framework, based on deep reinforcement learning, for improving the optimization bounds of dynamic programs via DD technology: it learns variable orderings that are shown experimentally to tighten the bounds obtained from restricted and relaxed DDs. A DD-based branch-and-bound using the tight bounds generated by the trained models is presented and applied to the maximum independent set problem (MISP).
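To make the restricted-DD idea concrete, here is a minimal sketch (not taken from this repository, which implements it in C++; the function and variable names are hypothetical, and unit vertex weights are assumed). Vertices are processed in a given order, and each layer keeps at most `max_width` states; every surviving path encodes a feasible independent set, so the result is a valid primal (lower) bound — and the chosen ordering affects how tight it is:

```python
def misp_restricted_dd_bound(adj, order, max_width):
    """Restricted decision diagram for the maximum independent set problem.

    adj: list of sets, adj[v] = neighbours of vertex v.
    order: sequence of all vertices, the variable ordering.
    A state is the frozenset of vertices still eligible for inclusion;
    its value is the size of the best independent set reaching it.
    """
    n = len(adj)
    layer = {frozenset(range(n)): 0}  # root: every vertex still eligible
    for v in order:
        nxt = {}
        for state, val in layer.items():
            # branch 1: exclude v
            s_out = state - {v}
            nxt[s_out] = max(nxt.get(s_out, -1), val)
            # branch 2: include v (only if still eligible);
            # v and its neighbours become ineligible
            if v in state:
                s_in = state - ({v} | adj[v])
                nxt[s_in] = max(nxt.get(s_in, -1), val + 1)
        # restriction step: keep only the max_width most promising states,
        # discarding the rest (this is what makes the bound approximate)
        layer = dict(sorted(nxt.items(), key=lambda kv: -kv[1])[:max_width])
    return max(layer.values())
```

For example, on the 4-vertex path graph `adj = [{1}, {0, 2}, {1, 3}, {2}]` the optimum is 2, and the restricted DD can never report more than that; with a generous width it recovers the optimum exactly.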

Content of the repository

.
├── learning-DD/                            # learning scripts
│   ├── graphnn/                            # graph neural network library
│   └── models/
│       └── misp-random/                    # implementation for the maximum independent set
│           ├── config_run.sh               # training configuration file
│           ├── run_misp_training_random.sh # training launcher
│           ├── code/
│           └── results-local/              # generated models and learning curves
└── indepset/                               # branch-and-bound scripts
    ├── instances/                          # maximum independent set instances
    └── sources/
        ├── src/                            # branch-and-bound sources
        ├── model/                          # models used during branch-and-bound
        └── run_bb.sh                       # branch-and-bound launcher

Installation

1. Cloning the repository

git clone https://github.com/corail-research/learning-to-bound.git

2. Graphnn library

This library was developed by Dai et al. and is used without modification. See https://github.com/Hanjun-Dai/graphnn for instructions on how to build it.

3. Building

Build the dynamic library (from the repository root):

cd learning-DD/models/misp-random/code
cp Makefile_example Makefile
make
cd ../../../../indepset/sources/code/
cp Makefile_example Makefile
make

4. Virtual environment

We use conda (https://conda.io/docs/index.html) to manage virtual environments. Create and activate the environment with:

conda create -n learning-to-bound-env python=3.6
conda install --name learning-to-bound-env numpy networkx matplotlib
conda activate learning-to-bound-env

Once done, you can deactivate the virtual environment with:

conda deactivate

Getting Started

We provide a GNN trainer for generating tight DDs and a branch-and-bound solver that uses the trained models; their basic usage is described below.

1. Training a model

cd learning-DD/models/misp-random/
./run_misp_training_random.sh # training with config_run parameters

2. Solving a MISP instance

cd indepset/sources/code/
./run_test.sh # example on a random graph with a pre-trained model
