# Training-Free Multi-Objective Evolutionary Neural Architecture Search via Neural Tangent Kernel and Number of Linear Regions
Ngoc Hoang Luong, Tu Do
In ICONIP 2021.
- Clone this repo:

```shell
git clone https://github.com/MinhTuDo/TF-MOENAS.git
cd TF-MOENAS
```

- Install dependencies:

```shell
pip install -r requirements.txt
```
- Follow the instructions in the [NAS-Bench-101 repository](https://github.com/google-research/nasbench) to install the benchmark files for NAS-Bench-101.
- Follow the instructions in the [NAS-Bench-201 repository](https://github.com/D-X-Y/NAS-Bench-201) to install the benchmark files for NAS-Bench-201.
- Optional: to evaluate IGD on the optimal front during a NAS run on NAS-Bench-101, download the pre-computed benchmark query data and put it in the `data` folder.
- Remember to set the benchmark paths correctly in the config files; the default data path is `~/.torch` (a minimal download sketch follows this list).
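For reference, here is a minimal shell sketch of fetching the benchmark data under the assumed default `~/.torch` path. The NAS-Bench-101 tfrecord URL is from the official nasbench release; the NAS-Bench-201 `.pth` file must be downloaded manually from the link in its repository:

```shell
# NAS-Bench-101: the 108-epoch tfrecord from the official nasbench release.
mkdir -p ~/.torch
wget https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord -P ~/.torch

# NAS-Bench-201: download NAS-Bench-201-v1_1-096897.pth manually from the
# Google Drive link in the NAS-Bench-201 repository, then move it here:
mv NAS-Bench-201-v1_1-096897.pth ~/.torch/
```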
Search on NAS-Bench-101:

```shell
# Baseline MOENAS
python search.py -cfg config/baseline_moenas-101.yml --n_evals 5000 --pop_size 50 --loops_if_rand 30 -sw --use_archive

# Training-free MOENAS
python search.py -cfg config/tf_moenas-101.yml --n_evals 5000 --pop_size 50 --loops_if_rand 30 -sw --use_archive
```

Search on NAS-Bench-201:

```shell
# Baseline MOENAS
python search.py -cfg config/baseline_moenas-201.yml --n_evals 3000 --pop_size 50 --loops_if_rand 30 -sw --use_archive

# Training-free MOENAS
python search.py -cfg config/tf_moenas-201.yml --n_evals 3000 --pop_size 50 --loops_if_rand 30 -sw --use_archive
```
To evaluate the IGD score on the pre-computed optimal front during the search, simply provide the `--eval_igd` flag.
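For context, the training-free variant scores architectures with the two proxies in the paper's title: the condition number of the empirical Neural Tangent Kernel (trainability) and the number of linear regions (expressivity). Below is a minimal, self-contained sketch of both on a toy network; the function names and implementation details are illustrative assumptions, not this repository's API:

```python
import torch
import torch.nn as nn


def ntk_condition_number(net: nn.Module, x: torch.Tensor) -> float:
    """Condition number of the empirical NTK Gram matrix on a small batch."""
    logits = net(x)
    grads = []
    for i in range(logits.size(0)):
        # Gradient of sample i's summed logits w.r.t. all parameters.
        g = torch.autograd.grad(logits[i].sum(), list(net.parameters()),
                                retain_graph=True)
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    jac = torch.stack(grads)               # (batch, n_params) Jacobian
    ntk = jac @ jac.t()                    # empirical NTK Gram matrix
    eigvals = torch.linalg.eigvalsh(ntk)   # eigenvalues in ascending order
    # Guard against numerically zero smallest eigenvalue.
    return (eigvals[-1] / eigvals[0].clamp_min(1e-12)).item()


def count_linear_regions(net: nn.Module, x: torch.Tensor) -> int:
    """Count distinct ReLU activation patterns over a batch of inputs.

    Each unique sign pattern over all ReLU units corresponds to one linear
    region that the batch of inputs falls into; more is better.
    """
    patterns = []
    hooks = [
        m.register_forward_hook(
            lambda _m, _inp, out: patterns.append((out > 0).flatten(1)))
        for m in net.modules() if isinstance(m, nn.ReLU)
    ]
    with torch.no_grad():
        net(x)
    for h in hooks:
        h.remove()
    # One binary code per input; each unique code is one linear region hit.
    codes = torch.cat(patterns, dim=1).to(torch.int8)
    return torch.unique(codes, dim=0).size(0)


if __name__ == "__main__":
    # Toy MLP standing in for a candidate architecture.
    net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                        nn.Linear(32, 16), nn.ReLU(),
                        nn.Linear(16, 10))
    x = torch.randn(16, 8)
    print("NTK condition number:", ntk_condition_number(net, x))
    print("Linear regions hit by the batch:", count_linear_regions(net, x))
```

In the actual search, such proxies stand in for trained accuracy as optimization objectives, which removes network training from the search loop.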
For a customized search, additional configurations can be modified through the YAML config files in the `config` folder.
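For orientation, here is a purely hypothetical sketch of what such a config might contain; the actual key names are defined by the files under `config/` (e.g. `config/tf_moenas-201.yml`) and may differ:

```yaml
# Hypothetical illustration only -- consult the actual files under config/.
benchmark:
  name: nasbench201
  path: ~/.torch/NAS-Bench-201-v1_1-096897.pth  # benchmark data location
search:
  pop_size: 50      # population size (overridable via --pop_size)
  n_evals: 3000     # evaluation budget (overridable via --n_evals)
```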
Code inspired by:
- NASBench: A Neural Architecture Search Dataset and Benchmark
- NASBench-PyTorch
- NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size
- Pymoo: Multi-Objective Optimization in Python
- Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
- Automated Deep Learning Projects (AutoDL-Projects)