Vega v1.2.0 released:
Feature enhancement:
- Fine-grained network search space: The network search space can be freely defined, and a rich set of network architecture parameters is provided for use in it. Network architecture parameters and model training hyperparameters can be searched at the same time, and the search space works with PyTorch, TensorFlow, and MindSpore.
New algorithm:
- NAGO: Neural Architecture Generator Optimization: a hierarchical graph-based neural architecture search space
Community Contributors: Chen Bo, cndylan, hasanirtiza, IlyaTrofimov, Lzc06, marsggbo, mengzhibin, qixiuai, SHUHarold, sptj.
Vega is an AutoML algorithm toolchain developed by Noah's Ark Laboratory. Its main features are as follows:
- Full pipeline capabilities: The AutoML capabilities cover key functions such as Hyperparameter Optimization, Data Augmentation, Network Architecture Search (NAS), Model Compression, and Fully Train. These functions are highly decoupled and can be configured as required to construct a complete pipeline.
- Industry-leading AutoML algorithms: Provides the industry-leading algorithms developed by Noah's Ark Laboratory (see the Benchmark) and a Model Zoo from which state-of-the-art (SOTA) models can be downloaded.
- Fine-grained network search space: The network search space can be freely defined, and a rich set of network architecture parameters is provided for use in it. Network architecture parameters and model training hyperparameters can be searched at the same time, and the search space works with PyTorch, TensorFlow, and MindSpore.
- High-concurrency neural network training capability: Provides high-performance trainers to accelerate model training and evaluation.
- Multi-backend support: PyTorch, TensorFlow, and MindSpore (trial)
Category | Algorithm | Description | Reference |
---|---|---|---|
NAS | CARS: Continuous Evolution for Efficient Neural Architecture Search | A multi-objective, efficient neural architecture search method based on continuous evolution | ref |
NAS | NAGO: Neural Architecture Generator Optimization | A hierarchical graph-based neural architecture search space | ref |
NAS | SR-EA | An automatic network architecture search method for super-resolution | ref |
NAS | ESR-EA: Efficient Residual Dense Block Search for Image Super-resolution | Multi-objective image super-resolution based on network architecture search | ref |
NAS | Adelaide-EA: SEGMENTATION-Adelaide-EA-NAS | Network Architecture Search Algorithm for Image Segmentation | ref |
NAS | SP-NAS: Serial-to-Parallel Backbone Search for Object Detection | An efficient serial-to-parallel backbone architecture search algorithm for object detection and semantic segmentation | ref |
NAS | SM-NAS: Structural-to-Modular NAS | Two-stage object detection architecture search algorithm | Coming soon |
NAS | Auto-Lane: CurveLane-NAS | An end-to-end framework search algorithm for lane detection | ref |
NAS | AutoFIS | An automatic feature selection algorithm for recommender system scenarios | ref |
NAS | AutoGroup | An algorithm that automatically learns feature interactions for recommender system scenarios | ref |
Model Compression | Quant-EA: Quantization based on Evolutionary Algorithm | An automatic mixed-bit quantization algorithm that uses an evolutionary strategy to quantize each layer of a CNN | ref |
Model Compression | Prune-EA | Automatic channel pruning algorithm using evolutionary strategies | ref |
HPO | ASHA: Asynchronous Successive Halving Algorithm | A dynamic successive halving hyperparameter optimization algorithm | ref |
HPO | TPE: Tree-structured Parzen Estimator Approach | A hyperparameter optimization algorithm based on tree-structured Parzen estimation | ref |
HPO | BO: Bayesian Optimization | Bayesian optimization algorithm | ref |
HPO | BOHB: Hyperband with Bayesian Optimization | Hyperband with Bayesian Optimization | ref |
HPO | BOSS: Bayesian Optimization via Sub-Sampling | A universal hyperparameter optimization algorithm, based on the Bayesian optimization framework, for resource-constrained hyperparameter search | ref |
Data Augmentation | PBA: Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules | Data augmentation based on PBT optimization | ref |
Data Augmentation | CycleSR: Unsupervised Image Super-Resolution with an Indirect Supervised Path | An unsupervised style transfer algorithm for low-level vision problems | ref |
Fully Train | Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks | Neural network training (regularization) based on feature map distortion | ref |
Fully Train | Circumventing Outliers of AutoAugment with Knowledge Distillation | Joint knowledge distillation and data augmentation for training high-performance classification models; achieved 85.8% top-1 accuracy on ImageNet 1k | Coming soon |
Install Vega and the open-source software that Vega depends on:

```bash
pip3 install --user noah-vega
python3 -m vega.tools.install_pkgs
```
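A quick way to confirm the installation is to import the package (a minimal sketch; whether a `__version__` attribute is defined may vary by release, so it is read defensively):

```python
# Minimal installation check: the import succeeding is the main signal.
# The __version__ attribute is an assumption and may be absent.
import vega

print(getattr(vega, "__version__", "installed (no __version__ attribute)"))
```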
For more details, please refer to the installation guide. If you want to deploy Vega in a local cluster, see the deployment guide.
Vega is highly modularized: the search space and search algorithm are configured in a pipeline manner, and running a Vega application means loading a configuration file and executing the AutoML process it describes. Vega provides detailed examples for your reference; see the examples. For example, to run the CARS algorithm:
```bash
cd examples
python3 ./run_pipeline.py ./nas/cars/cars.yml -b pytorch
```
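The same pipeline can also be launched from Python. The snippet below assumes the top-level `vega.run()` entry point shown in Vega's examples; treat the exact signature as an assumption if your version differs:

```python
# Sketch: launch the CARS pipeline programmatically instead of via
# run_pipeline.py. Assumes the top-level vega.run() entry point; the
# config path is the same YAML file used on the command line.
import vega

vega.run('./nas/cars/cars.yml')
```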
Therefore, before using Vega, you need to fully understand the meaning of the configuration items. For details, see the Configuration Guide.
Note:
Before running an example, you need to set the directories of the dataset and pre-trained models in the algorithm configuration file. Please refer to the Example Reference.
The Vega framework components are decoupled, and each functional component is plugged in through a registration mechanism, which makes it easy to extend functions and algorithms. For details about the Vega architecture and its main mechanisms, see the Developer Guide.
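To make the registration idea concrete, here is a minimal, generic sketch of the pattern (illustrative only; it is not Vega's actual factory API, whose class and module names are documented in the Developer Guide):

```python
# Generic registration pattern: components register themselves under a name,
# and the framework later instantiates them from configuration by name.
# This mirrors the idea behind Vega's mechanism but is not its actual API.
class Registry:
    def __init__(self):
        self._classes = {}

    def register(self, name=None):
        def decorator(cls):
            self._classes[name or cls.__name__] = cls
            return cls
        return decorator

    def create(self, name, **kwargs):
        return self._classes[name](**kwargs)

NETWORKS = Registry()

@NETWORKS.register()
class MyBackbone:
    def __init__(self, depth=18):
        self.depth = depth

# A configuration file can now refer to the component purely by name.
net = NETWORKS.create("MyBackbone", depth=34)
```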
In addition, you can refer to the Quick Start Guide to implement a simple network search function and get started with Vega application development through hands-on practice.
When developing a Vega application, the first problem is how to bring your own dataset into the application. For details, see the Dataset Guide.
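As a rough orientation, with the PyTorch backend your data ultimately needs to be exposed through a dataset-style interface. The sketch below is a plain `torch.utils.data.Dataset`, not Vega's own dataset base class; the real interface and its configuration are described in the Dataset Guide:

```python
# Illustrative only: the general shape of wrapping image files as a dataset
# for a PyTorch backend. Vega defines its own dataset base classes and wires
# them in via configuration; see the Dataset Guide for the real API.
import os

from PIL import Image
from torch.utils.data import Dataset

class MyImageDataset(Dataset):
    def __init__(self, root, transform=None):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))]
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        image = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(image) if self.transform else image
```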
For adding new algorithms, refer to the Algorithm Development Guide. Following the examples provided there, you can add a new algorithm to Vega step by step.
In most AutoML algorithms, the search space is closely related to the network. We try to unify the definition of the search space so that the same search space can be adapted to different search algorithms; this is described in the Fine-Grained Search Space Guide. You are welcome to try it.
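To make the idea concrete, the sketch below shows one search space that mixes network architecture parameters with training hyperparameters, so a single search algorithm can sample both jointly. The `key`/`type`/`range` schema and the specific keys are assumptions chosen for illustration, not Vega's exact configuration syntax; the Fine-Grained Search Space Guide documents the real definition.

```python
# Hypothetical sketch only: a joint search space covering training
# hyperparameters and network architecture parameters. The keys, types,
# and ranges are invented for illustration and do not follow Vega's
# exact schema.
search_space = {
    "hyperparameters": [
        # training hyperparameters
        {"key": "trainer.optimizer.params.lr", "type": "FLOAT_EXP", "range": [1e-4, 1e-1]},
        {"key": "trainer.batch_size", "type": "CATEGORY", "range": [32, 64, 128]},
        # network architecture parameters
        {"key": "network.backbone.depth", "type": "CATEGORY", "range": [18, 34, 50]},
        {"key": "network.backbone.base_channel", "type": "CATEGORY", "range": [32, 48, 64]},
    ]
}
```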
Of course, this documentation cannot answer every question. If you run into problems, please feel free to give feedback through an issue; we will reply and resolve your problems in a timely manner.
Audience | Reference |
---|---|
User | Install Guide, Deployment Guide, Configuration Guide, Examples, Evaluate Service |
Developer | Developer Guide, Quick Start Guide, Dataset Guide, Algorithm Development Guide, Fine-Grained Search Space Guide |
For common problems and exception handling, please refer to the FAQ.
If you use Vega in your work, please cite:

```
@misc{wang2020vega,
      title={VEGA: Towards an End-to-End Configurable AutoML Pipeline},
      author={Bochao Wang and Hang Xu and Jiajin Zhang and Chen Chen and Xiaozhi Fang and Ning Kang and Lanqing Hong and Wei Zhang and Yong Li and Zhicheng Liu and Zhenguo Li and Wenzhi Liu and Tong Zhang},
      year={2020},
      eprint={2011.01507},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
Welcome to use Vega. If you have any questions, need help, want to fix bugs, contribute algorithms, or improve the documentation, please submit an issue in the community; we will reply and communicate with you in a timely manner. You are also welcome to join our QQ chatroom (Chinese): 833345709.