---
output: github_document
---
```{r setup, include=FALSE}
library(knitr)
library(gifski)
opts_knit$set(upload.fun = imgur_upload, base.url = NULL) # upload all images to imgur.com
opts_chunk$set(fig.width=5, fig.height=5, cache=TRUE)
```
# mlrMBO
<!-- Please edit README.Rmd -->
Model-based optimization with [mlr](https://github.com/mlr-org/mlr/).
[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/mlrMBO)](https://cran.r-project.org/package=mlrMBO)
[![Travis build status](https://img.shields.io/travis/mlr-org/mlrMBO/master?logo=travis&style=flat-square&label=Linux)](https://travis-ci.org/mlr-org/mlrMBO)
[![AppVeyor build status](https://img.shields.io/appveyor/ci/mlr-org/mlrMBO?label=Windows&logo=appveyor&style=flat-square)](https://ci.appveyor.com/project/mlr-org/mlrMBO)
[![Coverage Status](https://img.shields.io/codecov/c/github/mlr-org/mlrMBO/master.svg)](https://codecov.io/github/mlr-org/mlrMBO?branch=master)
[![Monthly RStudio CRAN Downloads](https://cranlogs.r-pkg.org/badges/mlrMBO)](https://CRAN.R-project.org/package=mlrMBO)
* [Documentation](https://mlr-org.github.io/mlrMBO/)
* [Issues, Requests and Bug Tracker](https://github.com/mlr-org/mlrMBO/issues)
# Installation
We recommend installing the official release version:
```{r, eval = FALSE}
install.packages("mlrMBO")
```
For experimental use, you can install the latest development version:
```{r, eval = FALSE}
remotes::install_github("mlr-org/mlrMBO")
```
# Introduction
```{r animation, message = FALSE, warning = FALSE, echo=FALSE, eval=TRUE, fig.width=7, fig.height=4, animation.hook='gifski'}
set.seed(1)
library(ggplot2)
library(mlrMBO)
configureMlr(show.learner.output = FALSE)
pause = interactive()
# 1-d cosine mixture test function from smoof, converted to a minimization problem
fn = makeCosineMixtureFunction(1)
obj.fun = convertToMinimization(fn)
# MBO control: 10 iterations, expected improvement optimized via focus search
ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(), opt = "focussearch", opt.focussearch.points = 500L, opt.restarts = 1L)
# initial latin hypercube design with 5 points
design = generateDesign(5L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design,
  control = ctrl, points.per.dim = 1000, show.info = TRUE)
# render each MBO iteration as one frame of the animation
for (i in 1:10) {
  plotExampleRun(run, iters = i, pause = pause, densregion = TRUE, gg.objects = list(theme_bw()))
}
```
`mlrMBO` is a highly configurable R toolbox for model-based / Bayesian optimization of black-box functions.

Features (a minimal usage sketch follows the list):

* EGO-type algorithms (Kriging with expected improvement) on purely numerical search spaces, see [Jones et al. (1998)](http://link.springer.com/article/10.1023/A:1008306431147)
* Mixed search spaces with numerical, integer, categorical and subordinate parameters
* Arbitrary parameter transformations, e.g., to optimize on a log scale
* Optimization of noisy objective functions
* Multi-criteria optimization with approximated Pareto fronts
* Parallelization through multi-point batch proposals
* Parallelization on many parallel back-ends and clusters through [batchtools](https://github.com/mllg/batchtools) and [parallelMap](https://github.com/berndbischl/parallelMap)
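As a quick start, here is a minimal sketch of a plain single-objective run on a toy 2-d sphere function (the objective, its name, and all settings are purely illustrative, not recommended defaults):

```{r, eval = FALSE}
library(mlrMBO)  # also attaches mlr, ParamHelpers and smoof

# toy objective: 2-d sphere, purely for illustration
obj.fun = makeSingleObjectiveFunction(
  name = "my_sphere",
  fn = function(x) sum(x^2),
  par.set = makeNumericParamSet("x", len = 2L, lower = -5, upper = 5)
)

# control object: terminate after 10 sequential iterations
ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = 10L)

# initial latin hypercube design
des = generateDesign(n = 8L, par.set = getParamSet(obj.fun), fun = lhs::maximinLHS)

res = mbo(obj.fun, design = des, control = ctrl)
res$x  # best parameter setting found
res$y  # corresponding objective value
```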
For the *surrogate*, `mlrMBO` allows any regression learner from [`mlr`](https://github.com/mlr-org/mlr) (see the sketch after this list), including:

* Kriging, a.k.a. Gaussian processes (e.g., via `DiceKriging`)
* Random forests (e.g., via `randomForest`)
* and many more...
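To swap the surrogate, pass any `mlr` regression learner to `mbo()`; uncertainty-based infill criteria additionally need `predict.type = "se"`. A sketch, reusing `obj.fun`, `des` and `ctrl` from the quick-start example above:

```{r, eval = FALSE}
# random forest surrogate with standard-error estimation
surrogate = makeLearner("regr.randomForest", predict.type = "se")

res = mbo(obj.fun, design = des, learner = surrogate, control = ctrl)
```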
Various *infill criteria* (a.k.a. _acquisition functions_) are available (see the sketch after this list):

* Expected improvement (EI)
* Upper/lower confidence bound (LCB, a.k.a. statistical lower or upper bound)
* Augmented expected improvement (AEI)
* Expected quantile improvement (EQI)
* API for custom infill criteria
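Each built-in criterion has a constructor whose result is plugged into the control object via `setMBOControlInfill()`. A sketch (the `cb.lambda` value is only an illustration):

```{r, eval = FALSE}
ctrl = makeMBOControl()
# expected improvement ...
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
# ... or a confidence bound with an explicit exploration weight
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritCB(cb.lambda = 2))
```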
Objective functions are created with package [smoof](https://github.com/jakobbossek/smoof), which also offers many test functions for example runs or benchmarks.
Parameter spaces and initial designs are created with package [ParamHelpers](https://github.com/berndbischl/ParamHelpers).
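As an illustrative sketch of such a definition, here is a mixed numeric/categorical space with a log-scale transformation (the parameter names and the dummy objective are made up for this example):

```{r, eval = FALSE}
# numeric parameter searched on a log2 scale, plus a categorical parameter
par.set = makeParamSet(
  makeNumericParam("cost", lower = -10, upper = 10, trafo = function(x) 2^x),
  makeDiscreteParam("kernel", values = c("linear", "radial"))
)

# dummy objective; has.simple.signature = FALSE passes a named list to fn
obj.fun = makeSingleObjectiveFunction(
  name = "dummy_mixed",
  fn = function(x) x$cost * ifelse(x$kernel == "radial", 1, 2),
  par.set = par.set,
  has.simple.signature = FALSE
)

design = generateDesign(n = 10L, par.set = par.set)
```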
# How to Cite
Please cite our [arXiv paper](https://arxiv.org/abs/1703.03373) (preprint).
You can get citation info via `citation("mlrMBO")` or copy the following BibTeX entry:
```bibtex
@article{mlrMBO,
title = {{{mlrMBO}}: {{A Modular Framework}} for {{Model}}-{{Based Optimization}} of {{Expensive Black}}-{{Box Functions}}},
url = {http://arxiv.org/abs/1703.03373},
shorttitle = {{{mlrMBO}}},
archivePrefix = {arXiv},
eprinttype = {arxiv},
eprint = {1703.03373},
primaryClass = {stat},
author = {Bischl, Bernd and Richter, Jakob and Bossek, Jakob and Horn, Daniel and Thomas, Janek and Lang, Michel},
date = {2017-03-09},
}
```
Some parts of the package were created as part of other publications.
If you use these parts, please cite the relevant work appropriately:
* Multi-point proposals, including the new multi-objective infill criteria: [MOI-MBO: Multiobjective Infill for Parallel Model-Based Optimization](https://doi.org/10.1007/978-3-319-09584-4_17)
* Multi-objective optimization: [Model-Based Multi-objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark](https://doi.org/10.1007/978-3-319-15934-8_5)
* Multi-objective optimization with categorical variables using the random forest as a surrogate: [Multi-objective parameter configuration of machine learning algorithms using model-based optimization](https://doi.org/10.1109/SSCI.2016.7850221)