- Replace default lengthscale priors by inverse-gamma distributions.
- Add the following command line flags, which allow the user to override the
  prior parameters:
  - `--gp-signal-prior-scale` for the scale of the signal prior.
  - `--gp-noise-prior-scale` for the scale of the noise prior.
  - `--gp-lengthscale-prior-lb` for the lower bound of the lengthscale prior.
  - `--gp-lengthscale-prior-ub` for the upper bound of the lengthscale prior.
- Add `--fast-resume` switch to the tuner, which allows instant resume functionality from disk (new default).
- Fix the match parser producing incorrect results when concurrency > 1 is used for playing matches.
- The distributed tuning framework is no longer deprecated.
- Add `--run-only-once` flag to the distributed tuning client. If True, it will terminate after completing one job, or immediately if no job is found.
- Add `--skip-benchmark` flag to the distributed tuning client. If True, it will skip the calibration of the time control, which involves running a benchmark for both engines.
- The tuning server of the distributed tuning framework will now also save the optimizer object.
- Tuning server now also uses the updated pentanomial model including noise estimation.
- `warp_inputs` can now be passed via the database to the tuning server.
- Fix the server for distributed tuning not sorting the data by job id, which caused the model to be fit with randomly permuted scores.
- Fix the server for distributed tuning trying to compute the current optimum before a model has been fit.
- Print user-facing scores using the more common Elo scale, instead of the negative downscaled values used internally.
- Internal constants set to improved values.
- Always send `uci` first before sending `setoption` commands to the engine.
- Fix incorrectly outputting the variance instead of the standard deviation for the estimated error around the score estimate.
- Fix a bug where the model was not informed about the estimated noise variance of the current match.
- Revert default acquisition function back to `"mes"`.
- Remove noise from the calculation of the confidence interval of the optimum value.
- Log cutechess-cli output continuously.
- Add `"debug_mode"` parameter, which will pass `-debug` to cutechess-cli.
- Add support for pondering using `engineX_ponder`.
- Fix boolean UCI options not being passed correctly.
- Add support for input warping, allowing the tuner to automatically transform the data into a suitable form (internally).
- Improve default parameters to be slightly more robust for most use cases and be more in line with what a user might expect.
- Add confidence interval and standard error of the score of the estimated global optimum to the logging output.
- Add support for time-per-move matches (option `st` in cutechess-cli).
- Add support for the `timemargin` parameter.
- Fix debug output being spammed by other libraries.
- Fix plots being of varying sizes dependent on their labels and ticks. This should make it easier to animate them.
- Add support for the new cutechess-cli 1.2.0 output format.
- Add support for confidence intervals of the optimum. By default a table of highest density intervals will be reported alongside the current optimum.
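One way a highest density interval can be computed from posterior samples is sketched below. This is only an illustration of the concept, not the package's actual implementation, and `highest_density_interval` is a hypothetical helper name:

```python
def highest_density_interval(samples, prob=0.95):
    """Return the narrowest interval containing `prob` mass of the samples."""
    xs = sorted(samples)
    n = len(xs)
    k = max(1, int(round(prob * n)))  # number of points the interval must cover
    # Slide a window covering k consecutive sorted points and keep the narrowest.
    widths = [(xs[i + k - 1] - xs[i], i) for i in range(n - k + 1)]
    _, i = min(widths)
    return xs[i], xs[i + k - 1]
```

For skewed posteriors this yields a tighter interval than an equal-tailed one, since it hugs the mode instead of cutting equal probability from both tails.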
- Add support for parameter range reduction. Since this potentially requires discarding some of the data points, it will also save a backup.
- Change score calculation to be in logit/Elo space. This fixes problems with scores being compressed for very unevenly matched engines.
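The logit/Elo transformation behind this change can be sketched as follows (a minimal illustration under the standard Elo model, not the tuner's actual code; `score_to_elo` is a hypothetical name):

```python
import math

def score_to_elo(score):
    """Convert an expected score in (0, 1) to an Elo difference.

    This is the logit of the score rescaled to the Elo convention,
    inverting score = 1 / (1 + 10 ** (-elo / 400)).
    """
    return 400.0 * math.log10(score / (1.0 - score))
```

Working in this space keeps differences between very unevenly matched engines from being compressed near the 0/1 score boundaries.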
- Add new standalone tuning script. With this it is possible to tune parameters of an engine without having to set up the distributed tuning framework. Usage instructions and example configurations are included.
- Support for round-flat prior distributions
- Fix parsing of priors and benchmark results
- Completely new database implemented in SQLAlchemy.
- Pentanomial scoring of matches, accounting for the paired openings and different draw rates of time controls.
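The pentanomial model works on opening pairs rather than single games: each pair's combined score falls into one of five categories. A minimal sketch of that bookkeeping (illustrative only; `pentanomial_counts` is a hypothetical name, and scores are taken from the first engine's perspective):

```python
from collections import Counter

def pentanomial_counts(pair_results):
    """Count paired-game outcomes into the five pentanomial categories.

    Each element of `pair_results` is a (score_game1, score_game2) tuple
    for the two games of one opening pair, with per-game scores in
    {0, 0.5, 1}.  The pair total in {0, 0.5, 1, 1.5, 2} indexes the five
    categories: LL, LD, DD or WL, WD, WW.
    """
    return Counter(s1 + s2 for s1, s2 in pair_results)
```

Modeling the pair totals instead of individual games accounts for the correlation introduced by playing both colors of the same opening.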
- Allow timed termination of the client by the option `--terminate-after`.
- Support for non-increment time controls.
- Allow graceful termination of tuning-client using ctrl-c.
- Implement probabilistic load balancing support in the clients.
- Simplified tuning client tutorial and logging.
- First release on PyPI.