From 87b7a85120e2e414e1d72ae97591f765dba34aa6 Mon Sep 17 00:00:00 2001 From: Jaeyeong Yang Date: Tue, 30 Oct 2018 19:40:53 +0900 Subject: [PATCH] Update docs for 0.6.3 --- docs/authors.html | 22 ++++- docs/index.html | 98 ++++++++++++++------ docs/pkgdown.yml | 2 +- docs/reference/HDIofMCMC.html | 2 +- docs/reference/bandit2arm_delta.html | 8 +- docs/reference/bandit4arm_4par.html | 8 +- docs/reference/bandit4arm_lapse.html | 8 +- docs/reference/bart_par4.html | 12 +-- docs/reference/choiceRT_ddm.html | 12 +-- docs/reference/choiceRT_ddm_single.html | 10 +- docs/reference/choiceRT_lba.html | 12 +-- docs/reference/choiceRT_lba_single.html | 8 +- docs/reference/cra_exp.html | 2 +- docs/reference/cra_linear.html | 12 +-- docs/reference/dd_cs.html | 2 +- docs/reference/dd_cs_single.html | 12 +-- docs/reference/dd_exp.html | 2 +- docs/reference/dd_hyperbolic.html | 12 +-- docs/reference/dd_hyperbolic_single.html | 8 +- docs/reference/estimate_mode.html | 2 +- docs/reference/extract_ic.html | 2 +- docs/reference/gng_m1.html | 2 +- docs/reference/gng_m2.html | 2 +- docs/reference/gng_m3.html | 2 +- docs/reference/gng_m4.html | 2 +- docs/reference/hBayesDM-package.html | 2 +- docs/reference/igt_orl.html | 46 ++++----- docs/reference/igt_pvl_decay.html | 12 +-- docs/reference/igt_pvl_delta.html | 12 +-- docs/reference/igt_vpp.html | 8 +- docs/reference/index.html | 2 +- docs/reference/multiplot.html | 2 +- docs/reference/peer_ocu.html | 2 +- docs/reference/plot.hBayesDM.html | 2 +- docs/reference/plotDist.html | 7 +- docs/reference/plotHDI.html | 7 +- docs/reference/plotInd.html | 2 +- docs/reference/printFit.html | 2 +- docs/reference/prl_ewa.html | 2 +- docs/reference/prl_fictitious.html | 8 +- docs/reference/prl_fictitious_multipleB.html | 12 +-- docs/reference/prl_fictitious_rp.html | 8 +- docs/reference/prl_fictitious_rp_woa.html | 8 +- docs/reference/prl_fictitious_woa.html | 8 +- docs/reference/prl_rp.html | 2 +- docs/reference/prl_rp_multipleB.html | 8 +- docs/reference/pst_gainloss_Q.html | 8 +- docs/reference/ra_noLA.html | 2 +- docs/reference/ra_noRA.html | 2 +- docs/reference/ra_prospect.html | 12 +-- docs/reference/rdt_happiness.html | 12 +-- docs/reference/rhat.html | 2 +- docs/reference/ts_par4.html | 2 +- docs/reference/ts_par6.html | 2 +- docs/reference/ts_par7.html | 2 +- docs/reference/ug_bayes.html | 2 +- docs/reference/ug_delta.html | 2 +- docs/reference/wcs_sql.html | 2 +- 58 files changed, 268 insertions(+), 208 deletions(-) diff --git a/docs/authors.html b/docs/authors.html index 0ceadbaf..59c87e78 100644 --- a/docs/authors.html +++ b/docs/authors.html @@ -58,7 +58,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -92,9 +92,9 @@

Citation

-

-Ahn W, Haines N, Zhang L (2017).
+Ahn W, Haines N and Zhang L (2017).
“Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package.”
-Computational Psychiatry, 1, 24–57.
+Computational Psychiatry, 1, pp. 24–57.
https://doi.org/10.1162/CPSY_a_00002.

@Article{hBayesDM,
@@ -123,6 +123,22 @@ 

Authors

Lei Zhang. Author.

+
  • +

    Harhim Park. Contributor. +

    +
  • +
  • +

    Jaeyeong Yang. Contributor. +

    +
  • +
  • +

    Dayeong Min. Contributor. +

    +
  • +
  • +

    Jethro Lee. Contributor. +

    +
  • diff --git a/docs/index.html b/docs/index.html index 093cec40..5e74dd68 100644 --- a/docs/index.html +++ b/docs/index.html @@ -10,7 +10,8 @@ - @@ -20,7 +21,7 @@ -
    +
    -
    - - - - - -
    +
    +
    -

    The hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks) is a user-friendly R package that offers hierarchical Bayesian analysis of various computational models on an array of decision-making tasks. The hBayesDM package uses Stan for Bayesian inference.

    -
    +

    hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks) is a user-friendly R package that offers hierarchical Bayesian analysis of various computational models on an array of decision-making tasks. hBayesDM uses Stan for Bayesian inference.

    +
    +

    +Getting Started

    +
    +

    +Prerequisite

    +

Before you install hBayesDM, RStan should be properly installed on your system. For detailed instructions, please go to this link: https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started

    +

For the moment, RStan requires you to specify that the C++14 standard should be used to compile Stan programs (see this link for details):

    +
    Sys.setenv(USE_CXX14 = 1)
    +library("rstan") # observe startup messages
    +
    +
    +

    +Installation

    +

    hBayesDM can be installed from CRAN by running the following command in R:

    +
    install.packages('hBayesDM')  # Install hBayesDM from CRAN
    +

We strongly recommend that users install hBayesDM from GitHub: all models in the GitHub version are precompiled, which saves the time otherwise spent compiling Stan models. Note, however, that the precompiled models may cause memory allocation issues on a Windows machine.

    +

    You can install the latest version from GitHub with:

    +
    # `devtools` is required to install hBayesDM from GitHub
    +if (!require(devtools)) install.packages("devtools")
    +devtools::install_github("CCS-Lab/hBayesDM")
    +
    + +
    +

    -Installation

    -

    (For Windows users) First download and install Rtools from this link: http://cran.r-project.org/bin/windows/Rtools/. For detailed instructions, please go to this link: https://github.com/stan-dev/rstan/wiki/Installing-RStan-on-Windows.

    -

    You need to install the hBayesDM from CRAN. The GitHub version precompiles all Stan models, which makes it faster to start MCMC sampling. But it may cause some memory allocation issues on a Windows machine.

    -

    (For Mac/Linux users) If you are a Mac user, make sure Xcode is installed. We strongly recommend users install this GitHub version. The GitHub version in the master repository is identical to the CRAN version, except that all models are precompiled in the GitHub version, which saves time for compiling Stan models.

    -

    You can install the latest version from github with:

    - -

    Please go to hBayesDM Tutorial for more information about the package.

    -

    If you encounter a problem or a bug, please use our mailing list: https://groups.google.com/forum/#!forum/hbayesdm-users, or you can directly create an issue on GitHub.

    +Citation +

If you used hBayesDM or any of its code in your research, please cite this paper:

    +
    +

Ahn, W.-Y., Haines, N., & Zhang, L. (2017). Revealing neurocomputational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Computational Psychiatry, 1, 24-57. https://doi.org/10.1162/CPSY_a_00002.

    +
    +

    or for BibTeX:

    +
    @article{hBayesDM,
    +  title = {Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the {hBayesDM} Package},
    +  author = {Ahn, Woo-Young and Haines, Nathaniel and Zhang, Lei},
    +  journal = {Computational Psychiatry},
    +  year = {2017},
    +  volume = {1},
    +  pages = {24--57},
    +  publisher = {MIT Press},
    +  url = {https://doi.org/10.1162/CPSY_a_00002},
    +}
    - diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml index b80bf0b3..f279cc12 100644 --- a/docs/pkgdown.yml +++ b/docs/pkgdown.yml @@ -1,4 +1,4 @@ -pandoc: 2.2.3.2 +pandoc: 1.19.2.1 pkgdown: 1.1.0 pkgdown_sha: ~ articles: [] diff --git a/docs/reference/HDIofMCMC.html b/docs/reference/HDIofMCMC.html index fa847516..2db111e3 100644 --- a/docs/reference/HDIofMCMC.html +++ b/docs/reference/HDIofMCMC.html @@ -62,7 +62,7 @@ hBayesDM - 0.6.0 + 0.6.3
    diff --git a/docs/reference/bandit2arm_delta.html b/docs/reference/bandit2arm_delta.html index b2b7f1e4..cc9edfc8 100644 --- a/docs/reference/bandit2arm_delta.html +++ b/docs/reference/bandit2arm_delta.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3
    @@ -109,9 +109,9 @@

    Two-Arm Bandit Task

    bandit2arm_delta(data = "choose", niter = 3000, nwarmup = 1000,
       nchain = 4, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars = "mean", saveDir = NULL, modelRegressor = FALSE, + vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95, + stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/bandit4arm_4par.html b/docs/reference/bandit4arm_4par.html index 5a49ee25..eedfa860 100644 --- a/docs/reference/bandit4arm_4par.html +++ b/docs/reference/bandit4arm_4par.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,9 +109,9 @@

    4-armed bandit task

    bandit4arm_4par(data = "choose", niter = 4000, nwarmup = 2000,
       nchain = 4, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/bandit4arm_lapse.html b/docs/reference/bandit4arm_lapse.html index d61a2d77..51f9e975 100644 --- a/docs/reference/bandit4arm_lapse.html +++ b/docs/reference/bandit4arm_lapse.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,9 +109,9 @@

    4-armed bandit task

    bandit4arm_lapse(data = "choose", niter = 4000, nwarmup = 2000,
       nchain = 4, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/bart_par4.html b/docs/reference/bart_par4.html index f451ff6c..34c21aa3 100644 --- a/docs/reference/bart_par4.html +++ b/docs/reference/bart_par4.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,11 +109,11 @@

    Balloon Analogue Risk Task (Ravenzwaaij et al., 2011, Journal of Mathematica -
    bart_par4(data = "choose", niter = 4000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "fixed", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    bart_par4(data = "choose", niter = 4000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "fixed",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/choiceRT_ddm.html b/docs/reference/choiceRT_ddm.html index 0fb114a1..eb6c4d72 100644 --- a/docs/reference/choiceRT_ddm.html +++ b/docs/reference/choiceRT_ddm.html @@ -66,7 +66,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -113,11 +113,11 @@

    Choice Reaction Time task, drift diffusion modeling

    -
    choiceRT_ddm(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "fixed", indPars = "mean",
    -  saveDir = NULL, RTbound = 0.1, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    choiceRT_ddm(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "fixed",
    +  indPars = "mean", saveDir = NULL, RTbound = 0.1,
    +  modelRegressor = FALSE, vb = FALSE, inc_postpred = FALSE,
    +  adapt_delta = 0.95, stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/choiceRT_ddm_single.html b/docs/reference/choiceRT_ddm_single.html index 60c6f1b5..726ecb16 100644 --- a/docs/reference/choiceRT_ddm_single.html +++ b/docs/reference/choiceRT_ddm_single.html @@ -66,7 +66,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -114,10 +114,10 @@

    Choice Reaction Time task, drift diffusion modeling

    choiceRT_ddm_single(data = "choose", niter = 3000, nwarmup = 1000,
    -  nchain = 4, ncore = 1, nthin = 1, inits = "fixed", indPars = "mean",
    -  saveDir = NULL, RTbound = 0.1, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
+  nchain = 4, ncore = 1, nthin = 1, inits = "fixed",
+  indPars = "mean", saveDir = NULL, RTbound = 0.1,
+  modelRegressor = FALSE, vb = FALSE, inc_postpred = FALSE,
+  adapt_delta = 0.95, stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/choiceRT_lba.html b/docs/reference/choiceRT_lba.html index fa28bfcf..ac0427e0 100644 --- a/docs/reference/choiceRT_lba.html +++ b/docs/reference/choiceRT_lba.html @@ -67,7 +67,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -115,11 +115,11 @@

    Choice Reaction Time task, linear ballistic accumulator modeling

    -
    choiceRT_lba(data = "choose", niter = 3000, nwarmup = 1000, nchain = 2,
    -  ncore = 2, nthin = 1, inits = "random", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    choiceRT_lba(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 2, ncore = 2, nthin = 1, inits = "random",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/choiceRT_lba_single.html b/docs/reference/choiceRT_lba_single.html index f722b3ab..60be64dc 100644 --- a/docs/reference/choiceRT_lba_single.html +++ b/docs/reference/choiceRT_lba_single.html @@ -67,7 +67,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -117,9 +117,9 @@

    Choice Reaction Time task, linear ballistic accumulator modeling

    choiceRT_lba_single(data = "choose", niter = 3000, nwarmup = 1000,
       nchain = 2, ncore = 2, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/cra_exp.html b/docs/reference/cra_exp.html index 51da9b1d..94d8fe46 100644 --- a/docs/reference/cra_exp.html +++ b/docs/reference/cra_exp.html @@ -68,7 +68,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/cra_linear.html b/docs/reference/cra_linear.html index 31a71382..7c9b14e7 100644 --- a/docs/reference/cra_linear.html +++ b/docs/reference/cra_linear.html @@ -68,7 +68,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -117,11 +117,11 @@

    Choice under Risk and Ambiguity Task

    -
    cra_linear(data = "choose", niter = 2000, nwarmup = 1000, nchain = 1,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    cra_linear(data = "choose", niter = 2000, nwarmup = 1000,
    +  nchain = 1, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/dd_cs.html b/docs/reference/dd_cs.html index 95864982..ec5d878f 100644 --- a/docs/reference/dd_cs.html +++ b/docs/reference/dd_cs.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/dd_cs_single.html b/docs/reference/dd_cs_single.html index 07d27f52..ec9a58c8 100644 --- a/docs/reference/dd_cs_single.html +++ b/docs/reference/dd_cs_single.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -107,11 +107,11 @@

    Delay Discounting Task (Ebert & Prelec, 2007)

    -
    dd_cs_single(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "fixed", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    dd_cs_single(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "fixed",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/dd_exp.html b/docs/reference/dd_exp.html index 9435d176..ee997a9f 100644 --- a/docs/reference/dd_exp.html +++ b/docs/reference/dd_exp.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/dd_hyperbolic.html b/docs/reference/dd_hyperbolic.html index 01f45a95..154feec3 100644 --- a/docs/reference/dd_hyperbolic.html +++ b/docs/reference/dd_hyperbolic.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -107,11 +107,11 @@

    Delay Discounting Task

    -
    dd_hyperbolic(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    dd_hyperbolic(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/dd_hyperbolic_single.html b/docs/reference/dd_hyperbolic_single.html index 2dbf4c88..a9e36731 100644 --- a/docs/reference/dd_hyperbolic_single.html +++ b/docs/reference/dd_hyperbolic_single.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,9 +109,9 @@

    Delay Discounting Task (Ebert & Prelec, 2007)

    dd_hyperbolic_single(data = "choose", niter = 3000, nwarmup = 1000,
       nchain = 4, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/estimate_mode.html b/docs/reference/estimate_mode.html index 2661e7ef..b6de149a 100644 --- a/docs/reference/estimate_mode.html +++ b/docs/reference/estimate_mode.html @@ -62,7 +62,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/extract_ic.html b/docs/reference/extract_ic.html index 610bc611..dd8ed489 100644 --- a/docs/reference/extract_ic.html +++ b/docs/reference/extract_ic.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/gng_m1.html b/docs/reference/gng_m1.html index 2a717007..56deae4e 100644 --- a/docs/reference/gng_m1.html +++ b/docs/reference/gng_m1.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/gng_m2.html b/docs/reference/gng_m2.html index 4a8ad22d..0d62a6de 100644 --- a/docs/reference/gng_m2.html +++ b/docs/reference/gng_m2.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/gng_m3.html b/docs/reference/gng_m3.html index 45d9f84b..6a7da917 100644 --- a/docs/reference/gng_m3.html +++ b/docs/reference/gng_m3.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/gng_m4.html b/docs/reference/gng_m4.html index 98c6ac9f..cd5498eb 100644 --- a/docs/reference/gng_m4.html +++ b/docs/reference/gng_m4.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/hBayesDM-package.html b/docs/reference/hBayesDM-package.html index cf05d743..2627bbde 100644 --- a/docs/reference/hBayesDM-package.html +++ b/docs/reference/hBayesDM-package.html @@ -101,7 +101,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/igt_orl.html b/docs/reference/igt_orl.html index 8dc580e1..ef7da0a8 100644 --- a/docs/reference/igt_orl.html +++ b/docs/reference/igt_orl.html @@ -33,6 +33,7 @@ @@ -63,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -102,6 +103,7 @@

    Iowa Gambling Task

    Hierarchical Bayesian Modeling of the Iowa Gambling Task using the following parameters: "Arew" (reward learning rate), "Apun" (punishment learning rate), "K" (perseverance decay), "betaF" (outcome frequency weight), and "betaP" (perseverance weight).

    +

    Contributor: Nate Haines

MODEL: Outcome-Representation Learning Model (Haines, Vassileva, & Ahn (in press), Cognitive Science)

    @@ -109,9 +111,9 @@

    Iowa Gambling Task

    igt_orl(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
       ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  payscale = 100, saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
+  payscale = 100, saveDir = NULL, modelRegressor = FALSE,
+  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
+  stepsize = 1, max_treedepth = 10)
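For orientation, a minimal workflow for this model might look like the sketch below (the data path is a placeholder; arguments follow the usage above, and allIndPars is described under Value):
library(hBayesDM)
# Hypothetical IGT data file with columns subjID, deck, gain, and loss (see Details)
output <- igt_orl(data = "path/to/igt_data.txt", niter = 3000, nwarmup = 1000, nchain = 4, ncore = 4)
output$allIndPars  # summarized per-subject Arew, Apun, K, betaF, and betaP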

    Arguments

    @@ -186,7 +188,7 @@

    Value

    modelData A class "hBayesDM" object with the following components:

    model

    Character string with the name of the model ("igt_orl").

    -
    allIndPars

    "data.frame" containing the summarized parameter +

    allIndPars

    "data.frame" containing the summarized parameter values (as specified by "indPars") for each subject.

    parVals

    A "list" where each element contains posterior samples over different model parameters.

    @@ -198,11 +200,11 @@

    Value

    Details

    This section describes some of the function arguments in greater detail.

    -

-data should be assigned a character value specifying the full path and name of the file, including the file extension
-(e.g. ".txt"), that contains the behavioral data of all subjects of interest for the current analysis.
-The file should be a text (.txt) file whose rows represent trial-by-trial observations and columns
-represent variables. For the Iowa Gambling Task, there should be four columns of data with the labels
-"subjID", "deck", "gain", and "loss". It is not necessary for the columns to be in this particular order,
+data should be assigned a character value specifying the full path and name of the file, including the file extension
+(e.g. ".txt"), that contains the behavioral data of all subjects of interest for the current analysis.
+The file should be a text (.txt) file whose rows represent trial-by-trial observations and columns
+represent variables. For the Iowa Gambling Task, there should be four columns of data with the labels
+"subjID", "deck", "gain", and "loss". It is not necessary for the columns to be in this particular order,
however it is necessary that they be labelled correctly and contain the information below:

    "subjID"

    A unique identifier for each subject within data-set to be analyzed.

    "deck"

    A nominal integer representing which deck was chosen within the given trial (e.g. A, B, C, or D == 1, 2, 3, or 4 in the IGT).

    @@ -211,28 +213,28 @@

    Details

    *Note: The data.txt file may contain other columns of data (e.g. "Reaction_Time", "trial_number", etc.), but only the data with the column names listed above will be used for analysis/modeling. As long as the columns above are present and labelled correctly, there is no need to remove other miscellaneous data columns.
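As a quick sanity check before fitting, such a file might be inspected in R as sketched below (the file name is hypothetical and a tab delimiter is assumed):
igt_dat <- read.table("igt_data.txt", header = TRUE, sep = "\t")
stopifnot(all(c("subjID", "deck", "gain", "loss") %in% colnames(igt_dat)))  # required columns
head(igt_dat)  # rows are trial-by-trial observations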

    -

-nwarmup is a numerical value that specifies how many MCMC samples should not be stored upon the
-beginning of each chain. For those familiar with Bayesian methods, this value is equivalent to a burn-in sample.
-Due to the nature of MCMC sampling, initial values (where the sampling chain begins) can have a heavy influence
-on the generated posterior distributions. The nwarmup argument can be set to a high number in order to curb the
+nwarmup is a numerical value that specifies how many MCMC samples should not be stored upon the
+beginning of each chain. For those familiar with Bayesian methods, this value is equivalent to a burn-in sample.
+Due to the nature of MCMC sampling, initial values (where the sampling chain begins) can have a heavy influence
+on the generated posterior distributions. The nwarmup argument can be set to a high number in order to curb the
effects that initial values have on the resulting posteriors.

    nchain is a numerical value that specifies how many chains (i.e. independent sampling sequences) should be -used to draw samples from the posterior distribution. Since the posteriors are generated from a sampling +used to draw samples from the posterior distribution. Since the posteriors are generated from a sampling process, it is good practice to run multiple chains to ensure that a representative posterior is attained. When sampling is completed, the multiple chains may be checked for convergence with the plot(myModel, type = "trace") command. The chains should resemble a "furry caterpillar".
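For example, the convergence checks described above could be run on a fitted object as sketched here (the object name output is a placeholder):
plot(output, type = "trace")  # traceplots of the hyper-parameters; chains should overlap
rhat(output)                  # Rhat values close to 1 also indicate convergence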

    -

-nthin is a numerical value that specifies the "skipping" behavior of the MCMC samples being chosen
-to generate the posterior distributions. By default, nthin is equal to 1, hence every sample is used to
+nthin is a numerical value that specifies the "skipping" behavior of the MCMC samples being chosen
+to generate the posterior distributions. By default, nthin is equal to 1, hence every sample is used to
generate the posterior.

    -

-Control Parameters: adapt_delta, stepsize, and max_treedepth are advanced options that give the user more control
+Control Parameters: adapt_delta, stepsize, and max_treedepth are advanced options that give the user more control
over Stan's MCMC sampler. The Stan creators recommend that only advanced users change the default values, as alterations
-can profoundly change the sampler's behavior. Refer to Hoffman & Gelman (2014, Journal of Machine Learning Research) for
-more information on the functioning of the sampler control parameters. One can also refer to section 58.2 of the
+can profoundly change the sampler's behavior. Refer to Hoffman & Gelman (2014, Journal of Machine Learning Research) for
+more information on the functioning of the sampler control parameters. One can also refer to section 58.2 of the
Stan User's Manual for a less technical description of these arguments.
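As an illustration (the values are arbitrary), these control parameters are simply passed through the model function:
# Hypothetical: stricter sampler settings, e.g. to reduce divergent transitions
output <- igt_orl(data = "path/to/igt_data.txt", adapt_delta = 0.99, stepsize = 0.5, max_treedepth = 12)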

    References

    -

-Hoffman, M. D., & Gelman, A. (2014). The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. The
+Hoffman, M. D., & Gelman, A. (2014). The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. The
Journal of Machine Learning Research, 15(1), 1593-1623.

    Haines, N., Vassileva, J., Ahn, W.-Y. (in press). The Outcome-Representation Learning model: a novel reinforcement learning model of the Iowa Gambling Task. Cognitive Science.

    @@ -256,7 +258,7 @@

    Examp # Plot the posterior distributions of the hyper-parameters (distributions should be unimodal) plot(output) -# Show the WAIC and LOOIC model fit estimates +# Show the WAIC and LOOIC model fit estimates printFit(output) # } diff --git a/docs/reference/igt_pvl_decay.html b/docs/reference/igt_pvl_decay.html index 3bc60b2a..1d218348 100644 --- a/docs/reference/igt_pvl_decay.html +++ b/docs/reference/igt_pvl_decay.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -107,11 +107,11 @@

    Iowa Gambling Task

    -
    igt_pvl_decay(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  payscale = 100, saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    igt_pvl_decay(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", payscale = 100, saveDir = NULL,
    +  modelRegressor = FALSE, vb = FALSE, inc_postpred = FALSE,
    +  adapt_delta = 0.95, stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/igt_pvl_delta.html b/docs/reference/igt_pvl_delta.html index 279141ae..35353427 100644 --- a/docs/reference/igt_pvl_delta.html +++ b/docs/reference/igt_pvl_delta.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -107,11 +107,11 @@

    Iowa Gambling Task (Ahn et al., 2008)

    -
    igt_pvl_delta(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  payscale = 100, saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    igt_pvl_delta(data = "choose", niter = 3000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", payscale = 100, saveDir = NULL,
    +  modelRegressor = FALSE, vb = FALSE, inc_postpred = FALSE,
    +  adapt_delta = 0.95, stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/igt_vpp.html b/docs/reference/igt_vpp.html index 9df21f9c..fa1d3daf 100644 --- a/docs/reference/igt_vpp.html +++ b/docs/reference/igt_vpp.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,9 +109,9 @@

    Iowa Gambling Task

    igt_vpp(data = "choose", niter = 3000, nwarmup = 1000, nchain = 4,
       ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  payscale = 100, saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
+  payscale = 100, saveDir = NULL, modelRegressor = FALSE,
+  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
+  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/index.html b/docs/reference/index.html index b090dd46..ac92971b 100644 --- a/docs/reference/index.html +++ b/docs/reference/index.html @@ -58,7 +58,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/multiplot.html b/docs/reference/multiplot.html index 01ceef07..87187551 100644 --- a/docs/reference/multiplot.html +++ b/docs/reference/multiplot.html @@ -62,7 +62,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/peer_ocu.html b/docs/reference/peer_ocu.html index 759b6d4a..4e005362 100644 --- a/docs/reference/peer_ocu.html +++ b/docs/reference/peer_ocu.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/plot.hBayesDM.html b/docs/reference/plot.hBayesDM.html index 903ab09b..90e9b3b1 100644 --- a/docs/reference/plot.hBayesDM.html +++ b/docs/reference/plot.hBayesDM.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/plotDist.html b/docs/reference/plotDist.html index 705085a3..a78278a0 100644 --- a/docs/reference/plotDist.html +++ b/docs/reference/plotDist.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -103,8 +103,9 @@

    Plots the histogram of MCMC samples.

    -
    plotDist(sample = NULL, Title = NULL, xLab = "Value", yLab = "Density",
    -  xLim = NULL, fontSize = NULL, binSize = NULL, ...)
    +
    plotDist(sample = NULL, Title = NULL, xLab = "Value",
    +  yLab = "Density", xLim = NULL, fontSize = NULL, binSize = NULL,
    +  ...)
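A hedged usage sketch (the fitted object output and the element name mu_Arew are hypothetical; parVals holds posterior samples):
plotDist(sample = output$parVals$mu_Arew, Title = "mu_Arew", xLab = "Value", binSize = 30)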

    Arguments

    diff --git a/docs/reference/plotHDI.html b/docs/reference/plotHDI.html index 5ec758ac..7835edb2 100644 --- a/docs/reference/plotHDI.html +++ b/docs/reference/plotHDI.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -103,8 +103,9 @@

    Plots highest density interval (HDI) from (MCMC) samples and prints HDI in t -
    plotHDI(sample = NULL, credMass = 0.95, Title = NULL, xLab = "Value",
    -  yLab = "Density", fontSize = NULL, binSize = 30, ...)
    +
    plotHDI(sample = NULL, credMass = 0.95, Title = NULL,
    +  xLab = "Value", yLab = "Density", fontSize = NULL, binSize = 30,
    +  ...)
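A similar hedged sketch for plotHDI, using the same hypothetical samples:
plotHDI(sample = output$parVals$mu_Arew, credMass = 0.95, Title = "mu_Arew")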

    Arguments

    diff --git a/docs/reference/plotInd.html b/docs/reference/plotInd.html index 6c3701c5..8c00b62d 100644 --- a/docs/reference/plotInd.html +++ b/docs/reference/plotInd.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/printFit.html b/docs/reference/printFit.html index 4116add9..229bb8d3 100644 --- a/docs/reference/printFit.html +++ b/docs/reference/printFit.html @@ -61,7 +61,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/prl_ewa.html b/docs/reference/prl_ewa.html index 5368f73c..094f2c64 100644 --- a/docs/reference/prl_ewa.html +++ b/docs/reference/prl_ewa.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/prl_fictitious.html b/docs/reference/prl_fictitious.html index 1e55ec86..56e0d5d8 100644 --- a/docs/reference/prl_fictitious.html +++ b/docs/reference/prl_fictitious.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -111,9 +111,9 @@

    Probabilistic Reversal Learning Task

    prl_fictitious(data = "choice", niter = 3000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/prl_fictitious_multipleB.html b/docs/reference/prl_fictitious_multipleB.html index 6fcc5d03..bd439f60 100644 --- a/docs/reference/prl_fictitious_multipleB.html +++ b/docs/reference/prl_fictitious_multipleB.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,11 +109,11 @@

    Probabilistic Reversal Learning Task (Glascher et al, 2008), multiple blocks -
    prl_fictitious_multipleB(data = "choice", niter = 3000, nwarmup = 1000,
    -  nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    prl_fictitious_multipleB(data = "choice", niter = 3000,
    +  nwarmup = 1000, nchain = 1, ncore = 1, nthin = 1,
    +  inits = "random", indPars = "mean", saveDir = NULL,
    +  modelRegressor = FALSE, vb = FALSE, inc_postpred = FALSE,
    +  adapt_delta = 0.95, stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/prl_fictitious_rp.html b/docs/reference/prl_fictitious_rp.html index c31887f5..825b0626 100644 --- a/docs/reference/prl_fictitious_rp.html +++ b/docs/reference/prl_fictitious_rp.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -111,9 +111,9 @@

    Probabilistic Reversal Learning Task

    prl_fictitious_rp(data = "choice", niter = 3000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/prl_fictitious_rp_woa.html b/docs/reference/prl_fictitious_rp_woa.html index 3af403e4..a69400cb 100644 --- a/docs/reference/prl_fictitious_rp_woa.html +++ b/docs/reference/prl_fictitious_rp_woa.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -111,9 +111,9 @@

    Probabilistic Reversal Learning Task

    prl_fictitious_rp_woa(data = "choice", niter = 3000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/prl_fictitious_woa.html b/docs/reference/prl_fictitious_woa.html index 5c7e371a..6b90a49d 100644 --- a/docs/reference/prl_fictitious_woa.html +++ b/docs/reference/prl_fictitious_woa.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -111,9 +111,9 @@

    Probabilistic Reversal Learning Task

    prl_fictitious_woa(data = "choice", niter = 3000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/prl_rp.html b/docs/reference/prl_rp.html index 784cd3ce..6bb95cbe 100644 --- a/docs/reference/prl_rp.html +++ b/docs/reference/prl_rp.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/prl_rp_multipleB.html b/docs/reference/prl_rp_multipleB.html index 58709d56..d93af80d 100644 --- a/docs/reference/prl_rp_multipleB.html +++ b/docs/reference/prl_rp_multipleB.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -111,9 +111,9 @@

    Probabilistic Reversal Learning Task, multiple blocks per subject

    prl_rp_multipleB(data = "choice", niter = 3000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/pst_gainloss_Q.html b/docs/reference/pst_gainloss_Q.html index d25fa3fd..3fb178e6 100644 --- a/docs/reference/pst_gainloss_Q.html +++ b/docs/reference/pst_gainloss_Q.html @@ -68,7 +68,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -119,9 +119,9 @@

    Probabilistic Selection Task

    pst_gainloss_Q(data = "choose", niter = 2000, nwarmup = 1000,
       nchain = 1, ncore = 1, nthin = 1, inits = "random",
    -  indPars = "mean", saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    + indPars="mean", saveDir=NULL, modelRegressor=FALSE, + vb=FALSE, inc_postpred=FALSE, adapt_delta=0.95, + stepsize=1, max_treedepth=10)

    Arguments

    diff --git a/docs/reference/ra_noLA.html b/docs/reference/ra_noLA.html index 6e3f9d35..faf7af6d 100644 --- a/docs/reference/ra_noLA.html +++ b/docs/reference/ra_noLA.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ra_noRA.html b/docs/reference/ra_noRA.html index 0d5cb4bc..86798867 100644 --- a/docs/reference/ra_noRA.html +++ b/docs/reference/ra_noRA.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ra_prospect.html b/docs/reference/ra_prospect.html index faea3182..6f0f454d 100644 --- a/docs/reference/ra_prospect.html +++ b/docs/reference/ra_prospect.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -107,11 +107,11 @@

    Risk Aversion Task

    -
    ra_prospect(data = "choose", niter = 4000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    ra_prospect(data = "choose", niter = 4000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/rdt_happiness.html b/docs/reference/rdt_happiness.html index e6be2d94..28454ed9 100644 --- a/docs/reference/rdt_happiness.html +++ b/docs/reference/rdt_happiness.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 @@ -109,11 +109,11 @@

    Risky Decision Task

    -
    rdt_happiness(data = "choose", niter = 4000, nwarmup = 1000, nchain = 4,
    -  ncore = 1, nthin = 1, inits = "random", indPars = "mean",
    -  saveDir = NULL, modelRegressor = FALSE, vb = FALSE,
    -  inc_postpred = FALSE, adapt_delta = 0.95, stepsize = 1,
    -  max_treedepth = 10)
    +
    rdt_happiness(data = "choose", niter = 4000, nwarmup = 1000,
    +  nchain = 4, ncore = 1, nthin = 1, inits = "random",
    +  indPars = "mean", saveDir = NULL, modelRegressor = FALSE,
    +  vb = FALSE, inc_postpred = FALSE, adapt_delta = 0.95,
    +  stepsize = 1, max_treedepth = 10)

    Arguments

    diff --git a/docs/reference/rhat.html b/docs/reference/rhat.html index 01e1b78c..2eda6b38 100644 --- a/docs/reference/rhat.html +++ b/docs/reference/rhat.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ts_par4.html b/docs/reference/ts_par4.html index 6bf94a50..302db5c6 100644 --- a/docs/reference/ts_par4.html +++ b/docs/reference/ts_par4.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ts_par6.html b/docs/reference/ts_par6.html index 4eebebfd..ab97d234 100644 --- a/docs/reference/ts_par6.html +++ b/docs/reference/ts_par6.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ts_par7.html b/docs/reference/ts_par7.html index 73be5c7b..825c237b 100644 --- a/docs/reference/ts_par7.html +++ b/docs/reference/ts_par7.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ug_bayes.html b/docs/reference/ug_bayes.html index 41bf83ed..b40ec496 100644 --- a/docs/reference/ug_bayes.html +++ b/docs/reference/ug_bayes.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/ug_delta.html b/docs/reference/ug_delta.html index dee9c609..e568a448 100644 --- a/docs/reference/ug_delta.html +++ b/docs/reference/ug_delta.html @@ -63,7 +63,7 @@ hBayesDM - 0.6.0 + 0.6.3 diff --git a/docs/reference/wcs_sql.html b/docs/reference/wcs_sql.html index 47cfe85a..5e1f238b 100644 --- a/docs/reference/wcs_sql.html +++ b/docs/reference/wcs_sql.html @@ -64,7 +64,7 @@ hBayesDM - 0.6.0 + 0.6.3