diff --git a/404.html b/404.html index 7c6c362a..e6a0f709 100644 --- a/404.html +++ b/404.html @@ -18,7 +18,7 @@ - +
@@ -47,7 +47,7 @@
- +
@@ -136,16 +136,16 @@

Page not found (404)

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/CODE_OF_CONDUCT.html b/CODE_OF_CONDUCT.html index 2ae6ad9e..a60bc2bb 100644 --- a/CODE_OF_CONDUCT.html +++ b/CODE_OF_CONDUCT.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -175,15 +175,15 @@

Attribution -

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/CONTRIBUTING.html b/CONTRIBUTING.html index e648825d..9f19ff49 100644 --- a/CONTRIBUTING.html +++ b/CONTRIBUTING.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -138,15 +138,15 @@

Code of Conduct
-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/LICENSE-text.html b/LICENSE-text.html index 17c837fc..d83ba0ed 100644 --- a/LICENSE-text.html +++ b/LICENSE-text.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -783,15 +783,15 @@

License

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/SUPPORT.html b/SUPPORT.html index ca7e5e2e..25480fa5 100644 --- a/SUPPORT.html +++ b/SUPPORT.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -124,15 +124,15 @@

What happens next? -

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/clustered.html b/articles/clustered.html index 3f520d71..efe65c07 100644 --- a/articles/clustered.html +++ b/articles/clustered.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Clustered Data

- Source: vignettes/clustered.Rmd + Source: vignettes/clustered.Rmd
@@ -283,16 +282,16 @@

Setting cluster sizes

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/corelationmat.html b/articles/corelationmat.html index d11f5887..e6fd1034 100644 --- a/articles/corelationmat.html +++ b/articles/corelationmat.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Correlation Matrices

- Source: vignettes/corelationmat.Rmd + Source: vignettes/corelationmat.Rmd
@@ -276,9 +275,14 @@

Cluster-specific correlation matr parameters would be different depending on the cluster size).

In this example, I am generating matrices with a cs structure for four clusters with sizes 2, 3, 4, and 3, respectively, and -within-cluster correlations of \(\rho_1 = -0.6\), \(\rho_2 = 0.7\), \(\rho_3 = 0.5\), and \(\rho_4 = 0.4\). This reflects an overall -block correlation matrix that looks like this:

+within-cluster correlations of +ρ1=0.6\rho_1 = 0.6, +ρ2=0.7\rho_2 = 0.7, +ρ3=0.5\rho_3 = 0.5, +and +ρ4=0.4\rho_4 = 0.4. +This reflects an overall block correlation matrix that looks like +this:

\footnotesize{
R = \left( \begin{array}{cc|ccc|cccc|ccc}
1.0 & 0.6 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
0.6 & 1.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline
0.0 & 0.0 & 1.0 & 0.7 & 0.7 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
0.0 & 0.0 & 0.7 & 1.0 & 0.7 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
0.0 & 0.0 & 0.7 & 0.7 & 1.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.0 & 0.5 & 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 & 1.0 & 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 & 0.5 & 1.0 & 0.5 & 0.0 & 0.0 & 0.0 \\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 & 0.5 & 0.5 & 1.0 & 0.0 & 0.0 & 0.0 \\ \hline
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.0 & 0.4 & 0.4 \\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.4 & 1.0 & 0.4 \\
0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.4 & 0.4 & 1.0
\end{array} \right)
} @@ -407,13 +411,20 @@

Cluster-specific correlation matr over the matrix. In this case, individuals are correlated only with other individuals in the same cluster.

To generate this system of matrices, we just need to specify the -number of observations per cluster (\(nvars\)), the correlation coefficients for -each cluster (\(rho\), which in this -case is a vector), and the number of clusters. The \(nvars\) argument needs to match the numbers -of individuals in each cluster in the data set, and the lengths of \(nvars\) and \(rho\) must be the same as the number of -clusters (though either or both can be scalars, in which case the values -are shared across the clusters). The output is a list of correlation -matrices, one for each cluster.

+number of observations per cluster +(nvarsnvars), +the correlation coefficients for each cluster +(rhorho, +which in this case is a vector), and the number of clusters. The +nvarsnvars +argument needs to match the numbers of individuals in each cluster in +the data set, and the lengths of +nvarsnvars +and +rhorho +must be the same as the number of clusters (though either or both can be +scalars, in which case the values are shared across the clusters). The +output is a list of correlation matrices, one for each cluster.

 RC <- genCorMat(nvars = c(2, 3, 4, 3), rho = c(0.6, 0.7, 0.5, 0.4), 
   corstr = "cs", nclusters = 4)
@@ -444,7 +455,9 @@ 

Cluster-specific correlation matr ## [3,] 0.4 0.4 1.0

To create the correlated data, first we can generate a data set of individuals that are clustered in groups. The outcome will be Poisson -distributed, so we are specifying mean \(\lambda\) for each cluster:

+distributed, so we are specifying mean +λ\lambda +for each cluster:

 d1 <- defData(varname = "n", formula = "c(2, 3, 4, 3)", dist = "nonrandom")
 d1 <- defData(d1, varname = "lambda", formula = "c(6, 7, 9, 8)", dist = "nonrandom")
@@ -598,10 +611,12 @@ 

Cross-sectional dataExchangeable

Under the assumption of exchangeability, there is a constant -within-period correlation (\(\rho_w\)) +within-period correlation +(ρw\rho_w) across all study participants in the same period. For participants in -different periods, the between-period correlation (\(\rho_b\)) is different (presumably lower) -but constant over time.

+different periods, the between-period correlation +(ρb\rho_b) +is different (presumably lower) but constant over time.

A conceptual diagram of this exchangeable correlation matrix for a cross-sectional design is shown below; it includes three periods and two individuals per period. Each box represents a different time period. So, @@ -683,17 +698,28 @@

Exchangeable

Decay

-

Under the assumption of decay, the within-period correlation (\(\rho_w\)) is the same as under the -exchangeability assumptions. The between-period correlation is now a -function of the difference in time when the two individuals were -measured. It is \(\rho_w * r^{|s-t|}\), -where \(r\) is a decay parameter -between 0 and 1, and \(s\) and \(t\) are the periods under consideration. -For example, in the lower left-hand box, we have the correlation between -individuals in the first period (\(s=1\)) and individuals in the third period -(\(t=3\)), which gives a correlation -coefficient of \(\rho_w \times r^{|1-3|} = -\rho_w \times r^2\). As the difference in periods grows, \(r^{|s-t|}\) gets smaller.

+

Under the assumption of decay, the within-period correlation +(ρw\rho_w) +is the same as under the exchangeability assumptions. The between-period +correlation is now a function of the difference in time when the two +individuals were measured. It is +ρw*r|st|\rho_w * r^{|s-t|}, +where +rr +is a decay parameter between 0 and 1, and +ss +and +tt +are the periods under consideration. For example, in the lower left-hand +box, we have the correlation between individuals in the first period +(s=1s=1) +and individuals in the third period +(t=3t=3), +which gives a correlation coefficient of +ρw×r|13|=ρw×r2\rho_w \times r^{|1-3|} = \rho_w \times r^2. +As the difference in periods grows, +r|st|r^{|s-t|} +gets smaller.


\footnotesize{
R = \left( \begin{array}{cc|cc|cc}
1 & \rho_w & \rho_w r & \rho_w r & \rho_w r^2 & \rho_w r^2 \\
\rho_w & 1 & \rho_w r & \rho_w r & \rho_w r^2 & \rho_w r^2 \\ \hline
\rho_w r & \rho_w r & 1 & \rho_w & \rho_w r & \rho_w r \\
\rho_w r & \rho_w r & \rho_w & 1 & \rho_w r & \rho_w r \\ \hline
\rho_w r^2 & \rho_w r^2 & \rho_w r & \rho_w r & 1 & \rho_w \\
\rho_w r^2 & \rho_w r^2 & \rho_w r & \rho_w r & \rho_w & 1
\end{array} \right)
} @@ -786,10 +812,14 @@

Exchangeable\(\rho_a\). The -within-period between-individual correlation is still \(\rho_w\), and the between-period -between-individual correlation is still \(\rho_b\). All of these correlations remain -constant in the exchangeable framework:

+correlation coefficient +ρa\rho_a. +The within-period between-individual correlation is still +ρw\rho_w, +and the between-period between-individual correlation is still +ρb\rho_b. +All of these correlations remain constant in the exchangeable +framework:


\footnotesize{
R = \left( \begin{array}{cc|cc|cc}
1 & \rho_w & \rho_a & \rho_b & \rho_a & \rho_b \\
\rho_w & 1 & \rho_b & \rho_a & \rho_b & \rho_a \\ \hline
\rho_a & \rho_b & 1 & \rho_w & \rho_a & \rho_b \\
\rho_b & \rho_a & \rho_w & 1 & \rho_b & \rho_a \\ \hline
\rho_a & \rho_b & \rho_a & \rho_b & 1 & \rho_w \\
\rho_b & \rho_a & \rho_b & \rho_a & \rho_w & 1
\end{array} \right)
} @@ -866,11 +896,14 @@

Decay

The decay structure under an assumption of a closed cohort is the last of the four possible variations. The within-period -between-individual correlation \(\rho_w\) remains the same as the -cross-sectional case, and so does the between-period between-individual -correlation \(\rho_wr^{|s-t|}\). +between-individual correlation +ρw\rho_w +remains the same as the cross-sectional case, and so does the +between-period between-individual correlation +ρwr|st|\rho_wr^{|s-t|}. However, the between-period within-individual correlation is specified -as \(r^{|s-t|}\):

+as +r|st|r^{|s-t|}:


\footnotesize{
R = \left( \begin{array}{cc|cc|cc}
1 & \rho_w & r & \rho_w r & r^2 & \rho_w r^2 \\
\rho_w & 1 & \rho_w r & r & \rho_w r^2 & r^2 \\ \hline
r & \rho_w r & 1 & \rho_w & r & \rho_w r \\
\rho_w r & r & \rho_w & 1 & \rho_w r & r \\ \hline
r^2 & \rho_w r^2 & r & \rho_w r & 1 & \rho_w \\
\rho_w r^2 & r^2 & \rho_w r & r & \rho_w & 1
\end{array} \right)
} @@ -953,8 +986,10 @@

Generating block matrices

Cross-sectional data with exchangeable correlation

-

In the first example, we specify \(\rho_w = -0.5\) and \(\rho_b = 0.3\):

+

In the first example, we specify +ρw=0.5\rho_w = 0.5 +and +ρb=0.3\rho_b = 0.3:

 library(simstudy)
 library(data.table)
@@ -971,8 +1006,9 @@ 

Cross-sectional data ## [5,] 0.3 0.3 0.3 0.3 1.0 0.5 ## [6,] 0.3 0.3 0.3 0.3 0.5 1.0

The correlated data are generated using genCorGen, using -the correlation matrix \(R_XE\). I am -effectively generating 5000 data sets with 6 observations each, all +the correlation matrix +RXER_{XE}. +I am effectively generating 5000 data sets with 6 observations each, all based on a Poisson distribution with mean = 7. I then report the empirical correlation matrix.

@@ -991,8 +1027,10 @@ 

Cross-sectional data

Cross-sectional data with correlation decay

-

Here, there is a decay parameter \(r = -0.8\) and no parameter \(\rho_b\).

+

Here, there is a decay parameter +r=0.8r = 0.8 +and no parameter +ρb\rho_b.

 R_XD <- blockDecayMat(ninds = 2 , nperiods = 3, rho_w = 0.5,
   r = 0.8, pattern = "xsection")
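The block structure that blockDecayMat returns can also be reproduced by hand; the base-R sketch below (independent of simstudy, reusing rho_w = 0.5 and r = 0.8 from the call above) builds the cross-sectional decay matrix entry by entry:

```r
# Cross-sectional decay: correlation is 1 on the diagonal, rho_w within a
# period, and rho_w * r^|s - t| between periods s and t.
ninds <- 2; nperiods <- 3
rho_w <- 0.5; r <- 0.8

period <- rep(seq_len(nperiods), each = ninds)  # period of each observation
n <- ninds * nperiods
R <- matrix(NA_real_, n, n)
for (i in seq_len(n)) {
  for (j in seq_len(n)) {
    d <- abs(period[i] - period[j])
    R[i, j] <- if (i == j) 1 else rho_w * r^d
  }
}
round(R, 2)
```

The result should match the R_XD matrix printed above: 0.5 within each 2-person period block, 0.40 between adjacent periods, and 0.32 between periods 1 and 3.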
@@ -1019,7 +1057,10 @@ 

Cross-sectional data with c

Cohort data with exchangeable correlation

-

Since we have a cohort, we introduce \(\rho_a\) = 0.4, and specify \(pattern = \text{"cohort"}\):

+

Since we have a cohort, we introduce +ρa\rho_a += 0.4, and specify +pattern="cohort"pattern = \text{"cohort"}:

 R_CE <- blockExchangeMat(ninds = 2 , nperiods = 3, rho_w = 0.5, 
   rho_b = 0.3, rho_a = 0.4, pattern = "cohort")
@@ -1048,8 +1089,8 @@ 

Cohort data with correlation decay

In the final case, the parameterization for decaying correlation with a cohort is the same as a decay in the case of a cross sectional design; -the only difference that we set \(pattern = -\text{"cohort"}\):

+the only difference is that we set +pattern="cohort"pattern = \text{"cohort"}:

 R_CD <- blockDecayMat(ninds = 2 , nperiods = 3, rho_w = 0.5, 
   r = 0.8, pattern = "cohort")
@@ -1084,10 +1125,12 @@ 

Varying correlation matrices by parameters by cluster.

In this example, there are 10 clusters and three periods. The number of individuals per cluster per period ranges from two to four, and are -randomly generated. The decay rate \(r\) varies by cluster (generated using the -beta distribution with shape parameters 6 and 2). The parameter -\(\rho_w\) is constant across all -clusters and is 0.6

+randomly generated. The decay rate +rr +varies by cluster (generated using the beta distribution with +shape parameters 6 and 2). The parameter +ρw\rho_w +is constant across all clusters and is 0.6

 defC <- defData(varname = "lambda", formula = "sample(5:10, 1)", dist = "nonrandom")
 defP <- defDataAdd(varname = "n", formula = "2;4", dist="uniformInt")
@@ -1198,9 +1241,7 @@ 

Varying correlation matrices by +

@@ -1213,16 +1254,16 @@

Varying correlation matrices by

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/correlated.html b/articles/correlated.html index 082fab87..eea6aacd 100644 --- a/articles/correlated.html +++ b/articles/correlated.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Correlated Data

- Source: vignettes/correlated.Rmd + Source: vignettes/correlated.Rmd
@@ -543,9 +542,7 @@

Correlated data: additional di +

@@ -558,16 +555,16 @@

Correlated data: additional di

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/customdist.html b/articles/customdist.html index 8eed2454..c782c4f4 100644 --- a/articles/customdist.html +++ b/articles/customdist.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Customized Distributions

- Source: vignettes/customdist.Rmd + Source: vignettes/customdist.Rmd
@@ -130,33 +129,56 @@

Customized Distributions

the user-defined function as a string in the formula argument. The arguments of the custom function are listed in the variance argument, separated by commas and formatted as “arg_1 = -val_form_1, arg_2 = val_form_2, \(\dots\), arg_K = val_form_K”.

+val_form_1, arg_2 = val_form_2, +\dots, +arg_K = val_form_K”.

Here, the arg_k’s represent the names of the arguments -passed to the customized function, where \(k\) ranges from \(1\) to \(K\). You can use values or formulas for -each val_form_k. If formulas are used, ensure that the -variables have been previously generated. Double dot notation is -available in specifying value_formula_k. One important -requirement of the custom function is that the parameter list used to -define the function must include an argument”n = n”, -but do not include \(n\) in the -definition as part of defData or +passed to the customized function, where +kk +ranges from +11 +to +KK. +You can use values or formulas for each val_form_k. If formulas +are used, ensure that the variables have been previously generated. +Double dot notation is available in specifying value_formula_k. +One important requirement of the custom function is that the parameter +list used to define the function must include an argument”n = +n”, but do not include +nn +in the definition as part of defData or defDataAdd.

Example 1

Here is an example where we would like to generate data from a zero-inflated beta distribution. In this case, there is a user-defined -function zeroBeta that takes on shape parameters \(a\) and \(b\), as well as \(p_0\), the proportion of the sample that is -zero. Note that the function also takes an argument \(n\) that will not to be be specified in the -data definition; \(n\) will represent -the number of observations being generated:

+function zeroBeta that takes on shape parameters +aa +and +bb, +as well as +p0p_0, +the proportion of the sample that is zero. Note that the function also +takes an argument +nn +that will not to be be specified in the data definition; +nn +will represent the number of observations being generated:

 zeroBeta <- function(n, a, b, p0) {
   betas <- rbeta(n, a, b)
   is.zero <- rbinom(n, 1, p0)
   betas*!(is.zero)
 }
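As a quick sanity check on zeroBeta (a base-R sketch, not part of the vignette; the seed and parameter values are arbitrary), the proportion of exact zeros should track p0 and all draws should stay in [0, 1]:

```r
# zeroBeta as defined above: a beta draw, zeroed out with probability p0
zeroBeta <- function(n, a, b, p0) {
  betas <- rbeta(n, a, b)
  is.zero <- rbinom(n, 1, p0)
  betas * !(is.zero)
}

set.seed(123)                                    # arbitrary seed
x <- zeroBeta(1e5, a = 0.75, b = 0.75, p0 = 0.2)

mean(x == 0)   # close to 0.2, since rbeta itself returns 0 with probability 0
range(x)       # all values lie in [0, 1]
```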
-

The data definition specifies a new variable \(zb\) that sets \(a\) and \(b\) to 0.75, and \(p_0 = 0.02\):

+

The data definition specifies a new variable +zbzb +that sets +aa +and +bb +to 0.75, and +p0=0.02p_0 = 0.02:

 def <- defData(
   varname = "zb", 
@@ -189,12 +211,15 @@ 

Example 1Example 2

In this second example, we are generating sets of truncated Gaussian -distributions with means ranging from \(-1\) to \(1\). The limits of the truncation vary -across three different groups. rnormt is a customized -(user-defined) function that generates the truncated Gaussiandata. The -function requires four arguments (the left truncation value, the right -truncation value, the distribution average and the standard -deviation).

+distributions with means ranging from +1-1 +to +11. +The limits of the truncation vary across three different groups. +rnormt is a customized (user-defined) function that +generates the truncated Gaussian data. The function requires four +arguments (the left truncation value, the right truncation value, the +distribution average and the standard deviation).

- - + + diff --git a/articles/double_dot_extension.html b/articles/double_dot_extension.html index 0094583e..d5897ce7 100644 --- a/articles/double_dot_extension.html +++ b/articles/double_dot_extension.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Dynamic Data Definition

- Source: vignettes/double_dot_extension.Rmd + Source: vignettes/double_dot_extension.Rmd
@@ -339,11 +338,15 @@

Using non-scalar double- ## 3: 3 1.33 8.69 14.7 7.13 ## 4: 4 6.19 9.04 12.0 8.52

These arrays can also have multiple dimensions, as -in a \(2 \times 2\) matrix. If we want -to specify the mean outcomes for a factorial study design with two -interventions \(a\) and \(b\), we can use a simple matrix and draw -the means directly from the matrix, which in this example is stored in -the variable effect:

+in a +2×22 \times 2 +matrix. If we want to specify the mean outcomes for a factorial study +design with two interventions +aa +and +bb, +we can use a simple matrix and draw the means directly from the matrix, +which in this example is stored in the variable effect:

 effect <- matrix(c(0, 4, 5, 7), nrow = 2)
 effect
@@ -386,9 +389,7 @@

Using non-scalar double- +

@@ -401,16 +402,16 @@

Using non-scalar double-

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/index.html b/articles/index.html index d6fdf1a7..d629328d 100644 --- a/articles/index.html +++ b/articles/index.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@

- +
@@ -132,15 +132,15 @@

All vignettes

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/logisticCoefs.html b/articles/logisticCoefs.html index 4adf60f9..435c908f 100644 --- a/articles/logisticCoefs.html +++ b/articles/logisticCoefs.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Targeted logistic model coefficients

- Source: vignettes/logisticCoefs.Rmd + Source: vignettes/logisticCoefs.Rmd
@@ -153,11 +152,14 @@

Targeted logistic model coefficients

Prevalence

In this first example, we start with one set of assumptions for four -covariates \(x_1, x2 \sim N(0, 1)\), -\(b_1 \sim Bin(0.3)\), and \(b_2 \sim Bin(0.7)\), and generate the -outcome y with the following data generating process:

-

\[ \text{logit}(y) = 0.15x_1 + 0.25x_2 + -0.10b_1 + 0.30b_2\]

+covariates +x1,x2N(0,1)x_1, x_2 \sim N(0, 1), +b1Bin(0.3)b_1 \sim Bin(0.3), +and +b2Bin(0.7)b_2 \sim Bin(0.7), +and generate the outcome y with the following data generating +process:

+

logit(y)=0.15x1+0.25x2+0.10b1+0.30b2 \text{logit}(y) = 0.15x_1 + 0.25x_2 + 0.10b_1 + 0.30b_2

 library(simstudy)
 library(ggplot2)
@@ -192,15 +194,15 @@ 

Prevalence## 499998: 499998 -1.10 0.380 1 0 0 ## 499999: 499999 0.56 -1.042 0 1 0 ## 500000: 500000 0.52 0.076 0 1 1

-

The overall proportion of \(y=1\) in -this case is

+

The overall proportion of +y=1y=1 +in this case is

## [1] 0.56

If we have a desired marginal proportion of 0.40, then we can add an intercept of -0.66 to the data generating process:

-

\[ \text{logit}(y) = -0.66 + 0.15x_1 + -0.25x_2 + 0.10b_1 + 0.30b_2\]

+

logit(y)=0.66+0.15x1+0.25x2+0.10b1+0.30b2 \text{logit}(y) = -0.66 + 0.15x_1 + 0.25x_2 + 0.10b_1 + 0.30b_2

The simulation now gives us the desired target:

 d1a <- defData(d1, varname = "y", 
@@ -209,10 +211,15 @@ 

PrevalencegenData(500000, d1a)[, mean(y)]

## [1] 0.4
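The effect of the -0.66 intercept can also be verified without simstudy; this base-R sketch (arbitrary seed) simulates the same data generating process directly:

```r
# Simulate logit(y) = -0.66 + 0.15*x1 + 0.25*x2 + 0.10*b1 + 0.30*b2 under the
# stated covariate distributions and check the marginal proportion of y = 1.
set.seed(1)
n  <- 5e5
x1 <- rnorm(n)
x2 <- rnorm(n)
b1 <- rbinom(n, 1, 0.3)
b2 <- rbinom(n, 1, 0.7)

p <- plogis(-0.66 + 0.15 * x1 + 0.25 * x2 + 0.10 * b1 + 0.30 * b2)
y <- rbinom(n, 1, p)
mean(y)   # close to the target marginal proportion of 0.40
```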
-

If we change the distribution of the covariates, so that \(x_1 \sim N(1, 1)\), \(x_2 \sim N(2, 1)\), \(b_1 \sim Bin(0.5)\), and \(b_2 \sim Bin(0.8)\), and the strength of -the association of these covariates with the outcome so that

-

\[ \text{logit}(y) = 0.20x_1 + 0.35x_2 + -0.20b_1 + 0.45b_2,\]

+

If we change the distribution of the covariates, so that +x1N(1,1)x_1 \sim N(1, 1), +x2N(2,1)x_2 \sim N(2, 1), +b1Bin(0.5)b_1 \sim Bin(0.5), +and +b2Bin(0.8)b_2 \sim Bin(0.8), +and the strength of the association of these covariates with the outcome +so that

+

logit(y)=0.20x1+0.35x2+0.20b1+0.45b2, \text{logit}(y) = 0.20x_1 + 0.35x_2 + 0.20b_1 + 0.45b_2,

the marginal proportion/prevalence (assuming no intercept term) also changes, going from 0.56 to 0.84:

@@ -231,8 +238,7 @@ 

Prevalence## [1] 0.84

But under this new distribution, adding an intercept of -2.13 yields the desired target.

-

\[ \text{logit}(y) = -2.13 + 0.20x_1 + -0.35x_2 + 0.20b_1 + 0.45b_2 \]

+

logit(y)=2.13+0.20x1+0.35x2+0.20b1+0.45b2 \text{logit}(y) = -2.13 + 0.20x_1 + 0.35x_2 + 0.20b_1 + 0.45b_2


 d2a <- defData(d2, varname = "y", 
@@ -282,13 +288,16 @@ 

Risk ratios\(A =1\) to control (\(A=0\)) (given the distribution of -covariates) is

-

\[RR = \frac{P(y=1 | A = 1)}{P(y=1 | A = -0)}\] In the data generation process we use a log-odds ratio of --0.40 (odds ratio of approximately 0.67) in both cases, but we get -different risk ratios (0.82 vs. 0.93), depending on the covariates -(defined in d1 and d2).

+The marginal risk ratio comparing treatment +(A=1A=1) +to control +(A=0A=0) +(given the distribution of covariates) is

+

RR=P(y=1|A=1)P(y=1|A=0)RR = \frac{P(y=1 | A = 1)}{P(y=1 | A = 0)} +In the data generation process we use a log-odds ratio of -0.40 (odds +ratio of approximately 0.67) in both cases, but we get different risk +ratios (0.82 vs. 0.93), depending on the covariates (defined in +d1 and d2).

 d1a <- defData(d1, varname = "rx", formula = "1;1", dist = "trtAssign")
 d1a <- defData(d1a, varname = "y",
@@ -320,9 +329,10 @@ 

Risk ratiosC1

##    B0     A    x1    x2    b1    b2 
 ## -0.66 -0.26  0.15  0.25  0.10  0.30
-

If we use \(C_1\) in the data -generation process, we will get a data set with the desired target -prevalence and risk ratio:

+

If we use +C1C_1 +in the data generation process, we will get a data set with the desired +target prevalence and risk ratio:

##    B0     A    x1    x2    b1    b2 
 ## -0.66 -0.71  0.15  0.25  0.10  0.30
-

Again, using \(C_1\) in the data -generation process, we will get a data set with the desired target -prevalence and risk difference:

+

Again, using +C1C_1 +in the data generation process, we will get a data set with the desired +target prevalence and risk difference:

 d1a <- defData(d1, varname = "rx", formula = "1;1", dist = "trtAssign")
 d1a <- defData(d1a, varname = "y",
@@ -387,10 +397,11 @@ 

AUC C1

##    B0    x1    x2    b1    b2 
 ## -1.99  0.85  1.41  0.56  1.69
-

Again, using \(C_1\) in the data -generation process, we will get a data set with the desired target -prevalence and the AUC (calculated here using the lrm -function in the rms package:

+

+Again, using +C1C_1 +in the data generation process, we will get a data set with the desired +target prevalence and the AUC (calculated here using the +lrm function in the rms package):

 d1a <- defData(d1, varname = "y",
   formula = "t(..C1) %*% c(1, x1, x2, b1, b2)",
@@ -433,16 +444,16 @@ 

AUC

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/longitudinal.html b/articles/longitudinal.html index 20e8ee8f..7d5c2e7f 100644 --- a/articles/longitudinal.html +++ b/articles/longitudinal.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Longitudinal Data

- Source: vignettes/longitudinal.Rmd + Source: vignettes/longitudinal.Rmd
@@ -255,9 +254,7 @@

Longitudi +

@@ -270,16 +267,16 @@

Longitudi

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/missing.html b/articles/missing.html index 7ec3cea9..38f3383d 100644 --- a/articles/missing.html +++ b/articles/missing.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Missing Data

- Source: vignettes/missing.Rmd + Source: vignettes/missing.Rmd
@@ -322,9 +321,7 @@

Longitudinal data with missingness

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/ordinal.html b/articles/ordinal.html index 521515b7..3055a710 100644 --- a/articles/ordinal.html +++ b/articles/ordinal.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Ordinal Categorical Data

- Source: vignettes/ordinal.Rmd + Source: vignettes/ordinal.Rmd
@@ -136,22 +135,24 @@

Ordinal Categorical Data

probabilities, odds, or log-odds. Comparisons of different exposures or individual characteristics typically look at how these cumulative measures vary across the different exposures or characteristics. So, if -we were interested in cumulative odds, we would compare \[\small{\frac{P(response = 1|exposed)}{P(response +we were interested in cumulative odds, we would compare $$\small{\frac{P(response = 1|exposed)}{P(response > 1|exposed)} \ \ vs. \ \frac{P(response = 1|unexposed)}{P(response -> 1|unexposed)}},\]

+> 1|unexposed)}},$$

and continue until the last (in this case, third) comparison

-

\[\small{\frac{P(response \le +

$$\small{\frac{P(response \le 3|exposed)}{P(response > 3|exposed)} \ \ vs. \ \frac{P(response \le -3|unexposed)}{P(response > 3|unexposed)}},\]

+3|unexposed)}{P(response > 3|unexposed)}},$$

We can use an underlying (continuous) latent process as the basis for data generation. If we assume that probabilities are determined by segments of a logistic distribution (see below), we can define the ordinal mechanism using thresholds along the support of the -distribution. If there are \(k\) +distribution. If there are +kk possible responses (in the meat example, we have 4), then there will be -\(k-1\) thresholds. The area under the -logistic density curve of each of the regions defined by those -thresholds (there will be \(k\) +k1k-1 +thresholds. The area under the logistic density curve of each of the +regions defined by those thresholds (there will be +kk distinct regions) represents the probability of each possible response tied to that region.
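The threshold logic can be made concrete with a few lines of base R (the threshold values here are hypothetical, chosen only to illustrate the mapping from latent thresholds to category probabilities):

```r
# With k = 4 ordinal responses there are k - 1 thresholds on the latent
# logistic scale; each category's probability is the area between its edges.
thresholds <- c(-1.2, 0.4, 1.4)        # hypothetical alpha_1, alpha_2, alpha_3
cdf <- c(0, plogis(thresholds), 1)     # logistic CDF at the region boundaries
probs <- diff(cdf)                     # one probability per category
probs
sum(probs)                             # the four regions partition the density
```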

## Warning: A numeric `legend.position` argument in `theme()` was deprecated in ggplot2
@@ -168,10 +169,10 @@ 

Comparing res odds ratio of one population relative to another is constant across all the possible responses. This means that all of the cumulative odds ratios are equal:

-

\[\small{\frac{codds(P(Resp = 1 | +

$$\small{\frac{codds(P(Resp = 1 | exposed))}{codds(P(Resp = 1 | unexposed))} = \ ... \ = \frac{codds(P(Resp \leq 3 | exposed))}{codds(P(Resp \leq 3 | -unexposed))}}\]

+unexposed))}}$$

In terms of the underlying process, this means that each of the thresholds shifts the same amount (as shown below) where we add 0.7 units to each threshold that was set for the exposed group. What this @@ -190,22 +191,32 @@

The cumulative proportional odds

In the R package ordinal, the model is fit using function clm. The model that is being estimated has the form

-

\[log \left( \frac{P(Resp \leq i)}{P(Resp -> i)} | Group \right) = \alpha_i - \beta*I(Group=exposed) \ \ , \ i -\in \{1, 2, 3\}\]

+

log(P(Respi)P(Resp>i)|Group)=αiβ*I(Group=exposed),i{1,2,3}log \left( \frac{P(Resp \leq i)}{P(Resp > i)} | Group \right) = \alpha_i - \beta*I(Group=exposed) \ \ , \ i \in \{1, 2, 3\}

The model specifies that the cumulative log-odds for a particular -category is a function of two parameters, \(\alpha_i\) and \(\beta\). (Note that in this -parameterization and the model fit, \(-\beta\) is used.) \(\alpha_i\) represents the cumulative log -odds of being in category \(i\) or -lower for those in the reference exposure group, which in our example is -Group A. \(\alpha_i\) also -represents the threshold of the latent continuous (logistic) data -generating process. \(\beta\) is -the cumulative log-odds ratio for the category \(i\) comparing the unexposed to reference -group, which is the exposed. \(\beta\) also represents the shift of the -threshold on the latent continuous process for the exposed relative to -the unexposed. The proportionality assumption implies that the -shift of the threshold for each of the categories is identical.

+category is a function of two parameters, +αi\alpha_i +and +β\beta. +(Note that in this parameterization and the model fit, +β-\beta +is used.) +αi\alpha_i +represents the cumulative log odds of being in category +ii +or lower for those in the reference exposure group, which in our example +is Group A. +αi\alpha_i +also represents the threshold of the latent continuous (logistic) data +generating process. +β\beta +is the cumulative log-odds ratio for the category +ii +comparing the unexposed to reference group, which is the exposed. +β\beta +also represents the shift of the threshold on the latent continuous +process for the exposed relative to the unexposed. The +proportionality assumption implies that the shift of the threshold for +each of the categories is identical.

Simulation @@ -247,10 +258,13 @@

Simulation## 2|3 0.3768 0.0173 21.8 ## 3|4 1.3696 0.0201 68.3

In the model output, the exposed coefficient of -1.15 is -the estimate of \(-\beta\) (i.e. \(\hat{\beta} = 1.15\)), which was set to --1.1 in the simulation. The threshold coefficients are the estimates of -the \(\alpha_i\)’s in the model - and -match the thresholds for the unexposed group.

+the estimate of +β-\beta +(i.e. β̂=1.15\hat{\beta} = 1.15), +which was set to -1.1 in the simulation. The threshold coefficients are +the estimates of the +αi\alpha_i’s +in the model - and match the thresholds for the unexposed group.

The log of the cumulative odds for groups 1 to 4 from the data without exposure are

@@ -443,16 +456,16 @@

Correlated multivariate ordinal da

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/simstudy.html b/articles/simstudy.html index 90397eb4..94d47aa3 100644 --- a/articles/simstudy.html +++ b/articles/simstudy.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Simulating Study Data

- Source: vignettes/simstudy.Rmd + Source: vignettes/simstudy.Rmd
@@ -151,12 +150,23 @@

Simulating Study Data

processes may not follow the standard parameterizations for the specific distributions. For example, in simstudy gamma-distributed data are generated based on the specification -of a mean \(\mu\) (or \(\log(\mu)\)) and a dispersion \(d\), rather than shape \(\alpha\) and rate \(\beta\) parameters that more typically -characterize the gamma distribution. When we estimate the -parameters, we are modeling \(\mu\) (or -some function of \((\mu)\)), so we -should explicitly recover the simstudy parameters used to -generate the model - illuminating the relationship between the +of a mean +μ\mu +(or +log(μ)\log(\mu)) +and a dispersion +dd, +rather than shape +α\alpha +and rate +β\beta +parameters that more typically characterize the gamma +distribution. When we estimate the parameters, we are modeling +μ\mu +(or some function of +(μ)(\mu)), +so we should explicitly recover the simstudy parameters +used to generate the model - illuminating the relationship between the underlying data generating processes and the models.

Overview @@ -243,10 +253,11 @@

Defining the DataGenerating the data

After the data set definitions have been created, a new data set with -\(n\) observations can be created with -a call to function genData. In this -example, 1,000 observations are generated using the data set definitions -in def, and then stored in the object +nn +observations can be created with a call to function +genData. In this example, 1,000 +observations are generated using the data set definitions in +def, and then stored in the object dd:

 set.seed(87261)
@@ -505,7 +516,7 @@ 

Distributions- - X -- +X X @@ -515,7 +526,7 @@

Distributions- # of trials X -- +X X @@ -526,7 +537,7 @@

Distributionsa;b;c X - -- +X clusterSize @@ -664,53 +675,100 @@

Distributionsbeta

A beta distribution is a continuous data distribution that -takes on values between \(0\) and \(1\). The formula specifies the -mean \(\mu\) (with the ‘identity’ link) -or the log-odds of the mean (with the ‘logit’ link). The scalar value in -the ‘variance’ represents the dispersion value \(d\). The variance \(\sigma^2\) for a beta distributed variable -will be \(\mu (1- \mu)/(1 + d)\). +takes on values between +00 +and +11. +The formula specifies the mean +μ\mu +(with the ‘identity’ link) or the log-odds of the mean (with the ‘logit’ +link). The scalar value in the ‘variance’ represents the dispersion +value +dd. +The variance +σ2\sigma^2 +for a beta distributed variable will be +μ(1μ)/(1+d)\mu (1- \mu)/(1 + d). Typically, the beta distribution is specified using two shape parameters -\(\alpha\) and \(\beta\), where \(\mu = \alpha/(\alpha + \beta)\) and \(\sigma^2 = \alpha\beta/[(\alpha + \beta)^2 (\alpha -+ \beta + 1)]\).

+α\alpha +and +β\beta, +where +μ=α/(α+β)\mu = \alpha/(\alpha + \beta) +and +σ2=αβ/[(α+β)2(α+β+1)]\sigma^2 = \alpha\beta/[(\alpha + \beta)^2 (\alpha + \beta + 1)].
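The two parameterizations can be reconciled algebraically: equating \(\sigma^2 = \mu(1-\mu)/(1+d)\) with \(\mu(1-\mu)/(\alpha+\beta+1)\) gives \(\alpha + \beta = d\), so \(\alpha = \mu d\) and \(\beta = (1-\mu)d\). A base-R sketch of this conversion (parameter values chosen purely for illustration):

```r
# Convert the (mu, d) beta parameterization described above into the
# usual shape parameters, then check the moments by simulation.
mu <- 0.3
d  <- 8
alpha <- mu * d        # 2.4
beta  <- (1 - mu) * d  # 5.6

set.seed(1)
x <- rbeta(1e5, shape1 = alpha, shape2 = beta)
mean(x)  # should be close to mu = 0.3
var(x)   # should be close to mu * (1 - mu) / (1 + d) = 0.21 / 9
```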

binary

A binary distribution is a discrete data distribution that -takes values \(0\) or \(1\). (It is more conventionally called a -Bernoulli distribution, or is a binomial distribution -with a single trial \(n=1\).) The -formula represents the probability (with the ‘identity’ -link) or the log odds (with the ‘logit’ link) that the variable takes -the value of 1. The mean of this distribution is \(p\), and variance \(\sigma^2\) is \(p(1-p)\).

+takes values +00 +or +11. +(It is more conventionally called a Bernoulli distribution, or +is a binomial distribution with a single trial +n=1n=1.) +The formula represents the probability (with the ‘identity’ +link), the relative risk (with the ‘log’ link), or the log odds (with +the ‘logit’ link) that the variable takes the value of 1. The mean of +this distribution is +pp, +and variance +σ2\sigma^2 +is +p(1p)p(1-p).

binomial

A binomial distribution is a discrete data distribution that represents the count of the number of successes given a number of -trials. The formula specifies the probability of success \(p\), and the variance field is used to -specify the number of trials \(n\). -Given a value of \(p\), the mean \(\mu\) of this distribution is \(n*p\), and the variance \(\sigma^2\) is \(np(1-p)\).

+trials. The formula specifies the probability of success (with the +‘identity’ link), the relative risk (with the ‘log’ link), or the log +odds of success (with the ‘logit’ link), and +the variance field is used to specify the number of trials +nn. +Given a value of +pp, +the mean +μ\mu +of this distribution is +n*pn*p, +and the variance +σ2\sigma^2 +is +np(1p)np(1-p).

categorical

A categorical distribution is a discrete data distribution -taking on values from \(1\) to \(K\), with each value representing a -specific category, and there are \(K\) +taking on values from +11 +to +KK, +with each value representing a specific category, and there are +KK categories. The categories may or may not be ordered. For a categorical -variable with \(k\) categories, the -formula is a string of probabilities that sum to 1, each -separated by a semi-colon: \((p_1 ; p_2 ; ... -; p_k)\). \(p_1\) is the -probability of the random variable falling in category \(1\), \(p_2\) is the probability of category \(2\), etc. The probabilities can be -specified as functions of other variables previously defined. The helper -function genCatFormula is an easy way to create different -probability strings. The link options are identity -or logit. The variance field is optional an allows -to provide categories other than the default 1...n in the -same format as formula: “a;b;c”. Numeric variance Strings +variable with +kk +categories, the formula is a string of probabilities that +sum to 1, each separated by a semi-colon: +(p1;p2;...;pk)(p_1 ; p_2 ; ... ; p_k). +p1p_1 +is the probability of the random variable falling in category +11, +p2p_2 +is the probability of category +22, +etc. The probabilities can be specified as functions of other variables +previously defined. The helper function genCatFormula is an +easy way to create different probability strings. The link +options are identity or logit. The +variance field is optional and allows providing categories +other than the default 1...n in the same format as +formula: “a;b;c”. Numeric variance strings (e.g. “50;100;200”) will be converted to numeric when possible. All probabilities will be rounded to 1e12 decimal points to prevent possible rounding errors.
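As a brief sketch of the helper mentioned above (assuming simstudy is installed; the variable name and category labels are illustrative), genCatFormula pastes probabilities into the semicolon-separated string that defData expects, and the variance field supplies the labels:

```r
library(simstudy)

# Build the probability string and define a 3-level categorical variable
# with labels "a", "b", "c" supplied through the variance field.
probs <- genCatFormula(0.25, 0.25, 0.50)

def <- defData(varname = "grp", formula = probs,
               variance = "a;b;c", dist = "categorical")

set.seed(7)
dd <- genData(10000, def)
prop.table(table(dd$grp))  # proportions should be near 0.25, 0.25, 0.50
```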

@@ -737,17 +795,23 @@

custom user-defined function as a string in the formula argument. The arguments of the custom function are listed in the variance argument, separated by commas and formatted as “arg_1 = -val_form_1, arg_2 = val_form_2, \(\dots\), arg_K = val_form_K”. The -arg_k’s represent the names of the arguments passed to the -customized function, where \(k\) ranges -from \(1\) to \(K\). Values or formulas can be used for -each val_form_k. If formulas are used, ensure that the -variables have been previously generated. Double dot notation is -available in specifying value_formula_k. One important -requirement of the custom function is that the parameter list used to -define the function must include an argument”n = n”, -but do not include \(n\) in the -definition as part of defData or +val_form_1, arg_2 = val_form_2, +\dots, +arg_K = val_form_K”. The arg_k’s represent the names +of the arguments passed to the customized function, where +kk +ranges from +11 +to +KK. +Values or formulas can be used for each val_form_k. If formulas +are used, ensure that the variables have been previously generated. +Double dot notation is available in specifying value_formula_k. +One important requirement of the custom function is that the parameter +list used to define the function must include an argument “n = +n”, but do not include +nn +in the definition as part of defData or defDataAdd.
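A hypothetical sketch of the pattern just described (the function name and its arguments are invented for illustration). Note that truncNorm accepts n, but n is not listed in the variance argument:

```r
library(simstudy)

# A user-defined generator: a normal draw censored below at `lower`.
truncNorm <- function(n, lower, mean, sd) {
  pmax(rnorm(n, mean, sd), lower)
}

def <- defData(varname = "y", dist = "custom",
               formula = "truncNorm",
               variance = "lower = 0, mean = 2, sd = 1")

set.seed(11)
dd <- genData(1000, def)
min(dd$y)  # never below the lower bound of 0
```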

gamma

A gamma distribution is a continuous data distribution that takes on non-negative values. The formula specifies the -mean \(\mu\) (with the ‘identity’ link) -or the log of the mean (with the ‘log’ link). The variance -field represents a dispersion value \(d\). The variance \(\sigma^2\) is is \(d \mu^2\).

+mean +μ\mu +(with the ‘identity’ link) or the log of the mean (with the ‘log’ link). +The variance field represents a dispersion value +dd. +The variance +σ2\sigma^2 +is +dμ2d \mu^2.

nonrandom @@ -809,37 +899,57 @@

nonrandomnormal

A normal or Gaussian distribution is a continuous -data distribution that takes on values between \(-\infty\) and \(\infty\). The formula -represents the mean \(\mu\) and the -variance represents \(\sigma^2\). The link field is -not applied to the normal distribution.

+data distribution that takes on values between +-\infty +and +\infty. +The formula represents the mean +μ\mu +and the variance represents +σ2\sigma^2. +The link field is not applied to the normal +distribution.

noZeroPoisson

The noZeroPoisson distribution is a discrete data distribution that takes on positive integers. This is a truncated -poisson distribution that excludes \(0\). The formula specifies the -parameter \(\lambda\) (link is -‘identity’) or log() (link is log). The +poisson distribution that excludes +00. +The formula specifies the parameter +λ\lambda +(link is ‘identity’) or log() (link is log). The variance field does not apply to this distribution. The -mean \(\mu\) of this distribution is -\(\lambda/(1-e^{-\lambda})\) and the -variance \(\sigma^2\) is \((\lambda + \lambda^2)/(1-e^{-\lambda}) - -\lambda^2/(1-e^{-\lambda})^2\). We are not typically interested -in modeling data drawn from this distribution (except in the case of a -hurdle model), but it is useful to generate positive count data -where it is not desirable to have any \(0\) values.

+mean +μ\mu +of this distribution is +λ/(1eλ)\lambda/(1-e^{-\lambda}) +and the variance +σ2\sigma^2 +is +(λ+λ2)/(1eλ)λ2/(1eλ)2(\lambda + \lambda^2)/(1-e^{-\lambda}) - \lambda^2/(1-e^{-\lambda})^2. +We are not typically interested in modeling data drawn from this +distribution (except in the case of a hurdle model), but it is +useful to generate positive count data where it is not desirable to have +any +00 +values.
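The truncated distribution can be sketched in base R by restricting the uniform draw to the upper portion of the Poisson CDF and then inverting it (this illustrates the distribution itself, not simstudy's internal implementation):

```r
# Draw from a zero-truncated Poisson: restrict U to (P(X = 0), 1] and
# invert the Poisson CDF.
rtpois <- function(n, lambda) {
  u <- runif(n, min = ppois(0, lambda), max = 1)
  qpois(u, lambda)
}

set.seed(5)
x <- rtpois(1e5, lambda = 0.5)
min(x)   # zero is excluded, so the minimum is 1
mean(x)  # close to lambda / (1 - exp(-lambda)), about 1.27 here
```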

poisson

The poisson distribution is a discrete data distribution that takes on non-negative integers. The formula specifies -the mean \(\lambda\) (link is -‘identity’) or log of the mean (link is log). The +the mean +λ\lambda +(link is ‘identity’) or log of the mean (link is log). The variance field does not apply to this distribution. The -variance \(\sigma^2\) is \(\lambda\) itself.

+variance +σ2\sigma^2 +is +λ\lambda +itself.

trtAssign @@ -864,23 +974,37 @@

trtAssignuniform

A uniform distribution is a continuous data distribution -that takes on values from \(a\) to -\(b\), where \(b\) > \(a\), and they both lie anywhere on the real -number line. The formula is a string with the format “a;b”, -where a and b are scalars or functions of previously -defined variables. The variance and link -arguments do not apply to the uniform distribution.

+that takes on values from +aa +to +bb, +where +bb +> +aa, +and they both lie anywhere on the real number line. The +formula is a string with the format “a;b”, where a +and b are scalars or functions of previously defined variables. +The variance and link arguments do not apply +to the uniform distribution.

uniformInt

A uniform integer distribution is a discrete data -distribution that takes on values from \(a\) to \(b\), where \(b\) > \(a\), and they both lie anywhere on the -integer number line. The formula is a string with the -format “a;b”, where a and b are scalars or functions -of previously defined variables. The variance and -link arguments do not apply to the uniform integer -distribution.

+distribution that takes on values from +aa +to +bb, +where +bb +> +aa, +and they both lie anywhere on the integer number line. The +formula is a string with the format “a;b”, where a +and b are scalars or functions of previously defined variables. +The variance and link arguments do not apply +to the uniform integer distribution.

@@ -987,8 +1111,12 @@

defCondition and addCondition\(y\) on \(x\) varies depending on the value of the -predictor \(x\):

+

In this example, the slope of a regression line of +yy +on +xx +varies depending on the value of the predictor +xx:

 d <- defData(varname = "x", formula = 0, variance = 9, dist = "normal")
 
@@ -1013,9 +1141,7 @@ 

defCondition and addCondition - -

+

@@ -1028,16 +1154,16 @@

defCondition and addCondition

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/spline.html b/articles/spline.html index 74b0a450..29ad04c6 100644 --- a/articles/spline.html +++ b/articles/spline.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Spline Data

- Source: vignettes/spline.Rmd + Source: vignettes/spline.Rmd
@@ -266,16 +265,16 @@

Spline Data

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/survival.html b/articles/survival.html index 0f03d3f8..c1cf2d4c 100644 --- a/articles/survival.html +++ b/articles/survival.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Survival Data

- Source: vignettes/survival.Rmd + Source: vignettes/survival.Rmd
@@ -143,24 +142,33 @@

Survival Data

Weibull distribution

The density, mean, and variance of the Weibull distribution that is -used in the data generation process are defined by the parameters \(\lambda\) (scale) and \(\nu\) (shape) as shown below.

-\[\begin{aligned} -f(t) &= \frac{t^{\frac{1}{\nu}-1}}{\lambda \nu} -exp\left(-\frac{t^\frac{1}{\nu}}{\lambda}\right) \\ +used in the data generation process are defined by the parameters +λ\lambda +(scale) and +ν\nu +(shape) as shown below.

+f(t)=t1ν1λνexp(t1νλ)E(T)=λνΓ(ν+1)Var(T)=(λ2)ν(Γ(2ν+1)Γ2(ν+1))\begin{aligned} +f(t) &= \frac{t^{\frac{1}{\nu}-1}}{\lambda \nu} exp\left(-\frac{t^\frac{1}{\nu}}{\lambda}\right) \\ E(T) &= \lambda ^ \nu \Gamma(\nu + 1) \\ -Var(T) &= (\lambda^2)^\nu \left( \Gamma(2 \nu + 1) - \Gamma^2(\nu + -1) \right) \\ -\end{aligned}\]
-


-

The survival time \(T\) data are -generated based on this formula:

-

\[ +Var(T) &= (\lambda^2)^\nu \left( \Gamma(2 \nu + 1) - \Gamma^2(\nu + 1) \right) \\ +\end{aligned}


+

The survival time +TT +data are generated based on this formula:

+

T=(log(U)λexp(βx))ν, T = \left( -\frac{log(U) \lambda}{exp(\beta ^ \prime x)} \right)^\nu, -\]

-

where \(U\) is a uniform random -variable between 0 and 1, \(\beta\) is -a vector of parameters in a Cox proportional hazard model, and \(x\) is a vector of covariates that impact -survival time. \(\lambda\) and \(\nu\) can also vary by covariates.

+

+

where +UU +is a uniform random variable between 0 and 1, +β\beta +is a vector of parameters in a Cox proportional hazard model, and +xx +is a vector of covariates that impact survival time. +λ\lambda +and +ν\nu +can also vary by covariates.
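The generation formula above can be sketched directly in base R (parameter values are illustrative; in simstudy this step is handled internally by genSurv):

```r
# Generate Weibull survival times T = (-log(U) * lambda / exp(beta * x))^nu
# for a single binary covariate x.
set.seed(2)
n      <- 1e5
lambda <- 0.02     # scale
nu     <- 1        # shape; nu = 1 reduces to an exponential distribution
beta   <- log(2)   # log hazard ratio for x
x      <- rbinom(n, 1, 0.5)

U <- runif(n)
T <- (-log(U) * lambda / exp(beta * x))^nu

mean(T[x == 0])  # close to lambda^nu * gamma(nu + 1) = 0.02
mean(T[x == 1])  # doubling the hazard halves the mean: close to 0.01
```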

Generating standard survival data with censoring @@ -707,35 +715,45 @@

Generating standard su #jckzsswiti .gt_indent_5 { text-indent: 25px; } + +#jckzsswiti .katex-display { + display: inline-flex !important; + margin-bottom: 0.75em !important; +} + +#jckzsswiti div.Reactable > div.rt-table > div.rt-thead > div.rt-tr.rt-tr-group-header > div.rt-th-group:after { + height: 0px !important; +} - - + - - + - + - + +1HR = Hazard Ratio, CI = Confidence Interval +
Characteristic -log(HR)1 +Characteristic +log(HR)1 -95% CI1 + +95% CI1 p-valuep-value
x1 1.51.2, 1.81.2, 1.8
x2 -0.89-1.1, -0.64-1.1, -0.64
-1 HR = Hazard Ratio, CI = Confidence Interval

@@ -833,16 +851,28 @@

Introducing non-proportional hazar survival/time-to-event outcomes that includes covariates that affect the hazard rate at various time points, the ratio of hazards comparing different levels of a covariate is constant across all time points. For -example, if we have a single binary covariate \(x\), the hazard \(\lambda(t)\) at time \(t\) is

-

\[\lambda(t|x) = \lambda_0(t) e ^ {\beta -x}\] where \(\lambda_0(t)\) is a -baseline hazard when \(x=0\). The ratio -of the hazards for \(x=1\) compared to -\(x=0\) is

-

\[\frac{\lambda_0(t) e ^ -{\beta}}{\lambda_0(t)} = e ^ \beta,\]

-

so the log of the hazard ratio is a constant \(\beta\), and the hazard ratio is always -\(e^\beta\).

+example, if we have a single binary covariate +xx, +the hazard +λ(t)\lambda(t) +at time +tt +is

+

λ(t|x)=λ0(t)eβx\lambda(t|x) = \lambda_0(t) e ^ {\beta x} +where +λ0(t)\lambda_0(t) +is a baseline hazard when +x=0x=0. +The ratio of the hazards for +x=1x=1 +compared to +x=0x=0 +is

+

λ0(t)eβλ0(t)=eβ,\frac{\lambda_0(t) e ^ {\beta}}{\lambda_0(t)} = e ^ \beta,

+

so the log of the hazard ratio is a constant +β\beta, +and the hazard ratio is always +eβe^\beta.

However, we may not always want to make the assumption that the hazard ratio is constant over all time periods. To facilitate this, it is possible to specify two different data definitions for the same @@ -1308,35 +1338,47 @@

Introducing non-proportional hazar #eiwwkhukjn .gt_indent_5 { text-indent: 25px; } + +#eiwwkhukjn .katex-display { + display: inline-flex !important; + margin-bottom: 0.75em !important; +} + +#eiwwkhukjn div.Reactable > div.rt-table > div.rt-thead > div.rt-tr.rt-tr-group-header > div.rt-th-group:after { + height: 0px !important; +} - - + - - + - + +1HR = Hazard Ratio, CI = Confidence Interval +
Characteristic -log(HR)1 +Characteristic +log(HR)1 -95% CI1 + +95% CI1 p-valuep-value
x -0.69-0.88, -0.50-0.88, -0.50
-1 HR = Hazard Ratio, CI = Confidence Interval

We can test the assumption of proportional hazards using weighted -residuals. If the \(\text{p-value} < -0.05\), then we would conclude that the assumption of -proportional hazards is not warranted. In this case \(p = 0.22\), so the model is apparently -reasonable:

+residuals. If the +p-value<0.05\text{p-value} < 0.05, +then we would conclude that the assumption of proportional hazards is +not warranted. In this case +p=0.22p = 0.22, +so the model is apparently reasonable:

 cox.zph(coxfit)
##        chisq df    p
@@ -1344,9 +1386,11 @@ 

Introducing non-proportional hazar ## GLOBAL 2.61 1 0.11


Non-constant/non-proportional hazard ratio

-

In this next case, the risk of death when \(x=1\) is lower at all time points compared -to when \(x=0\), but the relative risk -(or hazard ratio) changes at 150 days:

+

In this next case, the risk of death when +x=1x=1 +is lower at all time points compared to when +x=0x=0, +but the relative risk (or hazard ratio) changes at 150 days:

 def <- defData(varname = "x", formula = 0.4, dist = "binary")
 
@@ -1359,10 +1403,12 @@ 

Introducing non-proportional hazar dd <- genSurv(dd, defS, digits = 2, timeName = "time", censorName = "censor") fit <- survfit(Surv(time, event) ~ x, data = dd)

-

The survival curve for the sample with \(x=1\) has a slightly different shape under -this data generation process compared to the previous example under a -constant hazard ratio assumption; there is more separation early on -(prior to day 150), and then the gap is closed at a quicker rate.

+

The survival curve for the sample with +x=1x=1 +has a slightly different shape under this data generation process +compared to the previous example under a constant hazard ratio +assumption; there is more separation early on (prior to day 150), and +then the gap is closed at a quicker rate.

If we ignore the possibility that there might be a different relationship over time, the Cox proportional hazards model gives an @@ -1808,33 +1854,44 @@

Introducing non-proportional hazar #mwzpgwcgvd .gt_indent_5 { text-indent: 25px; } + +#mwzpgwcgvd .katex-display { + display: inline-flex !important; + margin-bottom: 0.75em !important; +} + +#mwzpgwcgvd div.Reactable > div.rt-table > div.rt-thead > div.rt-tr.rt-tr-group-header > div.rt-th-group:after { + height: 0px !important; +} - - + - - + - + +1HR = Hazard Ratio, CI = Confidence Interval +
Characteristic -log(HR)1 +Characteristic +log(HR)1 -95% CI1 + +95% CI1 p-valuep-value
x -0.72-0.91, -0.52-0.91, -0.52
-1 HR = Hazard Ratio, CI = Confidence Interval

However, further inspection of the proportionality assumption should -make us question the appropriateness of the model. Since \(p<0.05\), we would be wise to see if we -can improve on the model.

+make us question the appropriateness of the model. Since +p<0.05p<0.05, +we would be wise to see if we can improve on the model.

 cox.zph(coxfit)
##        chisq df      p
@@ -2289,41 +2346,51 @@ 

Introducing non-proportional hazar #eqdtogdyct .gt_indent_5 { text-indent: 25px; } + +#eqdtogdyct .katex-display { + display: inline-flex !important; + margin-bottom: 0.75em !important; +} + +#eqdtogdyct div.Reactable > div.rt-table > div.rt-thead > div.rt-tr.rt-tr-group-header > div.rt-th-group:after { + height: 0px !important; +} - - + - - + - + - + - + +1HR = Hazard Ratio, CI = Confidence Interval +
Characteristic -log(HR)1 +Characteristic +log(HR)1 -95% CI1 + +95% CI1 p-valuep-value
x * strata(tgroup)



    x * tgroup=1 -1.3-1.7, -0.94-1.7, -0.94
    x * tgroup=2 -0.40-0.65, -0.16-0.65, -0.16 0.001
-1 HR = Hazard Ratio, CI = Confidence Interval
@@ -2387,16 +2454,16 @@

Generating parameters f

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/articles/treat_and_exposure.html b/articles/treat_and_exposure.html index f800a33f..bdc9849e 100644 --- a/articles/treat_and_exposure.html +++ b/articles/treat_and_exposure.html @@ -12,14 +12,13 @@ - - +
@@ -48,7 +47,7 @@
- +
@@ -117,7 +116,7 @@

Treatment and Exposure

- Source: vignettes/treat_and_exposure.Rmd + Source: vignettes/treat_and_exposure.Rmd
@@ -265,18 +264,40 @@

Stepped-wedge design\[Y_{ict} = \beta_0 + b_c + \beta_1 * t + -\beta_2*X_{ct} + e_{ict}\]

-

where \(Y_{ict}\) is the outcome for -individual \(i\) in cluster \(c\) in time period \(t\), \(b_c\) is a cluster-specific effect, \(X_{ct}\) is the intervention indicator that -has a value 1 during periods where the cluster is under the -intervention, and \(e_{ict}\) is the -individual-level effect. Both \(b_c\) -and \(e_{ict}\) are normally -distributed with mean 0 and variances \(\sigma^2_{b}\) and \(\sigma^2_{e}\), respectively. \(\beta_1\) is the time trend, and \(\beta_2\) is the intervention effect.

+

Yict=β0+bc+β1*t+β2*Xct+eictY_{ict} = \beta_0 + b_c + \beta_1 * t + \beta_2*X_{ct} + e_{ict}

+

where +YictY_{ict} +is the outcome for individual +ii +in cluster +cc +in time period +tt, +bcb_c +is a cluster-specific effect, +XctX_{ct} +is the intervention indicator that has a value 1 during periods where +the cluster is under the intervention, and +eicte_{ict} +is the individual-level effect. Both +bcb_c +and +eicte_{ict} +are normally distributed with mean 0 and variances +σb2\sigma^2_{b} +and +σe2\sigma^2_{e}, +respectively. +β1\beta_1 +is the time trend, and +β2\beta_2 +is the intervention effect.

We need to define the cluster-level variables (i.e. the cluster effect and the cluster size) as well as the individual specific outcome. -In this case each cluster will have 15 individuals per period, and \(\sigma^2_b = 0.20\). In addition, \(\sigma^2_e = 1.75\).

+In this case each cluster will have 15 individuals per period, and +σb2=0.20\sigma^2_b = 0.20. +In addition, +σe2=1.75\sigma^2_e = 1.75.

 library(simstudy)
 library(ggplot2)
@@ -351,9 +372,7 @@ 

Stepped-wedge design - -

+

@@ -366,16 +385,16 @@

Stepped-wedge design

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/authors.html b/authors.html index 833e1302..d438c5cf 100644 --- a/authors.html +++ b/authors.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -93,7 +93,7 @@

Authors and Citation

- +
-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/index.html b/index.html index f3d63a6b..f16d56cb 100644 --- a/index.html +++ b/index.html @@ -19,7 +19,7 @@ - +
@@ -48,7 +48,7 @@
- +
@@ -121,7 +121,7 @@

The simstudy package is a collection of functions that allow users to generate simulated data sets in order to explore modeling techniques or better understand data generating processes. The user defines the distributions of individual variables, specifies relationships between covariates and outcomes, and generates data based on these specifications. The final data sets can represent randomized control trials, repeated measure designs, cluster randomized trials, or naturally observed data processes. Other complexities that can be added include survival data, correlated data, factorial study designs, step wedge designs, and missing data processes.

Simulation using simstudy has two fundamental steps. The user (1) defines the data elements of a data set and (2) generates the data based on these definitions. Additional functionality exists to simulate observed or randomized treatment assignment/exposures, to create longitudinal/panel data, to create multi-level/hierarchical data, to create datasets with correlated variables based on a specified covariance structure, to merge datasets, to create data sets with missing data, and to create non-linear relationships with underlying spline curves.

-

The overarching philosophy of simstudy is to create data generating processes that mimic the typical models used to fit those types of data. So, the parameterization of some of the data generating processes may not follow the standard parameterizations for the specific distributions. For example, in simstudy gamma-distributed data are generated based on the specification of a mean μ (or log(μ)) and a dispersion \(d\), rather than shape α and rate β parameters that more typically characterize the gamma distribution. When we estimate the parameters, we are modeling μ (or some function of μ), so we should explicitly recover the simstudy parameters used to generate the model, thus illuminating the relationship between the underlying data generating processes and the models. For more details on the package, use cases, examples, and function reference see the documentation page.

+

The overarching philosophy of simstudy is to create data generating processes that mimic the typical models used to fit those types of data. So, the parameterization of some of the data generating processes may not follow the standard parameterizations for the specific distributions. For example, in simstudy gamma-distributed data are generated based on the specification of a mean μ (or log(μ)) and a dispersion dd, rather than shape α and rate β parameters that more typically characterize the gamma distribution. When we estimate the parameters, we are modeling μ (or some function of μ), so we should explicitly recover the simstudy parameters used to generate the model, thus illuminating the relationship between the underlying data generating processes and the models. For more details on the package, use cases, examples, and function reference see the documentation page.

Installation

@@ -231,16 +231,16 @@

Developers

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/news/index.html b/news/index.html index 074c82b9..3485415a 100644 --- a/news/index.html +++ b/news/index.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
- + +
+

New features

+
  • Added the ability to generate data from an empirical distribution by using new functions genDataDensity and addDataDensity.
  • +
  • The binary and binomial distributions can now accommodate a “log” link.
  • +
+
+

Minor fix

+
  • +addCorGen no longer requires all clusters to have the same size when using the rho and corstr arguments to define the correlation.
  • +
  • Fixed an issue that prevented functions defined outside the global namespace from being referenced in defData.
  • +
+
+
+

New features

-
  • added the option to specify a customized distribution in defData and defDataAdd by specifying dist = "custom". *addPeriods now includes a new argument periodVec that allows users to designate specific measurement time periods using vector.
  • +
  • Added the option to specify a customized distribution in defData and defDataAdd by specifying dist = "custom". addPeriods now includes a new argument periodVec that allows users to designate specific measurement time periods using a vector.

Minor fix

@@ -109,7 +123,7 @@

New features

@@ -131,7 +145,7 @@

New features

Major fix

-
  • Improved the random effect variance generation for function iccRE under the Poisson distribution. The current approach is based on the 2013 paper by Nakagawa & Schielzeth titled “A general and simple method for obtaining \(R^2\) from generalized linear mixed-effects models”
  • +
    • Improved the random effect variance generation for function iccRE under the Poisson distribution. The current approach is based on the 2013 paper by Nakagawa & Schielzeth titled “A general and simple method for obtaining R2R^2 from generalized linear mixed-effects models.”

Minor fix

@@ -151,16 +165,14 @@

Major fixes

Minor fixes

-
  • Fixed bug in genSpline -
  • +
    • Fixed bug in genSpline.

Minor fixes

-
  • Fixed bug in trtAssign -
  • +
    • Fixed bug in trtAssign.
@@ -178,7 +190,7 @@

New featuressimstudy 0.4.02022-01-20

New features

-
  • genOrdCat now supports non-proportional odds
  • +
    • genOrdCat now supports non-proportional odds.
    • Added functions defRepeat and defRepeatAdd to facilitate the definition of multiple variables that share identical data definitions.
@@ -370,15 +382,15 @@
- - + + diff --git a/pkgdown.yml b/pkgdown.yml index a96cc1aa..158480e0 100644 --- a/pkgdown.yml +++ b/pkgdown.yml @@ -1,6 +1,6 @@ pandoc: 3.1.11 -pkgdown: 2.0.9.9000 -pkgdown_sha: 41fae74bcfe4bcf642736fe2f9930e7ddf7acc41 +pkgdown: 2.1.0.9000 +pkgdown_sha: 6f01c9267a1cee263216cec38eea10017d751dd8 articles: clustered: clustered.html corelationmat: corelationmat.html @@ -15,5 +15,4 @@ articles: spline: spline.html survival: survival.html treat_and_exposure: treat_and_exposure.html -last_built: 2024-05-14T19:24Z - +last_built: 2024-07-29T16:55Z diff --git a/reference/addColumns.html b/reference/addColumns.html index 259837d2..bf86cfe9 100644 --- a/reference/addColumns.html +++ b/reference/addColumns.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,12 +104,14 @@

Add columns to existing data set

Arguments

-
dtDefs
-

name of definitions for added columns

+ + +
dtDefs
+

Name of definitions for added columns

dtOld
-

name of data table that is to be updated

+

Name of data table that is to be updated

envir
@@ -119,9 +121,7 @@

Arguments

Value

- - -

an updated data.table that contains the added simulated data

+

an updated data.table that contains the added simulated data

@@ -172,15 +172,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addCompRisk.html b/reference/addCompRisk.html index 9e4c89a5..c187a153 100644 --- a/reference/addCompRisk.html +++ b/reference/addCompRisk.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Generating single competing risk survival variable

Arguments

-
dtName
+ + +
dtName

Name of complete data set to be updated

@@ -124,7 +126,7 @@

Arguments

timeName

A string to indicate the name of the combined competing risk -time-to-event outcome that reflects the minimum observed value of all +time-to-event outcome that reflects the minimum observed value of all time-to-event outcomes.

@@ -136,7 +138,7 @@

Arguments

eventName

The name of the new numeric/integer column representing the competing event outcomes. If censorName is specified, the integer value for -that event will be 0. Defaults to "event", but will be ignored +that event will be 0. Defaults to "event", but will be ignored if timeName is NULL.

@@ -157,9 +159,7 @@

Arguments

Value

- - -

An updated data table

+

An updated data table

@@ -205,15 +205,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addCondition.html b/reference/addCondition.html index bc69f9f6..5805e42f 100644 --- a/reference/addCondition.html +++ b/reference/addCondition.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Add a single column to existing data set based on a condition

Arguments

-
condDefs
+ + +
condDefs

Name of definitions for added column

@@ -123,9 +125,7 @@

Arguments

Value

- - -

An updated data.table that contains the added simulated data

+

An updated data.table that contains the added simulated data

@@ -179,15 +179,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addCorData.html b/reference/addCorData.html index cd891a81..56876884 100644 --- a/reference/addCorData.html +++ b/reference/addCorData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Add correlated data to existing data.table

Arguments

-
dtOld
+ + +
dtOld

Data table that is the new columns will be appended to.

@@ -159,9 +161,7 @@

Arguments

Value

- - -

The original data table with the additional correlated columns

+

The original data table with the additional correlated columns

@@ -243,15 +243,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addCorFlex.html b/reference/addCorFlex.html index 6e79aa6f..742a382f 100644 --- a/reference/addCorFlex.html +++ b/reference/addCorFlex.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -112,7 +112,9 @@

Create multivariate (correlated) data - for general distributions

Arguments

-
dt
+ + +
dt

Data table that will be updated.

@@ -151,9 +153,7 @@

Arguments

Value

- - -

data.table with added column(s) of correlated data

+

data.table with added column(s) of correlated data

@@ -272,15 +272,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addCorGen.html b/reference/addCorGen.html index 66ef3a5a..794aa86d 100644 --- a/reference/addCorGen.html +++ b/reference/addCorGen.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -117,10 +117,12 @@

Create multivariate (correlated) data - for general distributions

Arguments

-
dtOld
+ + +
dtOld

The data set that will be augmented. If the data set includes a single record per id, the new data table will be created as a "wide" data set. -If the original data set includes multiple records per id, the new data set will +If the original data set includes multiple records per id, the new data set will be in "long" format.

@@ -187,9 +189,38 @@

Arguments

Value

- - -

Original data.table with added column(s) of correlated data

+

Original data.table with added column(s) of correlated data

+
+
+

Details

+

The original data table can come in one of two formats: a single row +per idvar (where data are ungrouped) or multiple rows per idvar (in which +case the data are grouped or clustered). The structure of the arguments +depends on the format of the data.

+

In the case of ungrouped data, there are two ways to specify the number of +correlated variables and the covariance matrix. In approach (1), +nvars needs to be specified along with rho and corstr. +In approach (2), corMatrix may be specified by identifying a single square +n x n covariance matrix. The number of new variables generated for each +record will be n. If nvars, rho, +corstr, and corMatrix are all specified, the data will be +generated based on the information provided in the covariance matrix alone. +In both (1) and (2), the data will be returned in a wide format.

+

In the case of grouped data, where there are G groups, there are also two +ways to proceed. In both cases, +the number of new variables to be generated may vary by group, and will be determined by the +number of records in each group, \(n_i, i \in \{1,...,G\}\) (i.e., the number of records that share the same +value of idvar). nvars is not used in grouped data. +In approach (1), the arguments rho and corstr may both be specified +to determine the structure of the covariance +matrix. In approach (2), the argument corMatrix may be specified. +corMatrix can be a single matrix with dimensions \(n \ \text{x} \ n\) if +\(n_i = n\) for all i. However, if the sample sizes of each group vary +(i.e., \(n_i \ne n_j\) for some groups i and j), corMatrix must be a list +of covariance matrices with a length G; each +covariance matrix in the list will have dimensions +\(n_i \ \text{x} \ n_i, \ i \in \{1,...,G\}\). In the case of grouped data, the +new data will be returned in long format (i.e., one new column only).

References

@@ -199,154 +230,104 @@

References

Examples

-
# Wide example
+    
# Ungrouped data
+
+cMat <- genCorMat(nvars = 4, rho = .2, corstr = "ar1", nclusters = 1)
 
-def <- defData(varname = "xbase", formula = 5, variance = .4, dist = "gamma", id = "cid")
-def <- defData(def, varname = "lambda", formula = ".5 + .1*xbase", dist = "nonrandom", link = "log")
+def <-
+  defData(varname = "xbase", formula = 5, variance = .4, dist = "gamma") |>
+  defData(varname = "lambda", formula = ".5 + .1*xbase", dist = "nonrandom", link = "log") |>
+  defData(varname = "n", formula = 3, dist = "noZeroPoisson")
 
-dt <- genData(100, def)
+dd <- genData(101, def, id = "cid")
+
+## Specify with nvars, rho, and corstr
 
 addCorGen(
-  dtOld = dt, idvar = "cid", nvars = 3, rho = .7, corstr = "cs",
+  dtOld = dd, idvar = "cid", nvars = 3, rho = .7, corstr = "cs",
   dist = "poisson", param1 = "lambda"
 )
 #> Key: <cid>
-#>        cid      xbase   lambda    V1    V2    V3
-#>      <int>      <num>    <num> <num> <num> <num>
-#>   1:     1  2.4642568 2.109447     2     1     1
-#>   2:     2  8.1936629 3.741050     1     2     2
-#>   3:     3  1.6160127 1.937893     2     1     2
-#>   4:     4  3.9722516 2.452788     2     4     3
-#>   5:     5 10.4397726 4.683179     8     6     5
-#>   6:     6  7.3508223 3.438661     6     5     4
-#>   7:     7  1.4810482 1.911914     3     4     3
-#>   8:     8  3.0380425 2.234024     1     2     3
-#>   9:     9  4.4484593 2.572417     3     5     3
-#>  10:    10 13.6883239 6.480725     4     3     5
-#>  11:    11  3.6067540 2.364757     1     1     1
-#>  12:    12  4.1298216 2.491742     3     3     4
-#>  13:    13 13.7080894 6.493547     8     7     6
-#>  14:    14  4.4396789 2.570159     1     1     2
-#>  15:    15  2.8759511 2.198104     3     4     3
-#>  16:    16  4.4737239 2.578924     3     2     4
-#>  17:    17  5.7267031 2.923175     7     6     7
-#>  18:    18  6.7779059 3.247192     3     2     0
-#>  19:    19  3.4677342 2.332110     1     2     1
-#>  20:    20  6.3256278 3.103600     4     4     1
-#>  21:    21 10.7763197 4.843473     6     5     7
-#>  22:    22  4.3980414 2.559480     3     3     2
-#>  23:    23 12.6469935 5.839816     9     7    10
-#>  24:    24  9.8634985 4.420929     3     3     3
-#>  25:    25  3.2791749 2.288548     0     0     1
-#>  26:    26 10.4382622 4.682472     5     3     5
-#>  27:    27  2.9614845 2.216986     1     2     2
-#>  28:    28  3.1274393 2.254085     0     0     0
-#>  29:    29  2.6008183 2.138451     2     2     2
-#>  30:    30  4.2985398 2.534139     0     1     2
-#>  31:    31  1.0480024 1.830886     2     1     2
-#>  32:    32  7.9487994 3.650558     2     1     1
-#>  33:    33  2.4069701 2.097397     2     2     1
-#>  34:    34  3.6267223 2.369484     5     4     6
-#>  35:    35  0.7099970 1.770036     3     1     3
-#>  36:    36  9.4241906 4.230918     2     2     3
-#>  37:    37  6.4557512 3.144249     4     3     3
-#>  38:    38  5.4615065 2.846672     4     3     4
-#>  39:    39  5.8155608 2.949265     4     3     4
-#>  40:    40  0.8072994 1.787343     6     7     7
-#>  41:    41  0.9346980 1.810259     1     1     2
-#>  42:    42  3.7364106 2.395618     5     4     3
-#>  43:    43  6.0328437 3.014049     2     2     2
-#>  44:    44  5.9584302 2.991704     5     7     6
-#>  45:    45 13.2317900 6.191510     5     6     8
-#>  46:    46  2.9934842 2.224091     4     2     2
-#>  47:    47  7.1365501 3.365764     4     5     6
-#>  48:    48  5.8253896 2.952165     2     3     3
-#>  49:    49  7.6082385 3.528327     3     8     7
-#>  50:    50  5.2812917 2.795830     2     3     3
-#>  51:    51  8.1163437 3.712236     3     3     4
-#>  52:    52  3.4026639 2.316984     2     1     0
-#>  53:    53  2.7215568 2.164427     2     2     2
-#>  54:    54  1.9450426 2.002716     2     4     4
-#>  55:    55  7.8624883 3.619185     3     4     4
-#>  56:    56  5.4523563 2.844069     2     1     3
-#>  57:    57  3.5858066 2.359809     3     4     4
-#>  58:    58  2.2780337 2.070527     4     1     3
-#>  59:    59  9.6094557 4.310033     2     4     1
-#>  60:    60  6.7936799 3.252318     2     4     3
-#>  61:    61  5.7704458 2.935990     3     4     1
-#>  62:    62  2.2547629 2.065715     3     3     3
-#>  63:    63  3.0224435 2.230541     1     1     2
-#>  64:    64  1.5492497 1.924998     3     3     4
-#>  65:    65  3.7727745 2.404345     6     2     5
-#>  66:    66  2.5417435 2.125856     0     1     1
-#>  67:    67  2.3152277 2.078243     2     4     2
-#>  68:    68  5.0499848 2.731903     3     5     3
-#>  69:    69  3.6461650 2.374095     2     2     2
-#>  70:    70  2.2636780 2.067557     3     3     2
-#>  71:    71  2.5688535 2.131627     0     1     0
-#>  72:    72  7.8104817 3.600412     8     4     5
-#>  73:    73  6.6842518 3.216923     2     2     2
-#>  74:    74  1.0662589 1.834232     4     5     3
-#>  75:    75  7.2695788 3.410838     2     0     1
-#>  76:    76  2.6207268 2.142713     1     0     1
-#>  77:    77  4.7302033 2.645924     2     3     2
-#>  78:    78  5.5652999 2.876373     5     3     2
-#>  79:    79  2.7205706 2.164214     4     4     3
-#>  80:    80  2.4543955 2.107368     3     3     2
-#>  81:    81  9.9679682 4.467356     5     6     3
-#>  82:    82  2.0058175 2.014925     1     0     1
-#>  83:    83  5.9661283 2.994008     2     4     4
-#>  84:    84  4.7669855 2.655674     5     4     6
-#>  85:    85  8.0750765 3.696948     5     6     5
-#>  86:    86  2.6939099 2.158451     0     0     0
-#>  87:    87  3.5888887 2.360536     1     2     2
-#>  88:    88  9.2578619 4.161128     1     2     3
-#>  89:    89  4.2890330 2.531731     0     0     0
-#>  90:    90  4.3271590 2.541402     3     3     4
-#>  91:    91  8.4518267 3.838888     5     4     7
-#>  92:    92  7.8935424 3.630441     4     4     4
-#>  93:    93  7.6636848 3.547945     9     8     3
-#>  94:    94 10.6989622 4.806149     6     7     8
-#>  95:    95  3.1599697 2.261429     3     5     5
-#>  96:    96  1.6077017 1.936283     1     1     4
-#>  97:    97  1.1727405 1.853868     4     3     3
-#>  98:    98  3.2515144 2.282226     5     7     4
-#>  99:    99  7.4732315 3.481012     4     3     5
-#> 100:   100  2.1563203 2.045479     3     3     3
-#>        cid      xbase   lambda    V1    V2    V3
-
-# Long example
+#>        cid     xbase   lambda     n    V1    V2    V3
+#>      <int>     <num>    <num> <num> <num> <num> <num>
+#>   1:     1  2.464257 2.109447     1     4     6     5
+#>   2:     2  8.193663 3.741050     1     2     2     4
+#>   3:     3  1.616013 1.937893     3     2     3     2
+#>   4:     4  3.972252 2.452788     3     6     3     4
+#>   5:     5 10.439773 4.683179     2     2     4     2
+#>  ---                                                 
+#>  97:    97  1.172740 1.853868     5     2     2     2
+#>  98:    98  3.251514 2.282226     3     1     0     2
+#>  99:    99  7.473232 3.481012     3     1     3     3
+#> 100:   100  2.156320 2.045479     2     6     3     3
+#> 101:   101  3.476790 2.334223     4     0     1     0
 
-def <- defData(varname = "xbase", formula = 5, variance = .4, dist = "gamma", id = "cid")
+## Specify with covMatrix
 
-def2 <- defDataAdd(
-  varname = "p", formula = "-3+.2*period + .3*xbase",
-  dist = "nonrandom", link = "logit"
+addCorGen(
+  dtOld = dd, idvar = "cid", corMatrix = cMat,
+  dist = "poisson", param1 = "lambda"
 )
+#> Key: <cid>
+#>        cid     xbase   lambda     n    V1    V2    V3    V4
+#>      <int>     <num>    <num> <num> <num> <num> <num> <num>
+#>   1:     1  2.464257 2.109447     1     2     2     2     0
+#>   2:     2  8.193663 3.741050     1    12     3     4     2
+#>   3:     3  1.616013 1.937893     3     2     1     2     0
+#>   4:     4  3.972252 2.452788     3     0     0     1     5
+#>   5:     5 10.439773 4.683179     2     1     5     5     5
+#>  ---                                                       
+#>  97:    97  1.172740 1.853868     5     3     3     2     0
+#>  98:    98  3.251514 2.282226     3     3     3     4     4
+#>  99:    99  7.473232 3.481012     3     2     1     5     3
+#> 100:   100  2.156320 2.045479     2     3     1     3     3
+#> 101:   101  3.476790 2.334223     4     2     2     1     2
+
+# Grouped data
 
-dt <- genData(100, def)
+cMats <- genCorMat(nvars = dd$n, rho = .5, corstr = "cs", nclusters = nrow(dd))
 
-dtLong <- addPeriods(dt, idvars = "cid", nPeriods = 3)
-dtLong <- addColumns(def2, dtLong)
+dx <- genCluster(dd, "cid", "n", "id")
+
+## Specify with nvars, rho, and corstr
 
 addCorGen(
-  dtOld = dtLong, idvar = "cid", nvars = NULL, rho = .7, corstr = "cs",
-  dist = "binary", param1 = "p"
+  dtOld = dx, idvar = "cid", rho = .8, corstr = "ar1", dist = "poisson", param1 = "xbase"
 )
 #> Key: <cid>
-#>        cid period    xbase timeID          p     X
-#>      <int>  <int>    <num>  <int>      <num> <num>
-#>   1:     1      0 6.346117      1 0.25045917     0
-#>   2:     1      1 6.346117      2 0.28983926     0
-#>   3:     1      2 6.346117      3 0.33266308     0
-#>   4:     2      0 1.573306      4 0.07391787     0
-#>   5:     2      1 1.573306      5 0.08882974     0
-#>  ---                                              
-#> 296:    99      1 2.734045    296 0.12134161     0
-#> 297:    99      2 2.734045    297 0.14432952     0
-#> 298:   100      0 6.992554    298 0.28859165     1
-#> 299:   100      1 6.992554    299 0.33131714     0
-#> 300:   100      2 6.992554    300 0.37701585     0
+#>        cid    xbase   lambda     n    id     X
+#>      <int>    <num>    <num> <num> <int> <num>
+#>   1:     1 2.464257 2.109447     1     1     2
+#>   2:     2 8.193663 3.741050     1     2     6
+#>   3:     3 1.616013 1.937893     3     3     1
+#>   4:     3 1.616013 1.937893     3     4     1
+#>   5:     3 1.616013 1.937893     3     5     1
+#>  ---                                          
+#> 299:   100 2.156320 2.045479     2   299     3
+#> 300:   101 3.476790 2.334223     4   300     7
+#> 301:   101 3.476790 2.334223     4   301     7
+#> 302:   101 3.476790 2.334223     4   302     8
+#> 303:   101 3.476790 2.334223     4   303     6
+
+## Specify with covMatrix
+
+addCorGen(
+ dtOld = dx, idvar = "cid", corMatrix = cMats, dist = "poisson", param1 = "xbase"
+)
+#> Key: <cid>
+#>        cid    xbase   lambda     n    id     X
+#>      <int>    <num>    <num> <num> <int> <num>
+#>   1:     1 2.464257 2.109447     1     1     5
+#>   2:     2 8.193663 3.741050     1     2    11
+#>   3:     3 1.616013 1.937893     3     3     0
+#>   4:     3 1.616013 1.937893     3     4     0
+#>   5:     3 1.616013 1.937893     3     5     2
+#>  ---                                          
+#> 299:   100 2.156320 2.045479     2   299     4
+#> 300:   101 3.476790 2.334223     4   300     6
+#> 301:   101 3.476790 2.334223     4   301     3
+#> 302:   101 3.476790 2.334223     4   302     3
+#> 303:   101 3.476790 2.334223     4   303     3
 
 
@@ -362,15 +343,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addDataDensity.html b/reference/addDataDensity.html new file mode 100644 index 00000000..aed33473 --- /dev/null +++ b/reference/addDataDensity.html @@ -0,0 +1,166 @@ + +Add data from a density defined by a vector of integers — addDataDensity • simstudy + + +
+
+ + + +
+
+ + +
+

Data are generated from a density defined by a vector of integers.

+
+ +
+
addDataDensity(dtOld, dataDist, varname, uselimits = FALSE)
+
+ +
+

Arguments

+ + +
dtOld
+

Name of data table that is to be updated.

+ + +
dataDist
+

Vector that defines the desired density.

+ + +
varname
+

Name of the variable to be added.

+ + +
uselimits
+

Indicator to use minimum and maximum of input data vector as +limits for sampling. Defaults to FALSE, in which case a smoothed density that +extends beyond the limits is used.

+ +
+
+

Value

+

A data table with the generated data.

+
+ +
+

Examples

+
def <- defData(varname = "x1", formula = 5, dist = "poisson")
+
+data_dist <- c(1, 2, 2, 3, 4, 4, 4, 5, 6, 6, 7, 7, 7, 8, 9, 10, 10)
+
+dd <- genData(500, def)
+dd <- addDataDensity(dd, data_dist, varname = "x2")
+dd <- addDataDensity(dd, data_dist, varname = "x3", uselimits = TRUE)
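A quick check of the `uselimits` behavior described above (a sketch, not generated output; it assumes the `dd` and `data_dist` objects created in this example):

```r
# x3 was generated with uselimits = TRUE, so its sampled values should stay
# within the observed range of data_dist; x2 (the default) may not.
range(dd$x3)  # should lie within range(data_dist), i.e., within [1, 10]
range(dd$x2)  # the smoothed density can extend beyond those limits
```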
+
+
+
+ +
+ + +
+ + + + + + + + diff --git a/reference/addMarkov.html b/reference/addMarkov.html index b13ac64a..7ab05889 100644 --- a/reference/addMarkov.html +++ b/reference/addMarkov.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -117,7 +117,9 @@

Add Markov chain

Arguments

-
dd
+ + +
dd

data.table with a unique identifier

@@ -173,9 +175,7 @@

Arguments

Value

- - -

A data table with n rows if in wide format, or n by chainLen rows +

A data table with n rows if in wide format, or n by chainLen rows if in long format.

@@ -218,15 +218,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addMultiFac.html b/reference/addMultiFac.html index 5cea5e42..0a118435 100644 --- a/reference/addMultiFac.html +++ b/reference/addMultiFac.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Add multi-factorial data

Arguments

-
dtOld
+ + +
dtOld

data.table that is to be modified

@@ -132,9 +134,7 @@

Arguments

Value

- - -

A data.table that contains the added simulated data. Each new column contains +

A data.table that contains the added simulated data. Each new column contains an integer.

@@ -146,19 +146,19 @@

Examples

DT <- addMultiFac(DT, nFactors = 3, levels = c(2, 3, 3), colNames = c("A", "B", "C")) DT #> Key: <id> -#> id x A B C -#> <int> <num> <int> <int> <int> -#> 1: 1 0.4456463 1 3 2 -#> 2: 2 -1.3738458 1 3 2 -#> 3: 3 0.7336057 2 2 3 -#> 4: 4 0.5728606 1 2 1 -#> 5: 5 -0.1538775 1 1 3 -#> --- -#> 356: 356 0.6066500 1 1 1 -#> 357: 357 0.5467441 2 1 3 -#> 358: 358 0.6698458 2 3 1 -#> 359: 359 -0.4435307 1 1 1 -#> 360: 360 -0.4333330 2 1 3 +#> id x A B C +#> <int> <num> <int> <int> <int> +#> 1: 1 1.33037679 1 3 3 +#> 2: 2 -0.53549641 2 2 2 +#> 3: 3 -0.91155077 2 3 1 +#> 4: 4 -0.21853793 2 1 2 +#> 5: 5 0.93554171 2 3 2 +#> --- +#> 356: 356 -0.75060380 1 2 1 +#> 357: 357 -0.62399716 1 1 1 +#> 358: 358 -0.87007269 1 2 2 +#> 359: 359 -0.01525919 2 3 3 +#> 360: 360 -1.14390972 1 1 3 DT[, .N, keyby = .(A, B, C)] #> Key: <A, B, C> #> A B C N @@ -188,14 +188,14 @@

Examples

#> Key: <Var1, Var2, Var3> #> Var1 Var2 Var3 N #> <num> <num> <num> <int> -#> 1: 0 0 0 37 -#> 2: 0 0 1 38 -#> 3: 0 1 0 38 -#> 4: 0 1 1 37 -#> 5: 1 0 0 37 +#> 1: 0 0 0 38 +#> 2: 0 0 1 37 +#> 3: 0 1 0 37 +#> 4: 0 1 1 38 +#> 5: 1 0 0 38 #> 6: 1 0 1 38 #> 7: 1 1 0 37 -#> 8: 1 1 1 38 +#> 8: 1 1 1 37

@@ -210,15 +210,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addPeriods.html b/reference/addPeriods.html index f56cca47..0d9e9db2 100644 --- a/reference/addPeriods.html +++ b/reference/addPeriods.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Create longitudinal/panel data

Arguments

-
dtName
+ + +
dtName

Name of existing data table

@@ -148,29 +150,27 @@

Arguments

Value

- - -

An updated data.table that that has multiple rows +

An updated data.table that has multiple rows per observation in dtName

Details

-

It is possible to generate longitudinal data with varying -numbers of measurement periods as well as varying time intervals between -each measurement period. This is done by defining specific variables in the -data set that define the number of observations per subject and the average -interval time between each observation. nCount defines the number of -measurements for an individual; mInterval specifies the average time between -intervals for a subject; and vInterval specifies the variance of those +

It is possible to generate longitudinal data with varying +numbers of measurement periods as well as varying time intervals between +each measurement period. This is done by defining specific variables in the +data set that define the number of observations per subject and the average +interval time between each observation. nCount defines the number of +measurements for an individual; mInterval specifies the average time between +intervals for a subject; and vInterval specifies the variance of those interval times. If mInterval is not defined, no intervals are used. If vInterval is set to 0 or is not defined, the interval for -a subject is determined entirely by the mean interval. If vInterval is -greater than 0, time intervals are generated using a gamma distribution -with specified mean and dispersion. If either nPeriods or timevars -is specified, that will override any nCount, mInterval, and +a subject is determined entirely by the mean interval. If vInterval is +greater than 0, time intervals are generated using a gamma distribution +with specified mean and dispersion. If either nPeriods or timevars +is specified, that will override any nCount, mInterval, and vInterval data.

periodVec is used to specify measurement periods that are different -the default counting variables. If periodVec is not specified, -the periods default to 0, 1, ... n-1, with n periods. If +the default counting variables. If periodVec is not specified, +the periods default to 0, 1, ... n-1, with n periods. If periodVec is specified as c(x_1, x_2, ... x_n), then x_1, x_2, ... x_n represent the measurement periods.

@@ -187,11 +187,11 @@

Examples

#> Key: <id> #> id T Y0 Y1 Y2 #> <int> <int> <num> <num> <num> -#> 1: 1 0 9.811207 16.45975 19.21951 -#> 2: 2 1 10.312756 19.55707 24.12884 -#> 3: 3 1 10.074717 19.78654 26.27507 -#> 4: 4 0 9.395655 13.66550 18.48758 -#> 5: 5 1 10.045247 20.56940 23.49643 +#> 1: 1 0 9.038907 13.77221 18.97153 +#> 2: 2 0 9.402125 13.70966 19.96253 +#> 3: 3 1 7.979656 18.63066 24.97564 +#> 4: 4 1 9.500015 20.69839 24.42393 +#> 5: 5 1 12.331997 22.71072 28.14005 dtTime <- addPeriods(dtTrial, nPeriods = 3, idvars = "id", @@ -201,21 +201,21 @@

Examples

#> Key: <timeID> #> id period T Y timeID #> <int> <int> <int> <num> <int> -#> 1: 1 0 0 9.811207 1 -#> 2: 1 1 0 16.459747 2 -#> 3: 1 2 0 19.219507 3 -#> 4: 2 0 1 10.312756 4 -#> 5: 2 1 1 19.557073 5 -#> 6: 2 2 1 24.128844 6 -#> 7: 3 0 1 10.074717 7 -#> 8: 3 1 1 19.786538 8 -#> 9: 3 2 1 26.275070 9 -#> 10: 4 0 0 9.395655 10 -#> 11: 4 1 0 13.665505 11 -#> 12: 4 2 0 18.487584 12 -#> 13: 5 0 1 10.045247 13 -#> 14: 5 1 1 20.569395 14 -#> 15: 5 2 1 23.496429 15 +#> 1: 1 0 0 9.038907 1 +#> 2: 1 1 0 13.772206 2 +#> 3: 1 2 0 18.971532 3 +#> 4: 2 0 0 9.402125 4 +#> 5: 2 1 0 13.709659 5 +#> 6: 2 2 0 19.962525 6 +#> 7: 3 0 1 7.979656 7 +#> 8: 3 1 1 18.630661 8 +#> 9: 3 2 1 24.975640 9 +#> 10: 4 0 1 9.500015 10 +#> 11: 4 1 1 20.698390 11 +#> 12: 4 2 1 24.423926 12 +#> 13: 5 0 1 12.331997 13 +#> 14: 5 1 1 22.710725 14 +#> 15: 5 2 1 28.140052 15 # Varying # of periods and intervals - need to have variables # called nCount and mInterval @@ -230,21 +230,31 @@

Examples

#> Key: <id> #> id xbase nCount mInterval vInterval #> <int> <num> <num> <num> <num> -#> 1: 8 19.79319 2 29.97654 0.07 -#> 2: 121 21.54566 5 30.11969 0.07 +#> 1: 8 19.52645 11 29.03541 0.07 +#> 2: 121 18.53334 6 31.91971 0.07 dtPeriod <- addPeriods(dt) dtPeriod[id %in% c(8, 121)] # View individuals 8 and 121 only #> Key: <timeID> -#> id period xbase time timeID -#> <int> <int> <num> <num> <int> -#> 1: 8 0 19.79319 0 38 -#> 2: 8 1 19.79319 33 39 -#> 3: 121 0 21.54566 0 758 -#> 4: 121 1 21.54566 24 759 -#> 5: 121 2 21.54566 56 760 -#> 6: 121 3 21.54566 89 761 -#> 7: 121 4 21.54566 127 762 +#> id period xbase time timeID +#> <int> <int> <num> <num> <int> +#> 1: 8 0 19.52645 0 43 +#> 2: 8 1 19.52645 25 44 +#> 3: 8 2 19.52645 58 45 +#> 4: 8 3 19.52645 78 46 +#> 5: 8 4 19.52645 114 47 +#> 6: 8 5 19.52645 138 48 +#> 7: 8 6 19.52645 172 49 +#> 8: 8 7 19.52645 208 50 +#> 9: 8 8 19.52645 235 51 +#> 10: 8 9 19.52645 272 52 +#> 11: 8 10 19.52645 294 53 +#> 12: 121 0 18.53334 0 744 +#> 13: 121 1 18.53334 24 745 +#> 14: 121 2 18.53334 65 746 +#> 15: 121 3 18.53334 94 747 +#> 16: 121 4 18.53334 129 748 +#> 17: 121 5 18.53334 175 749
@@ -259,15 +269,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/addSynthetic.html b/reference/addSynthetic.html index 2925b2d7..6ce3517b 100644 --- a/reference/addSynthetic.html +++ b/reference/addSynthetic.html @@ -1,10 +1,10 @@ -Add synthetic data to existing data set — addSynthetic • simstudyAdd synthetic data to existing data set — addSynthetic • simstudy - +
@@ -32,7 +32,7 @@
- +
-

This function generates synthetic data from an existing +

This function generates synthetic data from an existing data.table and adds it to another (simstudy) data.table.

@@ -106,7 +106,9 @@

Add synthetic data to existing data set

Arguments

-
dtOld
+ + +
dtOld

data.table that is to be modified

@@ -126,9 +128,7 @@

Arguments

Value

- - -

A data.table that contains the added synthetic data.

+

A data.table that contains the added synthetic data.

Details

@@ -171,15 +171,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/betaGetShapes.html b/reference/betaGetShapes.html index 89317f3e..ad6b0be8 100644 --- a/reference/betaGetShapes.html +++ b/reference/betaGetShapes.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Convert beta mean and precision parameters to two shape parameters

Arguments

-
mean
+ + +
mean

The mean of a beta distribution

@@ -114,9 +116,7 @@

Arguments

Value

- - -

A list that includes the shape parameters of the beta distribution

+

A list that includes the shape parameters of the beta distribution

Details

@@ -160,15 +160,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/blockDecayMat.html b/reference/blockDecayMat.html index df615926..33009c54 100644 --- a/reference/blockDecayMat.html +++ b/reference/blockDecayMat.html @@ -1,13 +1,13 @@ -Create a block correlation matrix — blockDecayMat • simstudyCreate a block correlation matrix — blockDecayMat • simstudy - +
@@ -35,7 +35,7 @@
- +
-

The function genBlockMat() generates correlation matrices that -can accommodate clustered observations over time where the within-cluster -between-individual correlation in the same time period can be different from the +

The function blockDecayMat() generates correlation matrices that +can accommodate clustered observations over time where the within-cluster +between-individual correlation in the same time period can be different from the within-cluster between-individual correlation across time periods. The matrix generated here can be used in function addCorGen().

@@ -112,7 +112,9 @@

Create a block correlation matrix

Arguments

-
ninds
+ + +
ninds

The number of units (individuals) in each cluster in each period.

@@ -139,28 +141,24 @@

Arguments

Value

- - -

A single correlation matrix of size nvars x nvars, or a list of matrices of potentially +

A single correlation matrix of size nvars x nvars, or a list of matrices of potentially different sizes with length indicated by nclusters.

- -

A single correlation matrix or a list of matrices of potentially different sizes with length indicated by nclusters.

Details

-

Two general decay correlation structures are currently supported: a *cross-sectional* -exchangeable structure and a *closed cohort* exchangeable structure. In the *cross-sectional* -case, individuals or units in each time period are distinct. In the *closed cohort* structure, -individuals or units are repeated in each time period. The desired structure is specified +

Two general decay correlation structures are currently supported: a *cross-sectional* +exchangeable structure and a *closed cohort* exchangeable structure. In the *cross-sectional* +case, individuals or units in each time period are distinct. In the *closed cohort* structure, +individuals or units are repeated in each time period. The desired structure is specified using pattern, which defaults to "xsection" if not specified.

-

This function can generate correlation matrices of different sizes, depending on the -combination of arguments provided. A single matrix will be generated when +

This function can generate correlation matrices of different sizes, depending on the +combination of arguments provided. A single matrix will be generated when nclusters == 1 (the default), and a list of matrices will be generated when nclusters > 1.

If nclusters > 1, the length of ninds will depend on if sample sizes will vary by cluster -and/or period. There are three scenarios, and function evaluates the length of ninds to +and/or period. There are three scenarios, and function evaluates the length of ninds to determine which approach to take:

  • if the sample size is the same for all clusters in all periods, ninds will be a single value (i.e., length = 1).

  • @@ -170,14 +168,14 @@

    Details

    value for each cluster-period combination (i.e., length = nclusters x nperiods). This option is only valid when pattern = "xsection".

In addition, rho_w and r can be specified as a single value (in which case they are consistent -across all clusters) or as a vector of length nclusters, in which case either one or +across all clusters) or as a vector of length nclusters, in which case either one or both of these parameters can vary by cluster.

See vignettes for more details.
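A minimal sketch of the cluster-varying case described above (values are illustrative; it assumes the argument names listed on this page and that the package is loaded):

```r
library(simstudy)

# One decay-structure matrix per cluster; with a single value of ninds per
# cluster, each matrix should be square with dimension ninds * nperiods.
RM <- blockDecayMat(
  ninds = c(2, 3, 4), nperiods = 2, rho_w = 0.6, r = 0.8,
  pattern = "xsection", nclusters = 3
)

lapply(RM, dim)  # expect 4x4, 6x6, and 8x8 matrices
```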

References

-

Li et al. Mixed-effects models for the design and analysis of stepped wedge -cluster randomized trials: An overview. Statistical Methods in Medical Research. +

Li et al. Mixed-effects models for the design and analysis of stepped wedge +cluster randomized trials: An overview. Statistical Methods in Medical Research. 2021;30(2):612-639. doi:10.1177/0962280220932962

@@ -296,15 +294,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/blockExchangeMat.html b/reference/blockExchangeMat.html index eb0cebf7..f1ba68b8 100644 --- a/reference/blockExchangeMat.html +++ b/reference/blockExchangeMat.html @@ -1,13 +1,13 @@ -Create a block correlation matrix with exchangeable structure — blockExchangeMat • simstudyCreate a block correlation matrix with exchangeable structure — blockExchangeMat • simstudy - +
@@ -35,7 +35,7 @@
- +
-

The function blockExchangeMat generates exchangeable correlation matrices that -can accommodate clustered observations over time where the within-cluster -between-individual correlation in the same time period can be different from the +

The function blockExchangeMat generates exchangeable correlation matrices that +can accommodate clustered observations over time where the within-cluster +between-individual correlation in the same time period can be different from the within-cluster between-individual correlation across time periods. The matrix generated here can be used in function addCorGen.

@@ -120,7 +120,9 @@

Create a block correlation matrix with exchangeable structure

Arguments

-
ninds
+ + +
ninds

The number of units (individuals) in each cluster in each period.

@@ -151,23 +153,21 @@

Arguments

Value

- - -

A single correlation matrix or a list of matrices of potentially +

A single correlation matrix or a list of matrices of potentially different sizes with length indicated by nclusters.

Details

Two general exchangeable correlation structures are currently supported: a *cross-sectional* exchangeable structure and a *closed cohort* exchangeable structure. In the *cross-sectional* case, individuals or units in each time period are distinct. -In the *closed cohort* structure, individuals or units are repeated in each time period. -The desired structure is specified using pattern, which defaults to "xsection" if not specified. rho_a is the within-individual/unit +In the *closed cohort* structure, individuals or units are repeated in each time period. +The desired structure is specified using pattern, which defaults to "xsection" if not specified. rho_a is the within-individual/unit exchangeable correlation over time, and can only be used when xsection = FALSE.

-

This function can generate correlation matrices of different sizes, depending on the combination of arguments provided. +

This function can generate correlation matrices of different sizes, depending on the combination of arguments provided. A single matrix will be generated when nclusters == 1 (the default), and a list of matrices will be generated when nclusters > 1.

If nclusters > 1, the length of ninds will depend on if sample sizes will vary by cluster -and/or period. There are three scenarios, and function evaluates the length of ninds to determine which approach +and/or period. There are three scenarios, and function evaluates the length of ninds to determine which approach to take:

  • if the sample size is the same for all clusters in all periods, ninds will be a single value (i.e., length = 1).

  • @@ -182,7 +182,7 @@

    Details

References

-

Li et al. Mixed-effects models for the design and analysis of stepped wedge cluster randomized trials: An overview. +

Li et al. Mixed-effects models for the design and analysis of stepped wedge cluster randomized trials: An overview. Statistical Methods in Medical Research. 2021;30(2):612-639. doi:10.1177/0962280220932962

@@ -325,15 +325,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/catProbs.html b/reference/catProbs.html index 28b5dfbe..0bf00eee 100644 --- a/reference/catProbs.html +++ b/reference/catProbs.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -115,15 +115,15 @@

Generate Categorical Formula

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defCondition.html b/reference/defCondition.html index 1c463616..b0f7df00 100644 --- a/reference/defCondition.html +++ b/reference/defCondition.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -113,7 +113,9 @@

Add single row to definitions table of conditions that will be used to add d

Arguments

-
dtDefs
+ + +
dtDefs

Name of definition table to be modified. Null if this is a new definition.

@@ -139,9 +141,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

See also

@@ -212,15 +212,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defData.html b/reference/defData.html index 534f5720..98e28c32 100644 --- a/reference/defData.html +++ b/reference/defData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -112,7 +112,9 @@

Add single row to definitions table

Arguments

-
dtDefs
+ + +
dtDefs

Definition data.table to be modified

@@ -142,9 +144,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

Details

@@ -206,15 +206,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defDataAdd.html b/reference/defDataAdd.html index 0992396c..1741b9f4 100644 --- a/reference/defDataAdd.html +++ b/reference/defDataAdd.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -113,7 +113,9 @@

Add single row to definitions table that will be used to add data to an exis

Arguments

-
dtDefs
+ + +
dtDefs

Name of definition table to be modified. Null if this is a new definition.

@@ -139,9 +141,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

See also

@@ -196,15 +196,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defMiss.html b/reference/defMiss.html index 4dd0de3c..c0600ea0 100644 --- a/reference/defMiss.html +++ b/reference/defMiss.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -111,7 +111,9 @@

Definitions for missing data

Arguments

-
dtDefs
+ + +
dtDefs

Definition data.table to be modified

@@ -141,9 +143,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

See also

@@ -223,15 +223,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defRead.html b/reference/defRead.html index f602272a..61cf5a48 100644 --- a/reference/defRead.html +++ b/reference/defRead.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Read external csv data set definitions

Arguments

-
filen
+ + +
filen

String file name, including full path. Must be a csv file.

@@ -114,9 +116,7 @@

Arguments

Value

- - -

A data.table with data set definitions

+

A data.table with data set definitions

See also

@@ -176,15 +176,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defReadAdd.html b/reference/defReadAdd.html index 389ef1ce..b3ad8136 100644 --- a/reference/defReadAdd.html +++ b/reference/defReadAdd.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,15 +104,15 @@

Read external csv data set definitions for adding columns

Arguments

-
filen
+ + +
filen

String file name, including full path. Must be a csv file.

Value

- - -

A data.table with data set definitions

+

A data.table with data set definitions

See also

@@ -185,15 +185,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defReadCond.html b/reference/defReadCond.html index 24ac0c97..210074f3 100644 --- a/reference/defReadCond.html +++ b/reference/defReadCond.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,15 +104,15 @@

Read external csv data set definitions for adding columns

Arguments

-
filen
+ + +
filen

String file name, including full path. Must be a csv file.

Value

- - -

A data.table with data set definitions

+

A data.table with data set definitions

See also

@@ -205,15 +205,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defRepeat.html b/reference/defRepeat.html index c4327195..05ea33d3 100644 --- a/reference/defRepeat.html +++ b/reference/defRepeat.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Add multiple (similar) rows to definitions table

Arguments

-
dtDefs
+ + +
dtDefs

Definition data.table to be modified

@@ -147,9 +149,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

Details

@@ -201,15 +201,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defRepeatAdd.html b/reference/defRepeatAdd.html index ca16249f..84c73617 100644 --- a/reference/defRepeatAdd.html +++ b/reference/defRepeatAdd.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -115,7 +115,9 @@

Add multiple (similar) rows to definitions table that will be used to add da

Arguments

-
dtDefs
+ + +
dtDefs

Definition data.table to be modified

@@ -149,9 +151,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

Details

@@ -203,15 +203,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/defSurv.html b/reference/defSurv.html index b70add8c..3dc5cb40 100644 --- a/reference/defSurv.html +++ b/reference/defSurv.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -111,7 +111,9 @@

Add single row to survival definitions

Arguments

-
dtDefs
+ + +
dtDefs

Definition data.table to be modified

@@ -140,9 +142,7 @@

Arguments

Value

- - -

A data.table named dtName that is an updated data definitions table

+

A data.table named dtName that is an updated data definitions table

@@ -201,15 +201,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/delColumns.html b/reference/delColumns.html index 94b64b1c..a008ccb8 100644 --- a/reference/delColumns.html +++ b/reference/delColumns.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Delete columns from existing data set

Arguments

-
dtOld
+ + +
dtOld

Name of data table that is to be updated.

@@ -114,9 +116,7 @@

Arguments

Value

- - -

An updated data.table without vars.

+

An updated data.table without vars.

@@ -173,15 +173,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/distributions.html b/reference/distributions.html index 4990556e..5f38d15c 100644 --- a/reference/distributions.html +++ b/reference/distributions.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -103,7 +103,9 @@

Distributions for Data Definitions

Arguments

-
formula
+ + +
formula

Desired mean as a Number or an R expression for mean as a String. Variables defined via defData() and variables within the parent environment (prefixed with ..) can be used within the formula. @@ -124,7 +126,7 @@

Details

For details about the statistical distributions please see stats::distributions, any non-statistical distributions will be explained below. Required variables and expected pattern for each -distribution can be found in this table:

nameformulaformatvariancelink
betameanString or Numberdispersion valueidentity or logit
binaryprobability for 1String or NumberNAidentity or logit
binomialprobability of successString or Numbernumber of trialsidentity or logit
categoricalprobabilitiesp_1;p_2;..;p_ncategory labels: a;b;c , 50;130;20identity or logit
exponentialmean (lambda)String or NumberNAidentity or log
gammameanString or Numberdispersion valueidentity or log
mixtureformulax_1 |p_1 + x_2|p_2 ... x_n| p_nNANA
negBinomialmeanString or Numberdispersion valueidentity or log
nonrandomformulaString or NumberNANA
normalmeanString or NumbervarianceNA
noZeroPoissonmeanString or NumberNAidentity or log
poissonmeanString or NumberNAidentity or log
trtAssignratior_1;r_2;..;r_nstratificationidentity or nonbalanced
uniformrangefrom;toNANA
uniformIntrangefrom;toNANA
+distribution can be found in this table:

nameformulaformatvariancelink
betameanString or Numberdispersion valueidentity or logit
binaryprobability for 1String or NumberNAidentity, log, or logit
binomialprobability of successString or Numbernumber of trialsidentity, log, or logit
categoricalprobabilitiesp_1;p_2;..;p_ncategory labels: a;b;c , 50;130;20identity or logit
customname of functionStringargumentsidentity
exponentialmean (lambda)String or NumberNAidentity or log
gammameanString or Numberdispersion valueidentity or log
mixtureformulax_1 |p_1 + x_2|p_2 ... x_n| p_nNANA
negBinomialmeanString or Numberdispersion valueidentity or log
nonrandomformulaString or NumberNANA
normalmeanString or NumbervarianceNA
noZeroPoissonmeanString or NumberNAidentity or log
poissonmeanString or NumberNAidentity or log
trtAssignratior_1;r_2;..;r_nstratificationidentity or nonbalanced
uniformrangefrom;toNANA
uniformIntrangefrom;toNANA

Mixture

The mixture distribution makes it possible to mix to @@ -165,15 +167,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/gammaGetShapeRate.html b/reference/gammaGetShapeRate.html index 85c142dc..05f4b34c 100644 --- a/reference/gammaGetShapeRate.html +++ b/reference/gammaGetShapeRate.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Convert gamma mean and dispersion parameters to shape and rate parameters

Arguments

-
mean
+ + +
mean

The mean of a gamma distribution

@@ -114,9 +116,7 @@

Arguments

Value

- - -

A list that includes the shape and rate parameters of the gamma distribution

+

A list that includes the shape and rate parameters of the gamma distribution

Details

@@ -155,15 +155,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCatFormula.html b/reference/genCatFormula.html index 063413d0..e8626c46 100644 --- a/reference/genCatFormula.html +++ b/reference/genCatFormula.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -106,7 +106,9 @@

Generate Categorical Formula

Arguments

-
...
+ + +
...

one or more numeric values to be concatenated, delimited by ";".

@@ -117,9 +119,7 @@

Arguments

Value

- - -

string with multinomial probabilities.

+

string with multinomial probabilities.

Details

@@ -162,15 +162,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCluster.html b/reference/genCluster.html index 42f60bad..734949a4 100644 --- a/reference/genCluster.html +++ b/reference/genCluster.html @@ -5,7 +5,7 @@ - +
@@ -33,7 +33,7 @@
- +
@@ -108,7 +108,9 @@

Simulate clustered data

Arguments

-
dtClust
+ + +
dtClust

Name of existing data set that contains the level "2" data

@@ -134,9 +136,7 @@

Arguments

Value

- - -

A simulated data table with level "1" data

+

A simulated data table with level "1" data

@@ -209,15 +209,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCorData.html b/reference/genCorData.html index 875c9bb5..b86a7c5b 100644 --- a/reference/genCorData.html +++ b/reference/genCorData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Create correlated data

Arguments

-
n
+ + +
n

Number of observations

@@ -156,9 +158,7 @@

Arguments

Value

- - -

A data.table with n rows and the k + 1 columns, where k is the number of +

A data.table with n rows and the k + 1 columns, where k is the number of means in the vector mu.

@@ -238,15 +238,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCorFlex.html b/reference/genCorFlex.html index 4f5f6dbf..d7921868 100644 --- a/reference/genCorFlex.html +++ b/reference/genCorFlex.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Create multivariate (correlated) data - for general distributions

Arguments

-
n
+ + +
n

Number of observations

@@ -139,14 +141,12 @@

Arguments

Value

- - -

data.table with added column(s) of correlated data

+

data.table with added column(s) of correlated data

Examples

-
if (FALSE) {
+    
if (FALSE) { # \dontrun{
 def <- defData(varname = "xNorm", formula = 0, variance = 4, dist = "normal")
 def <- defData(def, varname = "xGamma1", formula = 15, variance = 2, dist = "gamma")
 def <- defData(def, varname = "xBin", formula = 0.5, dist = "binary")
@@ -163,7 +163,7 @@ 

Examples

cor(dt[, -"id"], method = "kendall") var(dt[, -"id"]) apply(dt[, -"id"], 2, mean) -} +} # }
@@ -178,15 +178,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCorGen.html b/reference/genCorGen.html index 37251489..97bf0b5b 100644 --- a/reference/genCorGen.html +++ b/reference/genCorGen.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -117,7 +117,9 @@

Create multivariate (correlated) data - for general distributions

Arguments

-
n
+ + +
n

Number of observations

@@ -184,9 +186,7 @@

Arguments

Value

- - -

data.table with added column(s) of correlated data

+

data.table with added column(s) of correlated data

References

@@ -580,15 +580,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCorMat.html b/reference/genCorMat.html index b7edd900..67ad3940 100644 --- a/reference/genCorMat.html +++ b/reference/genCorMat.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Create a correlation matrix

Arguments

-
nvars
+ + +
nvars

number of rows and columns (i.e. number of variables) for correlation matrix. It can be a scalar or vector (see details).

@@ -119,8 +121,8 @@

Arguments

corstr
-

Correlation structure. Options include "cs" for a compound symmetry structure, "ar1" -for an autoregressive structure of order 1, "arx" for an autoregressive structure +

Correlation structure. Options include "cs" for a compound symmetry structure, "ar1" +for an autoregressive structure of order 1, "arx" for an autoregressive structure that has a general decay pattern, and "structured" that imposes a prescribed pattern between observation based on distance (see details).

@@ -131,14 +133,12 @@

Arguments

Value

- - -

A single correlation matrix of size nvars x nvars, or a list of matrices of potentially +

A single correlation matrix of size nvars x nvars, or a list of matrices of potentially different sizes with length indicated by nclusters.

Details

-

This function can generate correlation matrices randomly or deterministically, +

This function can generate correlation matrices randomly or deterministically, depending on the combination of arguments provided. A single matrix will be generated when nclusters == 1 (the default), and a list of matrices will be generated when nclusters > 1.

@@ -146,18 +146,18 @@

Details

`cors` is specified with length `choose(nvars, 2)` then `corstr` should not be specified as "structured". In this case the `cors` vector should be interpreted as the lower triangle of the correlation matrix, and is specified by reading down the columns. For example, if CM is the correlation matrix and -nvars = 3, then CM[2,1] = CM[1,2] = cors[1], CM[3,1] = CM[1,3] = cors[2], +nvars = 3, then CM[2,1] = CM[1,2] = cors[1], CM[3,1] = CM[1,3] = cors[2], and CM[3,2] = CM[2,3] = cors[3].

If the vector cors and rho are not specified, random correlation matrices are generated -based on the specified corstr. If the structure is "arx", then a random vector of +based on the specified corstr. If the structure is "arx", then a random vector of length nvars - 1 is randomly generated and sorted in descending order; the correlation matrix will be generated base on this set of structured correlations. If the structure is not specified -as "arx" then a random positive definite of dimensions nvars x nvars with no structural +as "arx" then a random positive definite of dimensions nvars x nvars with no structural assumptions is generated.

-

If cors is not specified but rho is specified, then a matrix with either a "cs" or "ar1" +

If cors is not specified but rho is specified, then a matrix with either a "cs" or "ar1" structure is generated.

If nclusters > 1, nvars can be of length 1 or nclusters. If it is of length 1, -each cluster will have correlation matrices with the same dimension. Likewise, if nclusters > 1, +each cluster will have correlation matrices with the same dimension. Likewise, if nclusters > 1, rho can be of length 1 or nclusters. If length of rho is 1, each cluster will have correlation matrices with the same autocorrelation.

@@ -249,15 +249,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genCorOrdCat.html b/reference/genCorOrdCat.html index 050f92a5..a2aadd5b 100644 --- a/reference/genCorOrdCat.html +++ b/reference/genCorOrdCat.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -124,15 +124,15 @@

Generate correlated ordinal categorical data

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genData.html b/reference/genData.html index ac8ed06b..93acad30 100644 --- a/reference/genData.html +++ b/reference/genData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Calling function to simulate data

Arguments

-
n
+ + +
n

the number of observations required in the data set.

@@ -127,9 +129,7 @@

Arguments

Value

- - -

A data.table that contains the simulated data.

+

A data.table that contains the simulated data.

@@ -215,15 +215,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genDataDensity.html b/reference/genDataDensity.html new file mode 100644 index 00000000..9477452b --- /dev/null +++ b/reference/genDataDensity.html @@ -0,0 +1,196 @@ + +Generate data from a density defined by a vector of integers — genDataDensity • simstudy + + +
+
+ + + +
+
+ + +
+

Data are generated from a density defined by a vector of integers

+
+ +
+
genDataDensity(n, dataDist, varname, uselimits = FALSE, id = "id")
+
+ +
+

Arguments

+ + +
n
+

Number of samples to draw from the density.

+ + +
dataDist
+

Vector that defines the desired density

+ + +
varname
+

Name of the variable

+ + +
uselimits
+

Indicator to use minimum and maximum of input data vector as +limits for sampling. Defaults to FALSE, in which case a smoothed density that +extends beyond the limits is used.

+ + +
id
+

A string specifying the field that serves as the record id. The +default field is "id".

+ +
+
+

Value

+

A data table with the generated data

+
+ +
+

Examples

+
data_dist <- data_dist <- c(1, 2, 2, 3, 4, 4, 4, 5, 6, 6, 7, 7, 7, 8, 9, 10, 10)
+
+genDataDensity(500, data_dist, varname = "x1", id = "id")
+#> Key: <id>
+#>         id         x1
+#>      <int>      <num>
+#>   1:     1 -0.5694137
+#>   2:     2  4.0542217
+#>   3:     3  7.6214794
+#>   4:     4  5.7418883
+#>   5:     5  8.5160695
+#>  ---                 
+#> 496:   496  9.3678336
+#> 497:   497  4.6014444
+#> 498:   498  3.8036096
+#> 499:   499  5.1058410
+#> 500:   500  9.3376967
+genDataDensity(500, data_dist, varname = "x1", uselimits = TRUE, id = "id")
+#> Key: <id>
+#>         id       x1
+#>      <int>    <num>
+#>   1:     1 4.790279
+#>   2:     2 3.189019
+#>   3:     3 7.262826
+#>   4:     4 6.379838
+#>   5:     5 5.273627
+#>  ---               
+#> 496:   496 7.036904
+#> 497:   497 5.153915
+#> 498:   498 4.036004
+#> 499:   499 8.700270
+#> 500:   500 9.034203
+
+
+
+ +
+ + +
+ + + + + + + + diff --git a/reference/genDummy.html b/reference/genDummy.html index 8e28ebab..3a468f85 100644 --- a/reference/genDummy.html +++ b/reference/genDummy.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Create dummy variables from a factor or integer variable

Arguments

-
dtName
+ + +
dtName

Data table with column

@@ -137,17 +139,17 @@

Examples

#> Key: <id> #> id cat x #> <int> <int> <num> -#> 1: 1 3 4.479586 -#> 2: 2 1 5.234123 -#> 3: 3 2 4.100791 -#> 4: 4 3 6.225520 -#> 5: 5 2 6.188477 +#> 1: 1 3 6.329660 +#> 2: 2 3 8.149867 +#> 3: 3 1 3.091388 +#> 4: 4 2 4.931580 +#> 5: 5 3 6.622817 #> --- -#> 196: 196 3 6.966713 -#> 197: 197 3 4.144871 -#> 198: 198 3 6.773981 -#> 199: 199 2 2.501137 -#> 200: 200 3 3.028021 +#> 196: 196 2 4.898831 +#> 197: 197 2 3.910139 +#> 198: 198 3 2.947997 +#> 199: 199 1 2.871879 +#> 200: 200 2 9.221231 dx <- genFactor(dx, "cat", labels = c("one", "two", "three"), replace = TRUE) dx <- genDummy(dx, varname = "fcat", sep = "_") @@ -156,17 +158,17 @@

Examples

#> Key: <id> #> id x fcat fcat_one fcat_two fcat_three #> <int> <num> <fctr> <int> <int> <int> -#> 1: 1 4.479586 three 0 0 1 -#> 2: 2 5.234123 one 1 0 0 -#> 3: 3 4.100791 two 0 1 0 -#> 4: 4 6.225520 three 0 0 1 -#> 5: 5 6.188477 two 0 1 0 +#> 1: 1 6.329660 three 0 0 1 +#> 2: 2 8.149867 three 0 0 1 +#> 3: 3 3.091388 one 1 0 0 +#> 4: 4 4.931580 two 0 1 0 +#> 5: 5 6.622817 three 0 0 1 #> --- -#> 196: 196 6.966713 three 0 0 1 -#> 197: 197 4.144871 three 0 0 1 -#> 198: 198 6.773981 three 0 0 1 -#> 199: 199 2.501137 two 0 1 0 -#> 200: 200 3.028021 three 0 0 1 +#> 196: 196 4.898831 two 0 1 0 +#> 197: 197 3.910139 two 0 1 0 +#> 198: 198 2.947997 three 0 0 1 +#> 199: 199 2.871879 one 1 0 0 +#> 200: 200 9.221231 two 0 1 0 # Second example: @@ -177,19 +179,19 @@

Examples

#> Key: <id> #> id arm arm.1 arm.2 arm.3 #> <int> <int> <int> <int> <int> -#> 1: 1 2 0 1 0 +#> 1: 1 3 0 0 1 #> 2: 2 3 0 0 1 #> 3: 3 1 1 0 0 -#> 4: 4 1 1 0 0 -#> 5: 5 2 0 1 0 -#> 6: 6 1 1 0 0 -#> 7: 7 2 0 1 0 +#> 4: 4 2 0 1 0 +#> 5: 5 1 1 0 0 +#> 6: 6 3 0 0 1 +#> 7: 7 1 1 0 0 #> 8: 8 2 0 1 0 #> 9: 9 2 0 1 0 -#> 10: 10 3 0 0 1 -#> 11: 11 3 0 0 1 +#> 10: 10 1 1 0 0 +#> 11: 11 2 0 1 0 #> 12: 12 3 0 0 1 -#> 13: 13 1 1 0 0 +#> 13: 13 2 0 1 0 #> 14: 14 1 1 0 0 #> 15: 15 3 0 0 1
@@ -206,15 +208,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genFactor.html b/reference/genFactor.html index 7d9b7a6c..9be9bf75 100644 --- a/reference/genFactor.html +++ b/reference/genFactor.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Create factor variable from an existing (non-double) variable

Arguments

-
dtName
+ + +
dtName

Data table with columns.

@@ -143,34 +145,34 @@

Examples

#> Key: <id> #> id cat x #> <int> <int> <num> -#> 1: 1 3 3.914439 -#> 2: 2 3 4.482196 -#> 3: 3 3 2.031990 -#> 4: 4 1 5.646096 -#> 5: 5 2 3.280469 +#> 1: 1 3 6.423834 +#> 2: 2 3 5.942421 +#> 3: 3 3 6.754430 +#> 4: 4 1 3.533844 +#> 5: 5 3 2.531178 #> --- -#> 196: 196 3 4.031803 -#> 197: 197 3 3.423400 -#> 198: 198 2 5.876663 -#> 199: 199 2 5.303516 -#> 200: 200 2 6.232819 +#> 196: 196 2 5.084109 +#> 197: 197 2 4.600363 +#> 198: 198 2 4.521599 +#> 199: 199 2 1.991064 +#> 200: 200 1 5.998426 dx <- genFactor(dx, "cat", labels = c("one", "two", "three")) dx #> Key: <id> #> id cat x fcat #> <int> <int> <num> <fctr> -#> 1: 1 3 3.914439 three -#> 2: 2 3 4.482196 three -#> 3: 3 3 2.031990 three -#> 4: 4 1 5.646096 one -#> 5: 5 2 3.280469 two +#> 1: 1 3 6.423834 three +#> 2: 2 3 5.942421 three +#> 3: 3 3 6.754430 three +#> 4: 4 1 3.533844 one +#> 5: 5 3 2.531178 three #> --- -#> 196: 196 3 4.031803 three -#> 197: 197 3 3.423400 three -#> 198: 198 2 5.876663 two -#> 199: 199 2 5.303516 two -#> 200: 200 2 6.232819 two +#> 196: 196 2 5.084109 two +#> 197: 197 2 4.600363 two +#> 198: 198 2 4.521599 two +#> 199: 199 2 1.991064 two +#> 200: 200 1 5.998426 one # Second example: @@ -181,16 +183,16 @@

Examples

#> Key: <id> #> id studyArm t_studyArm #> <int> <int> <fctr> -#> 1: 1 1 treatment -#> 2: 2 0 control +#> 1: 1 0 control +#> 2: 2 1 treatment #> 3: 3 1 treatment -#> 4: 4 0 control -#> 5: 5 0 control -#> 6: 6 1 treatment +#> 4: 4 1 treatment +#> 5: 5 1 treatment +#> 6: 6 0 control #> 7: 7 0 control #> 8: 8 0 control #> 9: 9 1 treatment -#> 10: 10 1 treatment +#> 10: 10 0 control
@@ -205,15 +207,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genFormula.html b/reference/genFormula.html index 9f760379..5d1fbef0 100644 --- a/reference/genFormula.html +++ b/reference/genFormula.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -106,9 +106,11 @@

Generate a linear formula

Arguments

-
coefs
+ + +
coefs

A vector that contains the values of the -coefficients. Coefficients can also be defined as character for use with +coefficients. Coefficients can also be defined as character for use with double dot notation. If length(coefs) == length(vars), then no intercept is assumed. Otherwise, an intercept is assumed.

@@ -120,9 +122,7 @@

Arguments

Value

- - -

A string that represents the desired formula

+

A string that represents the desired formula

@@ -187,15 +187,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genMarkov.html b/reference/genMarkov.html index 69556d19..2e93063b 100644 --- a/reference/genMarkov.html +++ b/reference/genMarkov.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -117,7 +117,9 @@

Generate Markov chain

Arguments

-
n
+ + +
n

number of individual chains to generate

@@ -166,16 +168,14 @@

Arguments

startProb
-

A string that contains the probability distribution of the +

A string that contains the probability distribution of the starting state, separated by a ";". Length of start probabilities must match the number of rows of the transition matrix.

Value

- - -

A data table with n rows if in wide format, or n by chainLen rows +

A data table with n rows if in wide format, or n by chainLen rows if in long format.

@@ -211,15 +211,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genMiss.html b/reference/genMiss.html index 945ee258..416f6715 100644 --- a/reference/genMiss.html +++ b/reference/genMiss.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -111,7 +111,9 @@

Generate missing data

Arguments

-
dtName
+ + +
dtName

Name of complete data set

@@ -138,9 +140,7 @@

Arguments

Value

- - -

Missing data matrix indexed by idvars (and period if relevant)

+

Missing data matrix indexed by idvars (and period if relevant)

See also

@@ -220,15 +220,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genMixFormula.html b/reference/genMixFormula.html index df0407d1..81a6ad04 100644 --- a/reference/genMixFormula.html +++ b/reference/genMixFormula.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -106,7 +106,9 @@

Generate Mixture Formula

Arguments

-
vars
+ + +
vars

Character vector/list of variable names.

@@ -123,9 +125,7 @@

Arguments

Value

- - -

The mixture formula as a string.

+

The mixture formula as a string.

@@ -152,15 +152,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genMultiFac.html b/reference/genMultiFac.html index 4a26b4d7..6d161201 100644 --- a/reference/genMultiFac.html +++ b/reference/genMultiFac.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -111,7 +111,9 @@

Generate multi-factorial data

Arguments

-
nFactors
+ + +
nFactors

Number of factors (columns) to generate.

@@ -143,9 +145,7 @@

Arguments

Value

- - -

A data.table that contains the added simulated data. Each column contains +

A data.table that contains the added simulated data. Each column contains an integer.

@@ -234,15 +234,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genNthEvent.html b/reference/genNthEvent.html index 2fd3c0cd..47823dff 100644 --- a/reference/genNthEvent.html +++ b/reference/genNthEvent.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -106,7 +106,9 @@

Generate event data using longitudinal data, and restrict output to time unt

Arguments

-
dtName
+ + +
dtName

name of existing data table

@@ -131,9 +133,7 @@

Arguments

Value

- - -

data.table that stops after "nEvents" are reached.

+

data.table that stops after "nEvents" are reached.

@@ -164,15 +164,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genObs.html b/reference/genObs.html index 71f36490..4d0d5061 100644 --- a/reference/genObs.html +++ b/reference/genObs.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Create an observed data set that includes missing data

Arguments

-
dtName
+ + +
dtName

Name of complete data set

@@ -118,9 +120,7 @@

Arguments

Value

- - -

A data table that represents observed data, including +

A data table that represents observed data, including missing data

@@ -201,15 +201,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genOrdCat.html b/reference/genOrdCat.html index 33b8f217..7138944e 100644 --- a/reference/genOrdCat.html +++ b/reference/genOrdCat.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -119,7 +119,9 @@

Generate ordinal categorical data

Arguments

-
dtName
+ + +
dtName

Name of complete data set

@@ -191,9 +193,7 @@

Arguments

Value

- - -

Original data.table with added categorical field.

+

Original data.table with added categorical field.

@@ -342,15 +342,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genSpline.html b/reference/genSpline.html index 15e6d747..7a83d8bd 100644 --- a/reference/genSpline.html +++ b/reference/genSpline.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -113,7 +113,9 @@

Generate spline curves

Arguments

-
dt
+ + +
dt

data.table that will be modified

@@ -155,9 +157,7 @@

Arguments

Value

- - -

A modified data.table with an added column named newvar.

+

A modified data.table with an added column named newvar.

@@ -209,15 +209,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genSurv.html b/reference/genSurv.html index 7aeace18..80e98425 100644 --- a/reference/genSurv.html +++ b/reference/genSurv.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -108,13 +108,16 @@

Generate survival data

eventName = "event", typeName = "type", keepEvents = FALSE, - idName = "id" + idName = "id", + envir = parent.frame() )

Arguments

-
dtName
+ + +
dtName

Name of data set

@@ -128,7 +131,7 @@

Arguments

timeName

A string to indicate the name of a combined competing risk -time-to-event outcome that reflects the minimum observed value of all +time-to-event outcome that reflects the minimum observed value of all time-to-event outcomes. Defaults to NULL, indicating that each time-to-event outcome will be included in dataset.

@@ -141,7 +144,7 @@

Arguments

eventName

The name of the new numeric/integer column representing the competing event outcomes. If censorName is specified, the integer value for -that event will be 0. Defaults to "event", but will be ignored +that event will be 0. Defaults to "event", but will be ignored if timeName is NULL.

@@ -159,12 +162,14 @@

Arguments

idName

Name of id field in existing data set.

+ +
envir
+

Optional environment, defaults to current calling environment.

+

Value

- - -

Original data table with survival time

+

Original data table with survival time

@@ -223,15 +228,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/genSynthetic.html b/reference/genSynthetic.html index 3cbeacdd..69fc0d0a 100644 --- a/reference/genSynthetic.html +++ b/reference/genSynthetic.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Generate synthetic data

Arguments

-
dtFrom
+ + +
dtFrom

Data table that contains the source data

@@ -125,9 +127,7 @@

Arguments

Value

- - -

A data table with the generated data

+

A data table with the generated data

@@ -162,15 +162,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/iccRE.html b/reference/iccRE.html index e10becbe..9e6b70dd 100644 --- a/reference/iccRE.html +++ b/reference/iccRE.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -106,7 +106,9 @@

Generate variance for random effects that produce desired intra-class coefficients

Arguments

-
ICC
+ + +
ICC

Vector of values between 0 and 1 that represent the target ICC levels

@@ -142,15 +144,13 @@

Arguments

Value

- - -

A vector of values that represents the variances of random effects +

A vector of values that represents the variances of random effects at the cluster level that correspond to the ICC vector.
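The variance returned here follows from the defining relation ICC = s2_b / (s2_b + s2_w), which can be solved in closed form for the between-cluster variance. A minimal illustrative sketch (assuming a normally distributed outcome with a known within-cluster variance; names are hypothetical and this is not the package's code):

```python
def re_variance_for_icc(iccs, var_within=1.0):
    """Solve ICC = s2_b / (s2_b + s2_w) for the between-cluster
    variance s2_b, given the within-cluster variance s2_w."""
    return [icc / (1.0 - icc) * var_within for icc in iccs]

# e.g. a target ICC of 0.5 with unit within-cluster variance
# requires a random-effect variance of 1.0
```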

References

-

Nakagawa, Shinichi, and Holger Schielzeth. "A general and simple -method for obtaining R2 from generalized linear mixed‐effects models." +

Nakagawa, Shinichi, and Holger Schielzeth. "A general and simple +method for obtaining R2 from generalized linear mixed‐effects models." Methods in ecology and evolution 4, no. 2 (2013): 133-142.

@@ -198,15 +198,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/index.html b/reference/index.html index 9e327412..06355c27 100644 --- a/reference/index.html +++ b/reference/index.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -156,6 +156,10 @@

Generate Data addCondition()

Add a single column to existing data set based on a condition

+ +

addDataDensity()

+ +

Add data from a density defined by a vector of integers

addMarkov()

@@ -172,6 +176,10 @@

Generate Data genData()

Calling function to simulate data

+ +

genDataDensity()

+ +

Generate data from a density defined by a vector of integers

genDummy()

@@ -405,15 +413,15 @@

Deprecated & Defunct -

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/logisticCoefs.html b/reference/logisticCoefs.html index 56eadf3b..9f98bfa3 100644 --- a/reference/logisticCoefs.html +++ b/reference/logisticCoefs.html @@ -1,13 +1,13 @@ -Determine intercept, treatment/exposure and covariate coefficients that can be used for binary data generation with a logit link and a set of covariates — logisticCoefs • simstudyDetermine intercept, treatment/exposure and covariate coefficients that can be used for binary data generation with a logit link and a set of covariates — logisticCoefs • simstudy - +
@@ -35,7 +35,7 @@
- +
-

This is an implementation of an iterative bisection procedure -that can be used to determine coefficient values for a target population -prevalence as well as a target risk ratio, risk difference, or AUC. These +

This is an implementation of an iterative bisection procedure +that can be used to determine coefficient values for a target population +prevalence as well as a target risk ratio, risk difference, or AUC. These coefficients can be used in a subsequent data generation process to simulate data with these desired characteristics.
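The bisection idea described above can be sketched generically: simulate a large covariate sample, then bisect on the intercept until the simulated prevalence matches the target. The following is a minimal, language-neutral Python sketch with hypothetical names (illustrative standard-normal covariates; not the simstudy implementation):

```python
import math
import random

def find_intercept(coefs, target_prev, n=100_000, tol=1e-4, seed=1):
    """Bisection search for an intercept b0 such that the average
    probability under a logit link matches target_prev."""
    rng = random.Random(seed)
    # one fixed covariate sample, reused across iterations for stability
    X = [[rng.gauss(0, 1) for _ in coefs] for _ in range(n)]

    def prevalence(b0):
        total = 0.0
        for row in X:
            eta = b0 + sum(b * x for b, x in zip(coefs, row))
            total += 1.0 / (1.0 + math.exp(-eta))
        return total / n

    lo, hi = -10.0, 10.0  # prevalence is monotone increasing in b0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prevalence(mid) < target_prev:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The same search, run against the target statistic (risk ratio, risk difference, or AUC) rather than prevalence, yields the remaining coefficients.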

@@ -122,18 +122,20 @@

Determine intercept, treatment/exposure and covariate coefficients that can be used for binary data generation with a logit link and a set of covariates

Arguments

-
defCovar
+ + +
defCovar

A definition table for the covariates in the underlying population. This table specifies the distribution of the covariates.

coefs
-

A vector of coefficients that reflect the relationship between +

A vector of coefficients that reflect the relationship between each of the covariates and the log-odds of the outcome.

popPrev
-

The target population prevalence of the outcome. +

The target population prevalence of the outcome. A value between 0 and 1.

@@ -148,7 +150,7 @@

Arguments

auc
-

The target AUC, which must be a value between 0.5 and 1.0 . +

The target AUC, which must be a value between 0.5 and 1.0. Defaults to NULL.

@@ -158,8 +160,8 @@

Arguments

sampleSize
-

The number of units to generate for the bisection algorithm. -The default is 1e+05. To get a reliable estimate, the value +

The number of units to generate for the bisection algorithm. +The default is 1e+05. To get a reliable estimate, the value should be no smaller than the default; larger values can be used, though computing time will increase.

@@ -171,9 +173,7 @@

Arguments

Value

- - -

A vector of parameters including the intercept and covariate +

A vector of parameters including the intercept and covariate coefficients for the logistic model data generating process.

@@ -185,15 +185,15 @@

Details

References

-

Austin, Peter C. "The iterative bisection procedure: a useful -tool for determining parameter values in data-generating processes in -Monte Carlo simulations." BMC Medical Research Methodology 23, +

Austin, Peter C. "The iterative bisection procedure: a useful +tool for determining parameter values in data-generating processes in +Monte Carlo simulations." BMC Medical Research Methodology 23, no. 1 (2023): 1-10.

Examples

-
if (FALSE) {
+    
if (FALSE) { # \dontrun{
 d1 <- defData(varname = "x1", formula = 0, variance = 1)
 d1 <- defData(d1, varname = "b1", formula = 0.5, dist = "binary")
 
@@ -203,7 +203,7 @@ 

Examples

logisticCoefs(d1, coefs, popPrev = 0.20, rr = 1.50, trtName = "rx") logisticCoefs(d1, coefs, popPrev = 0.20, rd = 0.30, trtName = "rx") logisticCoefs(d1, coefs, popPrev = 0.20, auc = 0.80) -} +} # }
@@ -218,15 +218,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/mergeData.html b/reference/mergeData.html index 702d4092..7f33a45f 100644 --- a/reference/mergeData.html +++ b/reference/mergeData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Merge two data tables

Arguments

-
dt1
+ + +
dt1

Name of first data.table

@@ -118,9 +120,7 @@

Arguments

Value

- - -

A new data table that merges dt2 with dt1

+

A new data table that merges dt2 with dt1

@@ -175,15 +175,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/negbinomGetSizeProb.html b/reference/negbinomGetSizeProb.html index ba8ec636..7b1b5189 100644 --- a/reference/negbinomGetSizeProb.html +++ b/reference/negbinomGetSizeProb.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Convert negative binomial mean and dispersion parameters to size and prob parameters

Arguments

-
mean
+ + +
mean

The mean of a gamma distribution

@@ -114,9 +116,7 @@

Arguments

Value

- - -

A list that includes the size and prob parameters of the neg binom +

A list that includes the size and prob parameters of the neg binom distribution
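Under the common NB2 parameterization (variance = mean + dispersion × mean²), this conversion is closed-form. An illustrative sketch under that assumed parameterization (hypothetical names; not necessarily the package's internal code):

```python
def nb_size_prob(mean, dispersion):
    """Convert (mean, dispersion) to the (size, prob) parameters of a
    negative binomial, assuming variance = mean + dispersion * mean**2."""
    size = 1.0 / dispersion
    prob = size / (size + mean)
    return size, prob
```

As a check, the negative binomial mean size × (1 − prob) / prob recovers the original mean.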

@@ -158,15 +158,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/simstudy-deprecated.html b/reference/simstudy-deprecated.html index ce2dcdf9..55f212f5 100644 --- a/reference/simstudy-deprecated.html +++ b/reference/simstudy-deprecated.html @@ -4,7 +4,7 @@ - +
@@ -32,7 +32,7 @@
- +
@@ -103,7 +103,7 @@

Deprecated functions in simstudy

Details

- +
  • genCorOrdCat: This function is deprecated, and will be removed in the future. Use genOrdCat with asFactor = FALSE instead.

  • catProbs: This function is deprecated, and will be removed in the future. @@ -122,15 +122,15 @@

    Details

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/simstudy-package.html b/reference/simstudy-package.html index 1e34e188..4a09aa91 100644 --- a/reference/simstudy-package.html +++ b/reference/simstudy-package.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -125,15 +125,15 @@

Author

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/survGetParams.html b/reference/survGetParams.html index 892c5c87..8f76b67a 100644 --- a/reference/survGetParams.html +++ b/reference/survGetParams.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,16 +104,16 @@

Get survival curve parameters

Arguments

-
points
-

A list of two-element vectors specifying the desired time and + + +

points
+

A list of two-element vectors specifying the desired time and probability pairs that define the desired survival curve

Value

- - -

A vector of parameters that define the survival curve optimized for +

A vector of parameters that define the survival curve optimized for the target points. The first element of the vector represents the "f" parameter and the second element represents the "shape" parameter.

@@ -137,15 +137,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/survParamPlot.html b/reference/survParamPlot.html index 43084933..7e61d179 100644 --- a/reference/survParamPlot.html +++ b/reference/survParamPlot.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Plot survival curves

Arguments

-
formula
+ + +
formula

This is the "formula" parameter of the Weibull-based survival curve that can be used to define the scale of the distribution.

@@ -114,31 +116,29 @@

Arguments

points
-

An optional list of two-element vectors specifying the desired +

An optional list of two-element vectors specifying the desired time and probability pairs that define the desired survival curve. If no list is specified then the plot will not include any points.

n
-

The number of points along the curve that will be used to +

The number of points along the curve that will be used to define the line. Defaults to 100.

scale
-

An optional scale parameter that defaults to 1. If the value is +

An optional scale parameter that defaults to 1. If the value is 1, the scale of the distribution is determined entirely by the argument "f".

limits
-

A vector of length 2 that specifies x-axis limits for the plot. +

A vector of length 2 that specifies x-axis limits for the plot. The default is NULL, in which case no limits are imposed.

Value

- - -

A ggplot of the survival curve defined by the specified parameters. +

A ggplot of the survival curve defined by the specified parameters. If the argument points is specified, the plot will include them

@@ -166,15 +166,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/trimData.html b/reference/trimData.html index d2a99065..6fcbc16a 100644 --- a/reference/trimData.html +++ b/reference/trimData.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Trim longitudinal data file once an event has occurred

Arguments

-
dtOld
+ + +
dtOld

name of data table to be trimmed

@@ -122,9 +124,7 @@

Arguments

Value

- - -

an updated data.table removes all rows following the first event for each +

an updated data.table that removes all rows following the first event for each individual

@@ -180,15 +180,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/trtAssign.html b/reference/trtAssign.html index d75b35dc..8939b54c 100644 --- a/reference/trtAssign.html +++ b/reference/trtAssign.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -111,7 +111,9 @@

Assign treatment

Arguments

-
dtName
+ + +
dtName

data table

@@ -139,9 +141,7 @@

Arguments

Value

- - -

An integer (group) ranging from 1 to length of the +

An integer (group) ranging from 1 to the length of the probability vector

@@ -301,15 +301,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/trtObserve.html b/reference/trtObserve.html index a75b9de7..d375c90b 100644 --- a/reference/trtObserve.html +++ b/reference/trtObserve.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Observed exposure or treatment

Arguments

-
dt
+ + +
dt

data table

@@ -124,9 +126,7 @@

Arguments

Value

- - -

An integer (group) ranging from 1 to length of the probability vector

+

An integer (group) ranging from 1 to the length of the probability vector

See also

@@ -238,15 +238,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/trtStepWedge.html b/reference/trtStepWedge.html index 5e058b50..2ed01046 100644 --- a/reference/trtStepWedge.html +++ b/reference/trtStepWedge.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -114,7 +114,9 @@

Assign treatment for stepped-wedge design

Arguments

-
dtName
+ + +
dtName

data table

@@ -154,9 +156,7 @@

Arguments

Value

- - -

A data.table with the added treatment assignment

+

A data.table with the added treatment assignment

See also

@@ -229,15 +229,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/updateDef.html b/reference/updateDef.html index 2f18479e..9798534a 100644 --- a/reference/updateDef.html +++ b/reference/updateDef.html @@ -5,7 +5,7 @@ - +
@@ -33,7 +33,7 @@
- +
@@ -116,7 +116,9 @@

Update definition table

Arguments

-
dtDefs
+ + +
dtDefs

Definition table that will be modified

@@ -147,9 +149,7 @@

Arguments

Value

- - -

The updated data definition table.

+

The updated data definition table.

@@ -226,15 +226,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/updateDefAdd.html b/reference/updateDefAdd.html index 88f21bfb..4e7e29db 100644 --- a/reference/updateDefAdd.html +++ b/reference/updateDefAdd.html @@ -5,7 +5,7 @@ - +
@@ -33,7 +33,7 @@
- +
@@ -116,7 +116,9 @@

Update definition table

Arguments

-
dtDefs
+ + +
dtDefs

Definition table that will be modified

@@ -146,9 +148,7 @@

Arguments

Value

- - -

A string that represents the desired formula

+

A string that represents the desired formula

@@ -217,15 +217,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/viewBasis.html b/reference/viewBasis.html index 9f7e3a19..89448475 100644 --- a/reference/viewBasis.html +++ b/reference/viewBasis.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Plot basis spline functions

Arguments

-
knots
+ + +
knots

A vector of values between 0 and 1, specifying cut-points for splines

@@ -114,9 +116,7 @@

Arguments

Value

- - -

A ggplot object that contains a plot of the basis functions. In total, there +

A ggplot object that contains a plot of the basis functions. In total, there will be length(knots) + degree + 1 functions plotted.

@@ -147,15 +147,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/reference/viewSplines.html b/reference/viewSplines.html index f0f7a542..c9c7b8e7 100644 --- a/reference/viewSplines.html +++ b/reference/viewSplines.html @@ -3,7 +3,7 @@ - +
@@ -31,7 +31,7 @@
- +
@@ -104,7 +104,9 @@

Plot spline curves

Arguments

-
knots
+ + +
knots

A vector of values between 0 and 1, specifying cut-points for splines

@@ -121,9 +123,7 @@

Arguments

Value

- - -

A ggplot object that contains a plot of the spline curves. The number of +

A ggplot object that contains a plot of the spline curves. The number of spline curves in the plot will equal the number of columns in the matrix (or it will equal 1 if theta is a vector).

@@ -161,15 +161,15 @@

Examples

-

Site built with pkgdown 2.0.9.9000.

+

Site built with pkgdown 2.1.0.9000.

- - + + diff --git a/sitemap.xml b/sitemap.xml index a7b091fe..51ef4443 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1,264 +1,92 @@ - - - - /404.html - - - /CODE_OF_CONDUCT.html - - - /CONTRIBUTING.html - - - /LICENSE-text.html - - - /SUPPORT.html - - - /articles/clustered.html - - - /articles/corelationmat.html - - - /articles/correlated.html - - - /articles/customdist.html - - - /articles/double_dot_extension.html - - - /articles/index.html - - - /articles/logisticCoefs.html - - - /articles/longitudinal.html - - - /articles/missing.html - - - /articles/ordinal.html - - - /articles/simstudy.html - - - /articles/spline.html - - - /articles/survival.html - - - /articles/treat_and_exposure.html - - - /authors.html - - - /index.html - - - /news/index.html - - - /reference/addColumns.html - - - /reference/addCompRisk.html - - - /reference/addCondition.html - - - /reference/addCorData.html - - - /reference/addCorFlex.html - - - /reference/addCorGen.html - - - /reference/addMarkov.html - - - /reference/addMultiFac.html - - - /reference/addPeriods.html - - - /reference/addSynthetic.html - - - /reference/betaGetShapes.html - - - /reference/blockDecayMat.html - - - /reference/blockExchangeMat.html - - - /reference/catProbs.html - - - /reference/defCondition.html - - - /reference/defData.html - - - /reference/defDataAdd.html - - - /reference/defMiss.html - - - /reference/defRead.html - - - /reference/defReadAdd.html - - - /reference/defReadCond.html - - - /reference/defRepeat.html - - - /reference/defRepeatAdd.html - - - /reference/defSurv.html - - - /reference/delColumns.html - - - /reference/distributions.html - - - /reference/gammaGetShapeRate.html - - - /reference/genCatFormula.html - - - /reference/genCluster.html - - - /reference/genCorData.html - - - /reference/genCorFlex.html - - - /reference/genCorGen.html - - - /reference/genCorMat.html - - - /reference/genCorOrdCat.html - - - /reference/genData.html - - - /reference/genDummy.html - - - /reference/genFactor.html 
- - - /reference/genFormula.html - - - /reference/genMarkov.html - - - /reference/genMiss.html - - - /reference/genMixFormula.html - - - /reference/genMultiFac.html - - - /reference/genNthEvent.html - - - /reference/genObs.html - - - /reference/genOrdCat.html - - - /reference/genSpline.html - - - /reference/genSurv.html - - - /reference/genSynthetic.html - - - /reference/iccRE.html - - - /reference/index.html - - - /reference/logisticCoefs.html - - - /reference/mergeData.html - - - /reference/negbinomGetSizeProb.html - - - /reference/simstudy-deprecated.html - - - /reference/simstudy-package.html - - - /reference/survGetParams.html - - - /reference/survParamPlot.html - - - /reference/trimData.html - - - /reference/trtAssign.html - - - /reference/trtObserve.html - - - /reference/trtStepWedge.html - - - /reference/updateDef.html - - - /reference/updateDefAdd.html - - - /reference/viewBasis.html - - - /reference/viewSplines.html - + +/404.html +/CODE_OF_CONDUCT.html +/CONTRIBUTING.html +/LICENSE-text.html +/SUPPORT.html +/articles/clustered.html +/articles/corelationmat.html +/articles/correlated.html +/articles/customdist.html +/articles/double_dot_extension.html +/articles/index.html +/articles/logisticCoefs.html +/articles/longitudinal.html +/articles/missing.html +/articles/ordinal.html +/articles/simstudy.html +/articles/spline.html +/articles/survival.html +/articles/treat_and_exposure.html +/authors.html +/index.html +/news/index.html +/reference/addColumns.html +/reference/addCompRisk.html +/reference/addCondition.html +/reference/addCorData.html +/reference/addCorFlex.html +/reference/addCorGen.html +/reference/addDataDensity.html +/reference/addMarkov.html +/reference/addMultiFac.html +/reference/addPeriods.html +/reference/addSynthetic.html +/reference/betaGetShapes.html +/reference/blockDecayMat.html +/reference/blockExchangeMat.html +/reference/catProbs.html +/reference/defCondition.html +/reference/defData.html +/reference/defDataAdd.html 
+/reference/defMiss.html +/reference/defRead.html +/reference/defReadAdd.html +/reference/defReadCond.html +/reference/defRepeat.html +/reference/defRepeatAdd.html +/reference/defSurv.html +/reference/delColumns.html +/reference/distributions.html +/reference/gammaGetShapeRate.html +/reference/genCatFormula.html +/reference/genCluster.html +/reference/genCorData.html +/reference/genCorFlex.html +/reference/genCorGen.html +/reference/genCorMat.html +/reference/genCorOrdCat.html +/reference/genData.html +/reference/genDataDensity.html +/reference/genDummy.html +/reference/genFactor.html +/reference/genFormula.html +/reference/genMarkov.html +/reference/genMiss.html +/reference/genMixFormula.html +/reference/genMultiFac.html +/reference/genNthEvent.html +/reference/genObs.html +/reference/genOrdCat.html +/reference/genSpline.html +/reference/genSurv.html +/reference/genSynthetic.html +/reference/iccRE.html +/reference/index.html +/reference/logisticCoefs.html +/reference/mergeData.html +/reference/negbinomGetSizeProb.html +/reference/simstudy-deprecated.html +/reference/simstudy-package.html +/reference/survGetParams.html +/reference/survParamPlot.html +/reference/trimData.html +/reference/trtAssign.html +/reference/trtObserve.html +/reference/trtStepWedge.html +/reference/updateDef.html +/reference/updateDefAdd.html +/reference/viewBasis.html +/reference/viewSplines.html +