
wonky plot from check_model() on a glmmTMB example #654

Open · Tracked by #698 ... · Fixed by #680
bbolker opened this issue Nov 21, 2023 · 33 comments
Labels: 3 investigators ❔❓ Need to look further into this issue · Bug 🐛 Something isn't working

bbolker commented Nov 21, 2023

This is from an nbinom1 model - the "overdispersion" and "normality of residuals" plots both look odd ...

[Screenshot from 2023-11-21: the check_model() grid, with odd-looking overdispersion and normality-of-residuals panels]

library(glmmTMB)
library(dplyr) ## for mutate_at, %>%

# Build example data
x <- c("A", "B", "C", "D")
time <- rep(x, each = 20, times = 3) # time factor
y <- c("exposed", "ref1", "ref2")
lake <- rep(y, each = 80) # lake factor
set.seed(123)
min <- runif(n = 240, min = 4.5, max = 5.5) # mins used in model offset
set.seed(123)
count <- rnbinom(n = 240, mu = 10, size = 100) # randomly generated negative binomial data

# make data frame
dat <- as.data.frame(cbind(time, lake, min, count))
dat <- dat %>%
  mutate_at(c("min", "count"), as.numeric)

# remove one combination of factors to make the example rank deficient
# (all observations from time A and lake ref1)
dat2 <- filter(dat, time != "A" | lake != "ref1")

model <- glmmTMB(count ~ time * lake, family = nbinom1,
                 control = glmmTMBControl(rank_check = "adjust"),
                 offset = log(min), data = dat2)

library(performance)
check_model(model)
strengejacke added the labels Bug 🐛 Something isn't working and 3 investigators ❔❓ Need to look further into this issue on Nov 22, 2023
strengejacke (Member) commented

A quick guess is that inappropriate residuals are calculated. This is the code to detect overdispersion for this specific model:

    d <- data.frame(Predicted = stats::predict(model, type = "response"))
    d$Residuals <- insight::get_response(model) - as.vector(d$Predicted)
    d$Res2 <- d$Residuals^2
    d$V <- d$Predicted * (1 + d$Predicted / insight::get_sigma(model))
    d$StdRes <- insight::get_residuals(model, type = "pearson")

And the qq-plot for glmmTMB simply uses stats::residuals(model).

If you don't have a suggestion for calculating the most appropriate residuals, the best solution is probably to go with simulated residuals and diagnostics based on DHARMa (#643)
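
For reference, a minimal sketch of that DHARMa-based route, using only exported DHARMa functions (assuming model is the glmmTMB fit from the reprex above):

library(DHARMa)
sim <- simulateResiduals(model)  # scaled quantile residuals via simulation
plotQQunif(sim)                  # QQ plot against the uniform reference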

bbolker (Author) commented Feb 5, 2024

OK, the Q-Q plot should definitely be using stats::residuals(model, type = "pearson"). Or residuals(model, type = "deviance") if that's what the y-axis label says ... (we can't yet get standardized deviance residuals, because we still haven't got machinery to extract the hat matrix ...)
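
A minimal base-R sketch of that suggestion (assuming model is the glmmTMB fit from the reprex above):

r <- residuals(model, type = "pearson")
qqnorm(r, main = "Pearson residuals")  # QQ plot of Pearson residuals
qqline(r)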

I don't know enough about how the columns in the overdispersion machinery are being used downstream, or why you need to compute the variance yourself - it seems like it should be possible to get it more reliably with built-in glmmTMB machinery (but I haven't dug around there ...)


strengejacke (Member) commented

> I don't know enough about how the columns in the overdispersion machinery are being used downstream, or why you need to compute the variance yourself - it seems like it should be possible to get it more reliably with built-in glmmTMB machinery (but I haven't dug around there ...)

@bwiernik I think you wrote most/all of the code for the overdispersion plot/check.

strengejacke added a commit to easystats/see that referenced this issue Feb 5, 2024
bwiernik (Contributor) commented Feb 5, 2024

> Not sure what the reason was for this decision (i.e. using halfnorm) - I can remember we had good reasons, but can't remember details. @easystats/core-team any ideas? @bbolker what would you suggest, the new solution presented below? (as long as we don't rely on DHARMa)

Base R switched to using half-normal for binomial and count models. I would suggest we keep using it for glmmTMB for the relevant families?
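
A minimal half-normal plot sketch (hedged: model is a fitted glm; for glmmTMB, deviance residuals may not be available for all families, and the plotting positions below are the usual half-normal quantiles):

r <- sort(abs(residuals(model, type = "deviance")))
n <- length(r)
q <- qnorm((n + seq_len(n)) / (2 * n + 1))  # half-normal theoretical quantiles
plot(q, r, xlab = "half-normal quantiles", ylab = "|deviance residuals|")
abline(0, 1, lty = 2)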

bwiernik (Contributor) commented Feb 5, 2024

> I don't know enough about how the columns in the overdispersion machinery are being used downstream, or why you need to compute the variance yourself - it seems like it should be possible to get it more reliably with built-in glmmTMB machinery (but I haven't dug around there ...)

> @bwiernik I think you wrote most/all of the code for the overdispersion plot/check.

I think I just wrote one set of code there and didn't check if anything was already available for glmmTMB models. It would be good to use existing machinery there if possible.

strengejacke (Member) commented

> I would suggest we keep using it for glmmTMB for the relevant families?

That would be poisson and binomial, but not nbinom?

strengejacke (Member) commented

> It would be good to use existing machinery there if possible.

Ok - but I'm not sure how to revise the relevant code section. I'd be happy if someone could make a proposal.

bwiernik (Contributor) commented Feb 5, 2024

> I would suggest we keep using it for glmmTMB for the relevant families?

> That would be poisson and binomial, but not nbinom?

Base R uses a half-normal with absolute standardized deviance residuals for any family of model fit with glm(). We should probably be a little smarter than that and limit it to non-Gaussian models.

If we can't get standardized residuals, then we probably need to adjust the reference distribution to not be a standard normal/half-normal.
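
For plain glm() fits, the standardized deviance residuals base R uses are available directly via rstandard(); for glmmTMB they are not (no hat-matrix machinery yet, per the comment above), which is what the fallback in the previous paragraph is about. A sketch of the glm case, reusing dat2 from the reprex:

fit <- glm(count ~ time * lake + offset(log(min)), family = poisson, data = dat2)
r <- abs(rstandard(fit, type = "deviance"))  # |standardized deviance residuals|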

bwiernik (Contributor) commented Feb 5, 2024

Let me dig into these plots a bit and see what the most justifiable default should be.

strengejacke (Member) commented

> or why you need to compute the variance yourself - it seems like it should be possible to get it more reliably with built-in glmmTMB machinery (but I haven't dug around there ...)

@bbolker we compute the per-observation variance - not sure how to do this with family$variance()?

bbolker (Author) commented Feb 5, 2024

Hmm. In principle it would be nice if it were sigma(model)^2*family(model)$variance(predict(model, type = "response")) (at least for models without ZI or dispersion components ...), but sigma.glmmTMB is wonky - returns different values, not necessarily equal to the sqrt of the scaling factor of the $variance() function, for different families (see ?sigma.glmmTMB): maybe there needs to be another function that will get us what we want (or a flag to sigma.glmmTMB ?)
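
Spelled out as code, the hoped-for invariant (hedged: this only holds when sigma() really is the scale factor of the $variance() function, which is exactly what's wonky here):

mu <- predict(model, type = "response")
v  <- sigma(model)^2 * family(model)$variance(mu)  # per-observation variance, in principle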

strengejacke added a commit to easystats/see that referenced this issue Feb 5, 2024
bwiernik (Contributor) commented Feb 5, 2024

Theory on the half-normal plot for deviance residuals: https://www.jstatsoft.org/article/view/v081i10

strengejacke (Member) commented

This looks better for glmmTMB now, but the overdispersion plot for glm.nb looks strange to me. Maybe it's correct, though? See #680

library(glmmTMB)
library(performance)
library(MASS)
library(dplyr) ## for mutate_at, %>%
#> 
#> Attaching package: 'dplyr'
#> The following object is masked from 'package:MASS':
#> 
#>     select
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union

x <- c("A", "B", "C", "D")
time <- rep(x, each = 20, times = 3) # time factor
y <- c("exposed", "ref1", "ref2")
lake <- rep(y, each = 80) # lake factor
set.seed(123)
min <- runif(n = 240, min = 4.5, max = 5.5) # mins used in model offset
set.seed(123)
count <- rnbinom(n = 240, mu = 10, size = 100) # randomly generated negative binomial data

# make data frame
dat <- as.data.frame(cbind(time, lake, min, count))
dat <- dat %>%
  mutate_at(c("min", "count"), as.numeric)

# remove one combination of factors to make example rank deficient (all observations from time A and lake ref1)
dat2 <- filter(dat, time != "A" | lake != "ref1")

model <- glmmTMB(count ~ time * lake,
  family = nbinom1,
  control = glmmTMBControl(rank_check = "adjust"),
  offset = log(min), data = dat2
)
#> dropping columns from rank-deficient conditional model: timeD:lakeref1
check_model(model)
#> `check_outliers()` does not yet support models of class `glmmTMB`.

m2 <- glm.nb(count ~ time * lake + offset(log(min)), data = dat2)
check_model(m2)

Created on 2024-02-05 with reprex v2.1.0

bbolker (Author) commented Feb 6, 2024

We still need to be careful. m3 <- update(model, family = nbinom2); plot(check_overdispersion(m3)) is also busted, precisely because sigma() has the inverse meaning for nbinom2 (I haven't dug in enough to see what's up with glm.nb).

strengejacke (Member) commented

> precisely because sigma() has the inverse meaning for nbinom2

We can add different handling for nbinom2, but what would be the solution in this case? 1/sigma?

strengejacke (Member) commented

If the new code is correct, this would be the result.

First, the new code:

  if (faminfo$is_negbin && !faminfo$is_zero_inflated) {
    if (inherits(model, "glmmTMB")) {
      d <- data.frame(Predicted = stats::predict(model, type = "response"))
      d$Residuals <- insight::get_residuals(model, type = "pearson")
      d$Res2 <- d$Residuals^2
      d$StdRes <- insight::get_residuals(model, type = "pearson")
      if (faminfo$family == "nbinom1") {
        # for nbinom1, we can use "sigma()"
        d$V <- insight::get_sigma(model)^2 * stats::family(model)$variance(d$Predicted)
      } else {
        # for nbinom2, "sigma()" has "inverse meaning" (see #654)
        d$V <- (1 / insight::get_sigma(model)^2) * stats::family(model)$variance(d$Predicted)
      }
    } else {
      ## FIXME: this is not correct for glm.nb models?
      d <- data.frame(Predicted = stats::predict(model, type = "response"))
      d$Residuals <- insight::get_response(model) - as.vector(d$Predicted)
      d$Res2 <- d$Residuals^2
      d$V <- d$Predicted * (1 + d$Predicted / insight::get_sigma(model))
      d$StdRes <- insight::get_residuals(model, type = "pearson")
    }
  }

Result:

model <- glmmTMB(count ~ time * lake,
  family = nbinom1,
  control = glmmTMBControl(rank_check = "adjust"),
  offset = log(min), data = dat2
)
m3 <- update(model, family = nbinom2)

p1 <- plot(check_overdispersion(model))
p2 <- plot(check_overdispersion(m3))

p1 + p2

bbolker (Author) commented Feb 6, 2024

Looks good (although obviously glm.nb still needs to be fixed ...). Moving forward, I'd like to try to find a way to fix the inconsistency of sigma() definitions (see also glmmTMB/glmmTMB#951). Maybe we could provide some kind of variance function that gives the predicted variance as a function of mu directly (rather than the predicted scaled variance ...)

strengejacke (Member) commented

While we're talking about variance, documentation etc.:

This is the variance function for beta families:

glmmTMB::beta_family()$variance
#> function (mu) 
#> {
#>     mu * (1 - mu)
#> }
#> <bytecode: 0x000001dc83b3ae58>
#> <environment: 0x000001dc83b3aa30>

The docs, however, say:
[Screenshot of the docs: the beta-family variance is given as mu * (1 - mu) / (1 + phi)]

Could you clarify which one is correct? I used the one from the docs in insight::get_variance() instead of family$variance(), and if the latter is correct (not the docs), then this could resolve some issues (https://github.com/easystats/insight/issues?q=is%3Aissue+is%3Aopen+label%3Aget_variance)
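
A small comparison sketch (hedged: m is a hypothetical glmmTMB beta_family fit, and phi is taken from sigma(), glmmTMB's dispersion parameter for beta models):

mu  <- predict(m, type = "response")
phi <- sigma(m)
v_family <- glmmTMB::beta_family()$variance(mu)  # mu * (1 - mu), per the function
v_docs   <- mu * (1 - mu) / (1 + phi)            # per the docs screenshot above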

bwiernik (Contributor) commented Feb 6, 2024

> Looks good (although obviously glm.nb still needs to be fixed ...). Moving forward, I'd like to try to find a way to fix the inconsistency of sigma() definitions (see also glmmTMB/glmmTMB#951). Maybe we could provide some kind of variance function that gives the predicted variance as a function of mu directly (rather than the predicted scaled variance ...)

A "variance" option in predict() would be super useful

bbolker (Author) commented Feb 6, 2024

That's the thing: the $variance() function as defined/implemented in base R does not return the variance for a given mean, it returns the scaled variance (even though ?family says it is "the variance as a function of the mean"). For example, gaussian$variance is function (mu) rep.int(1, length(mu)). We probably screwed up when we allowed sigma.glmmTMB to return things that are not scale parameters; it should be the case that

sigma(model)^2*family(model)$variance(predict(model, type = "response"))

works generally ...

It could be worth making breaking changes to sigma.glmmTMB to fix this ... although I guess adding a type = "variance" option to predict would be the most straightforward/least-breaking solution ...
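
For reference, base R's family objects show the "scaled variance" point directly:

gaussian()$variance(c(1, 5, 10))  # 1 1 1 -- the sigma^2 scale factor is omitted
poisson()$variance(c(1, 5, 10))   # 1 5 10 -- equals mu; dispersion is fixed at 1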

strengejacke (Member) commented

I think this is where my lack of statistical expertise prevents me from understanding how to proceed ;-)

The initial code base in insight::get_variance() was drafted by you, and I tried to enhance it for further model families. Based on Nakagawa et al. 2017, you commented:

> in general want log(1+var(x)/mu^2)

https://github.com/easystats/insight/blob/5f886703cf2303d8f0d0c44b89b29dd092046360/R/compute_variances.R#L603

This is, e.g., what is done for the beta-family, based on the docs (see my screenshot above):

# Get distributional variance for beta-family
# ----------------------------------------------
.variance_family_beta <- function(x, mu, phi) {
  if (inherits(x, "MixMod")) {
    stats::family(x)$variance(mu)
  } else {
    mu * (1 - mu) / (1 + phi)
  }
}

This sometimes leads to weird results when computing R2 or ICC for mixed models. For most families, performance::r2_nakagawa() is in line with results from other packages. However, "new" families, which are not yet supported by other packages, sometimes produce weird results for performance::r2_nakagawa(). I'm not sure if the distributional variance (calculated in insight:::.compute_variance_distribution() and insight:::.variance_distributional()) is accurate for all model families.

Is there some way to "validate" the results against something? E.g. against non-mixed or Bayesian models, or some simulated data where the R2 is known? (not sure how to simulate such data, though)
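
One possible validation sketch (a toy under strong assumptions, not an established benchmark): simulate a Gaussian mixed model with known variance components, so the theoretical Nakagawa R2 is known in advance and can be compared against r2_nakagawa():

library(lme4)
library(performance)
set.seed(1)
n_groups <- 100
n_per <- 20
g <- factor(rep(seq_len(n_groups), each = n_per))
x <- rnorm(n_groups * n_per)
b <- 0.5      # true fixed-effect slope
tau2 <- 0.5   # true random-intercept variance
sigma2 <- 1   # true residual variance
y <- b * x + rnorm(n_groups, sd = sqrt(tau2))[g] +
  rnorm(n_groups * n_per, sd = sqrt(sigma2))
m <- lmer(y ~ x + (1 | g))
# theoretical marginal R2: var(b * x) / (var(b * x) + tau2 + sigma2)
b^2 / (b^2 + tau2 + sigma2)  # 0.25 / 1.75 ~ 0.143, since var(x) = 1
r2_nakagawa(m)               # should come out close to the theoretical value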

strengejacke (Member) commented

The initial issue should be fixed, but I'm re-opening for the points discussed later in this thread.

strengejacke (Member) commented

The Q-Q plot is now based on DHARMa (see #643), but we still need to think about the overdispersion plots.

library(glmmTMB)
library(performance)
library(datawizard)
# Build example data
x <- c("A", "B", "C", "D")
time <- rep(x, each = 20, times = 3) # time factor
y <- c("exposed", "ref1", "ref2")
lake <- rep(y, each = 80) # lake factor
set.seed(123)
min <- runif(n = 240, min = 4.5, max = 5.5) # mins used in model offset
set.seed(123)
count <- rnbinom(n = 240, mu = 10, size = 100) # randomly generated negative binomial data

# make data frame
dat <- as.data.frame(cbind(time, lake, min, count))
dat <- dat |>
  data_modify(.at = c("min", "count"), .modify = as.numeric)

# remove one combination of factors to make example rank deficient (all observations from time A and lake ref1)
dat2 <- data_filter(dat, time != "A" | lake != "ref1")

model <- glmmTMB(count ~ time * lake,
  family = nbinom1,
  control = glmmTMBControl(rank_check = "adjust"),
  offset = log(min), data = dat2
)
#> dropping columns from rank-deficient conditional model: timeD:lakeref1

check_model(model)
#> `check_outliers()` does not yet support models of class `glmmTMB`.

Created on 2024-03-16 with reprex v2.1.0

strengejacke (Member) commented Mar 17, 2024

@bwiernik Based on my comments here: #643 (comment)

the question is whether we need to do anything about the code for the overdispersion plot. The current code relies on (Pearson) residuals. To check for over-/underdispersion in more complex models, we now use simulated residuals based on the DHARMa package. Could these residuals be used in the code that creates the overdispersion plots?

A draft to play with is .new_diag_overdispersion(model, ...). I'm not sure this code really works well, since I don't fully understand the implementation in .diag_overdispersion(model, ...) or how it would "translate" into a function using simulated residuals.
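
For experimenting, a minimal DHARMa-based sketch (exported DHARMa functions only; whether these pieces map onto the columns the current plot expects is exactly the open question):

library(DHARMa)
sim <- simulateResiduals(model)  # scaled quantile residuals
testDispersion(sim)              # simulation-based dispersion test
plotResiduals(sim)               # residuals vs. predicted panel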

strengejacke (Member) commented

Or is the major concern still the variance function and/or sigma?

bwiernik (Contributor) commented

I think we should be able to use the DHARMa residuals. Let me take a look.

sj-perry commented

I've found a similar issue with the QQ plots from models fit with lme4. I didn't want to open another issue because it seems highly related to this one, but let me know if it should be a separate issue.

library(performance)
library(lme4)

data("sleepstudy")

model <- lmer(Reaction ~ Days + (1+Days|Subject),
              data = sleepstudy)

check_model(model, check = "qq")

check_model(model, check = "normality")

qqnorm(resid(model))
qqline(resid(model))

Now, the QQ plot created by check_model(model, check = "qq") looks like this:

[Image: QQ plot from check_model(model, check = "qq")]

But that doesn't align with the output of the "normality" check:

[Image: output of check_model(model, check = "normality")]

Or with the base R QQ plot that I made:

[Image: base R QQ plot from qqnorm() and qqline()]

I've installed the newest version of performance, but the issue persists. Here's my session information:

R version 4.4.0 (2024-04-24 ucrt)
Platform: x86_64-w64-mingw32/x64
Running under: Windows 10 x64 (build 19045)

Matrix products: default


locale:
[1] LC_COLLATE=English_Canada.utf8  LC_CTYPE=English_Canada.utf8   
[3] LC_MONETARY=English_Canada.utf8 LC_NUMERIC=C                   
[5] LC_TIME=English_Canada.utf8    

time zone: Europe/Berlin
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] fabricatr_1.0.2        lme4_1.1-35.3          Matrix_1.7-0          
[4] ggthemes_5.1.0         ggplot2_3.5.1          marginaleffects_0.21.0
[7] tidyr_1.3.1            performance_0.12.4

mattansb (Member) commented

Does installing {qqplotr} solve the issue?

mattansb (Member) commented

This is what I'm getting now from @bbolker's example:

library(glmmTMB)
library(dplyr) ## for mutate_at, %>%
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
# Build example data
x <- c("A", "B", "C", "D")
time <- rep(x, each = 20, times = 3) # time factor
y <- c("exposed", "ref1", "ref2")
lake <- rep(y, each = 80) # lake factor
set.seed(123)
min <- runif(n = 240, min = 4.5, max = 5.5) # mins used in model offset
set.seed(123)
count <- rnbinom(n = 240, mu = 10, size = 100) # randomly generated negative binomial data

# make data frame
dat <- as.data.frame(cbind(time, lake, min, count))
dat <- dat %>%
  mutate_at(c("min", "count"), as.numeric)

# remove one combination of factors to make the example rank deficient
# (all observations from time A and lake ref1)
dat2 <- filter(dat, time != "A" | lake != "ref1")

model <- glmmTMB(count ~ time * lake, family = nbinom1,
                 control = glmmTMBControl(rank_check = "adjust"),
                 offset = log(min), data = dat2)
#> dropping columns from rank-deficient conditional model: timeD:lakeref1

library(performance)
check_model(model)
#> `check_outliers()` does not yet support models of class `glmmTMB`.

Created on 2024-12-15 with reprex v2.1.1

Session info
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#>  setting  value
#>  version  R version 4.4.1 (2024-06-14 ucrt)
#>  os       Windows 11 x64 (build 22631)
#>  system   x86_64, mingw32
#>  ui       RTerm
#>  language (EN)
#>  collate  English_Israel.utf8
#>  ctype    English_Israel.utf8
#>  tz       Asia/Jerusalem
#>  date     2024-12-15
#>  pandoc   3.2 @ c:\\Users\\user\\AppData\\Local\\Programs\\Positron\\resources\\app\\quarto\\bin\\tools/ (via rmarkdown)
#> 
#> ─ Packages ───────────────────────────────────────────────────────────────────
#>  ! package           * version    date (UTC) lib source
#>    bayestestR          0.15.0     2024-10-17 [1] CRAN (R 4.4.1)
#>    bitops              1.0-9      2024-10-03 [1] CRAN (R 4.4.1)
#>    boot                1.3-30     2024-02-26 [2] CRAN (R 4.4.1)
#>    caTools             1.18.3     2024-09-04 [1] CRAN (R 4.4.1)
#>    cli                 3.6.3      2024-06-21 [1] CRAN (R 4.4.1)
#>    coda                0.19-4.1   2024-01-31 [1] CRAN (R 4.4.1)
#>    codetools           0.2-20     2024-03-31 [2] CRAN (R 4.4.1)
#>    colorspace          2.1-1      2024-07-26 [1] CRAN (R 4.4.1)
#>    curl                5.2.3      2024-09-20 [1] CRAN (R 4.4.1)
#>    datawizard          0.13.0     2024-10-05 [1] CRAN (R 4.4.1)
#>    DEoptimR            1.1-3      2023-10-07 [1] CRAN (R 4.4.0)
#>    DHARMa              0.4.7      2024-10-18 [1] CRAN (R 4.4.1)
#>    digest              0.6.37     2024-08-19 [1] CRAN (R 4.4.1)
#>    doParallel          1.0.17     2022-02-07 [1] CRAN (R 4.4.1)
#>    dplyr             * 1.1.4      2023-11-17 [1] CRAN (R 4.4.1)
#>    emmeans             1.10.5     2024-10-14 [1] CRAN (R 4.4.1)
#>    estimability        1.5.1      2024-05-12 [1] CRAN (R 4.4.1)
#>    evaluate            1.0.1      2024-10-10 [1] CRAN (R 4.4.1)
#>    fansi               1.0.6      2023-12-08 [1] CRAN (R 4.4.1)
#>    farver              2.1.2      2024-05-13 [1] CRAN (R 4.4.1)
#>    fastmap             1.2.0      2024-05-15 [1] CRAN (R 4.4.1)
#>    foreach             1.5.2      2022-02-02 [1] CRAN (R 4.4.1)
#>    fs                  1.6.5      2024-10-30 [1] CRAN (R 4.4.1)
#>    generics            0.1.3      2022-07-05 [1] CRAN (R 4.4.1)
#>    ggplot2             3.5.1      2024-04-23 [1] CRAN (R 4.4.1)
#>    glmmTMB           * 1.1.10     2024-09-26 [1] CRAN (R 4.4.1)
#>    glue                1.8.0      2024-09-30 [1] CRAN (R 4.4.1)
#>    gtable              0.3.6      2024-10-25 [1] CRAN (R 4.4.1)
#>    highr               0.11       2024-05-26 [1] CRAN (R 4.4.1)
#>    htmltools           0.5.8.1    2024-04-04 [1] CRAN (R 4.4.1)
#>    insight             1.0.0.2    2024-12-12 [1] local
#>    iterators           1.0.14     2022-02-05 [1] CRAN (R 4.4.1)
#>    knitr               1.48       2024-07-07 [1] CRAN (R 4.4.1)
#>    labeling            0.4.3      2023-08-29 [1] CRAN (R 4.4.0)
#>    lattice             0.22-6     2024-03-20 [2] CRAN (R 4.4.1)
#>    lifecycle           1.0.4      2023-11-07 [1] CRAN (R 4.4.1)
#>    lme4                1.1-35.5   2024-07-03 [1] CRAN (R 4.4.1)
#>    magrittr            2.0.3      2022-03-30 [1] CRAN (R 4.4.1)
#>    MASS                7.3-60.2   2024-04-26 [2] CRAN (R 4.4.1)
#>    Matrix              1.7-0      2024-04-26 [2] CRAN (R 4.4.1)
#>    mgcv                1.9-1      2023-12-21 [2] CRAN (R 4.4.1)
#>    minqa               1.2.8      2024-08-17 [1] CRAN (R 4.4.1)
#>    multcomp            1.4-26     2024-07-18 [1] CRAN (R 4.4.1)
#>    munsell             0.5.1      2024-04-01 [1] CRAN (R 4.4.1)
#>    mvtnorm             1.3-1      2024-09-03 [1] CRAN (R 4.4.1)
#>    nlme                3.1-164    2023-11-27 [2] CRAN (R 4.4.1)
#>    nloptr              2.1.1      2024-06-25 [1] CRAN (R 4.4.1)
#>    numDeriv            2016.8-1.1 2019-06-06 [1] CRAN (R 4.4.0)
#>    opdisDownsampling   1.0.1      2024-04-15 [1] CRAN (R 4.4.1)
#>    patchwork           1.3.0      2024-09-16 [1] CRAN (R 4.4.1)
#>    pbmcapply           1.5.1      2022-04-28 [1] CRAN (R 4.4.0)
#>    performance       * 0.12.4     2024-10-18 [1] CRAN (R 4.4.1)
#>    pillar              1.9.0      2023-03-22 [1] CRAN (R 4.4.1)
#>    pkgconfig           2.0.3      2019-09-22 [1] CRAN (R 4.4.1)
#>    pracma              2.4.4      2023-11-10 [1] CRAN (R 4.4.1)
#>    qqconf              1.3.2      2023-04-14 [1] CRAN (R 4.4.1)
#>    qqplotr             0.0.6      2023-01-25 [1] CRAN (R 4.4.1)
#>    R6                  2.5.1      2021-08-19 [1] CRAN (R 4.4.1)
#>    rbibutils           2.3        2024-10-04 [1] CRAN (R 4.4.1)
#>    Rcpp                1.0.13     2024-07-17 [1] CRAN (R 4.4.1)
#>    Rdpack              2.6.1      2024-08-06 [1] CRAN (R 4.4.1)
#>    reformulas          0.4.0      2024-11-03 [1] CRAN (R 4.4.1)
#>    reprex              2.1.1      2024-07-06 [1] CRAN (R 4.4.1)
#>    rlang               1.1.4      2024-06-04 [1] CRAN (R 4.4.1)
#>    rmarkdown           2.28       2024-08-17 [1] CRAN (R 4.4.1)
#>    robustbase          0.99-4-1   2024-09-27 [1] CRAN (R 4.4.1)
#>    sandwich            3.1-1      2024-09-15 [1] CRAN (R 4.4.1)
#>    scales              1.3.0      2023-11-28 [1] CRAN (R 4.4.1)
#>    see                 0.9.0      2024-09-06 [1] CRAN (R 4.4.1)
#>    sessioninfo         1.2.2      2021-12-06 [1] CRAN (R 4.4.1)
#>    survival            3.6-4      2024-04-24 [2] CRAN (R 4.4.1)
#>    TH.data             1.1-2      2023-04-17 [1] CRAN (R 4.4.1)
#>    tibble              3.2.1      2023-03-20 [1] CRAN (R 4.4.1)
#>    tidyselect          1.2.1      2024-03-11 [1] CRAN (R 4.4.1)
#>  D TMB                 1.9.15     2024-09-09 [1] CRAN (R 4.4.1)
#>    twosamples          2.0.1      2023-06-23 [1] CRAN (R 4.4.1)
#>    utf8                1.2.4      2023-10-22 [1] CRAN (R 4.4.1)
#>    vctrs               0.6.5      2023-12-01 [1] CRAN (R 4.4.1)
#>    withr               3.0.2      2024-10-28 [1] CRAN (R 4.4.1)
#>    xfun                0.49       2024-10-31 [1] CRAN (R 4.4.1)
#>    xml2                1.3.6      2023-12-04 [1] CRAN (R 4.4.1)
#>    xtable              1.8-4      2019-04-21 [1] CRAN (R 4.4.1)
#>    yaml                2.3.10     2024-07-26 [1] CRAN (R 4.4.1)
#>    zoo                 1.8-12     2023-04-13 [1] CRAN (R 4.4.1)
#> 
#>  [1] C:/Users/user/AppData/Local/R/win-library/4.4
#>  [2] C:/Program Files/R/R-4.4.1/library
#> 
#>  D ── DLL MD5 mismatch, broken installation.
#> 
#> ──────────────────────────────────────────────────────────────────────────────

sj-perry commented Dec 15, 2024

> Does installing {qqplotr} solve the issue?

It sure does! Thanks @mattansb!

mattansb (Member) commented

This means our default qq plot method is broken :(
