[DRAFT] adding support for the {mmrm} package #1000
Conversation
Currently my test case fails irrespective of whether I set the option or not.
I did not open a separate issue in the bug tracker since this is likely a bug caused by my faulty implementation of the {mmrm} interface.
Thanks for taking the time to do this, I really appreciate it! The …
Thanks, just adding some thoughts here. Does your example work when you just supply the data frame directly: newdata = fev_data? The question to me is what the general default strategy regarding missing data should be. The element …
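A minimal sketch of that suggestion, assuming the {mmrm} extension methods from this PR are loaded and using the fev_data dataset shipped with {mmrm}:

library(marginaleffects)
library(mmrm)

fit <- mmrm(
  FEV1 ~ RACE + SEX + ARMCD * AVISIT + us(AVISIT | USUBJID),
  data = fev_data
)

# pass the full data frame explicitly, so the missing-data behaviour is decided
# by the user rather than by the default newdata extraction
p <- predictions(fit, newdata = fev_data)
head(p)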
Ideally, on my end, I would like to go through … Also, looking at the code quickly, I suspect that the reason predictions don't work in the current PR is that … It might also be worth checking if you really need all those methods. Sometimes, the defaults work perfectly well. Just try something like:

library(marginaleffects)
get_predict.default(fit)
Thanks for the feedback @vincentarelbundock, I made some adjustments and am starting to wrap my head around this. If I see it correctly, we still have two things to address here.
I was looking at this PR today and noticed that the current version does not return standard errors. The underlying reason for this is that the Jacobian is full of zeros:

library(marginaleffects)
library(mmrm)

fit <- mmrm(
  FEV1 ~ RACE + SEX + ARMCD * AVISIT + us(AVISIT | USUBJID),
  data = fev_data,
  weights = fev_data$WEIGHTS,
  method = "Kenward-Roger"
)

p <- predictions(fit)
attr(p, "jacobian")

The reason for that is that … One possibility is that this is related to the problem with … I don't actually know if this is related, but I noticed that both …
Thanks @vincentarelbundock and @kkmann and all, just looking at this thread as a bystander - yes, structurally the two packages will work similarly. However, I also wonder whether in this case, because we use TMB, there would be much better automatic differentiation ways to generate the Jacobian you need.
@danielinteractive Thanks for chiming in! Yes, that would certainly be more efficient. I think it makes sense for very common quantities like predictions. But the issue is that … My guess is that this would require a ton of work on the backend. But then again, I don't know TMB at all, so...
PR created at kkmann#1 to fix the issue that the Jacobian is 0. The reason is that predict() in mmrm always returns a conditional prediction (conditional on the observed values).
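A generic illustration of why conditioning on the observed values leads to a zero Jacobian (this is a finite-difference sketch with a hypothetical prediction function, not marginaleffects' or mmrm's actual internals):

# if the returned prediction ignores the coefficient vector because it is the
# observed outcome itself, every numerical derivative (Jacobian column) is zero
predict_conditional <- function(beta, observed) observed
beta <- c(1, 2)
obs  <- c(10, 20, 30)
h    <- 1e-6
(predict_conditional(beta + c(h, 0), obs) - predict_conditional(beta, obs)) / h
# -> 0 0 0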
Sorry I left this PR to linger for so long. I looked into this today, and things mostly seem to work, but there are two lingering issues. The first is probably easy to solve. Not sure about the second.

library("mmrm")

fit <- mmrm(
  formula = FEV1 ~ RACE + SEX + ARMCD * AVISIT + us(AVISIT | USUBJID),
  data = fev_data
)
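For reference, continuing from the fit above, the kinds of calls this PR aims to support would look roughly like this (a sketch; output not verified here):

library(marginaleffects)

# unit-level predictions and an average contrast on the mmrm fit
head(predictions(fit))
avg_comparisons(fit, variables = "ARMCD")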
Thanks @vincentarelbundock, looking into this now on the …
Thanks again @vincentarelbundock! While we will be able to fix the first problem quickly (indeed this is a bug in …
* issue1204 * test fix
* support glmtoolbox::glmgee() * fixup test * test fixup * supported models
Cool. Thanks for taking a look and finding a solution for issue 1. I just spent a bunch of time trying to find a place to insert a hack that would append some random string to the subject ID column. Unfortunately, it's very hard. The problem is that this stacking logic is very "deep" in the … So in order to support this, I would have to insert complex hacks in a dozen places in the code base, and I don't think that's sensible design for software that wants to support 100+ model classes. To be sure, that's less convenient, but maybe it's not the end of the world that users have to define a single contrast they are interested in, rather than get all of them at once. This can probably be documented.
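To make the "single contrast" workaround concrete, here is an illustrative sketch (the model and variable names reuse the fev_data example above; treat the calls as assumptions about the eventual interface, not verified behaviour):

library(marginaleffects)
library(mmrm)

fit <- mmrm(FEV1 ~ RACE + SEX + ARMCD * AVISIT + us(AVISIT | USUBJID), data = fev_data)

# request one focal contrast at a time instead of the full default set
avg_comparisons(fit, variables = "ARMCD")
avg_comparisons(fit, variables = list(AVISIT = "pairwise"))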
Hi @vincentarelbundock, I discussed the issue openpharma/mmrm#461 further with @clarkliming, and it seems that a more principled solution to that will likely also fix issue 2 here. So let us first sort that out, and maybe both issues 1 and 2 will be solved in …
Oh wow, OK. I really appreciate your help in working on this. It'll be really cool if we can push this through the finish line.
Hi,
this is an attempt to implement support for the {mmrm} package https://github.com/openpharma/mmrm following the steps outlined in the {marginaleffects} documentation https://marginaleffects.com/vignettes/extensions.html#marginaleffects-extension.
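For readers who have not seen the extension mechanism, the vignette linked above asks packages to supply a handful of S3 methods. A rough sketch of what those could look like for {mmrm} is below; the coefficient slot name (beta_est) is an assumption for illustration, not necessarily what this PR does:

library(marginaleffects)
library(mmrm)

get_coef.mmrm <- function(model, ...) coef(model)

get_vcov.mmrm <- function(model, ...) vcov(model)

set_coef.mmrm <- function(model, coefs, ...) {
  # overwrite the stored coefficients so downstream predictions reflect the
  # perturbed values needed for the numerical Jacobian
  model$beta_est <- coefs   # slot name is hypothetical
  model
}

get_predict.mmrm <- function(model, newdata, ...) {
  data.frame(
    rowid    = seq_len(nrow(newdata)),
    estimate = as.numeric(predict(model, newdata = newdata))
  )
}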
I am still unclear about the role of the option setting.
Is it expected that the user triggers this every time they want to use {marginaleffects} with {mmrm}? This does not seem to be the case for other extensions.
My test case is …
Also tagging @lang-benjamin @danielinteractive