-
Hi @clithero, the If you could try with Instead, you can also try to use
-
This is a suggestion related to the following PR (#370). But since we were having some merge issues, for now I figured I would share a gist link to a workaround for estimating simple DDMs without using the HSSM interface, which may be useful for those running into trouble: an example of how to build a PyMC model of the standard DDM with the parameters specified to be estimated hierarchically, at https://gist.github.com/kiante-fernandez/a13986731a361b98d8e409fd51340841
Kiante
-
Thanks, that is very helpful. We know some others who have been successfully using this type of hybrid method while we work out the kinks. But for the record, a few of us have found the setup below to work fine for estimating parameters hierarchically from synthetic data (I recently confirmed that I can recover parameters, including group means and variances, and see appropriate shrinkage and better recovery of individual parameters when there are very low trial counts but multiple subjects, as in the original HDDM paper).
This might not always work for empirical data. Note that for some of the priors I just used normal or half-normal distributions where other forms would probably serve better (e.g. an inverse-logit transform for z), and there are some link-transform settings that can sometimes help, but for the case below everything seemed to work. The priors (including the sigmas) should be chosen to lie in a reasonable range for what can be expected from the data, though most of these are similar to the priors in HDDM.
import numpy as np
import pandas as pd
import hssm

n_subjects = 12  # number of subjects, varied up to 25
n_trials = 30    # number of trials per subject, varied from 20 to 200
sd_v = 0.3       # sd for v's, also used for a
mean_v = 1.25    # mean for v-intercept
sd_tz = 0.1      # sd for t and z
mean_a = 1.5
mean_t = 0.5
mean_z = 0.5

data_list = []
param_list = []
for i in range(n_subjects):
    # Make parameters for subject i
    intercept = np.random.normal(mean_v, sd_v, size=1)
    x = np.random.uniform(-1, 1, size=n_trials)
    y = np.random.uniform(-1, 1, size=n_trials)
    v_x = np.random.normal(0.8, sd_v, size=1)
    v_y = np.random.normal(0.2, sd_v, size=1)
    v = intercept + (v_x * x) + (v_y * y)
    a = np.random.normal(mean_a, sd_v, size=1)
    z = np.random.normal(mean_z, sd_tz, size=1)
    t = np.random.normal(mean_t, sd_tz, size=1)
    true_values = np.column_stack(
        [
            v,
            np.repeat(a, axis=0, repeats=n_trials),
            np.repeat(z, axis=0, repeats=n_trials),
            np.repeat(t, axis=0, repeats=n_trials),
        ]
    )

    # Simulate data
    obs_ddm_reg_v = hssm.simulate_data(model="ddm", theta=true_values, size=1)

    # Store ground-truth params
    param_list.append(
        pd.DataFrame(
            {
                "intercept": intercept,
                "v_x": v_x,
                "v_y": v_y,
                "a": a,
                "z": z,
                "t": t,
            }
        )
    )

    # Append simulated data to list
    data_list.append(
        pd.DataFrame(
            {
                "rt": obs_ddm_reg_v["rt"],
                "response": obs_ddm_reg_v["response"],
                "x": x,
                "y": y,
                "subject": i,
            }
        )
    )

# Make a single dataframe out of the subject-wise datasets
dataset_reg_v_hier = pd.concat(data_list)
dataset_reg_v_hier
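As a small follow-up (my addition, not part of the original recipe), the stored ground-truth parameters can be concatenated the same way, which makes the recovery check after sampling easier; the name true_params and the added subject column are just my own choices:
# Combine the per-subject ground-truth parameters for a recovery check later
true_params = pd.concat(param_list, ignore_index=True)
true_params["subject"] = range(n_subjects)
true_params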
model_reg_v_ddm_hier1 = hssm.HSSM(
    data=dataset_reg_v_hier,
    include=[
        {
            "name": "v",
            "formula": "v ~ 1 + x + y + (1 + x + y | subject)",
            "prior": {
                "Intercept": {"name": "Normal", "mu": 2, "sigma": 3},
                "x": {"name": "Normal", "mu": 0, "sigma": 1},
                "y": {"name": "Normal", "mu": 0, "sigma": 1},
                "1|subject": {
                    "name": "Normal",
                    # using the non-centered approach, so mu's of individual
                    # subject offsets should be 0
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 2},
                },
                "x|subject": {
                    "name": "Normal",
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 1},
                },
                "y|subject": {
                    "name": "Normal",
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 1},
                },
            },
            "link": "identity",
        },
        {
            "name": "t",
            "formula": "t ~ 1 + (1 | subject)",
            "prior": {
                "Intercept": {"name": "HalfNormal", "sigma": 0.4},
                "1|subject": {
                    "name": "Normal",
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 1},
                },
            },
            "link": "identity",
        },
        {
            "name": "z",
            "formula": "z ~ 1 + (1 | subject)",
            "prior": {
                "Intercept": {"name": "HalfNormal", "sigma": 1},
                "1|subject": {
                    "name": "Normal",
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 0.05},
                },
            },
        },
        {
            "name": "a",
            "formula": "a ~ 1 + (1 | subject)",
            "prior": {
                "Intercept": {"name": "HalfNormal", "sigma": 2.5},
                "1|subject": {
                    "name": "Normal",
                    "mu": 0,
                    "sigma": {"name": "HalfNormal", "sigma": 1},
                },
            },
        },
    ],
    noncentered=True,
)
model_reg_v_ddm_hier1
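For completeness, here is a minimal sketch of how one might sample from this model and check recovery; the arviz call and the variable name infer_data are my additions rather than part of the original post:
import arviz as az

# Sample with the default settings; draws, tune, and chains can be adjusted as needed
infer_data = model_reg_v_ddm_hier1.sample()

# Summarize posterior means, HDIs, and R-hats; the group-level estimates can then
# be compared against the ground-truth values stored in true_params above
az.summary(infer_data)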
-
Hello,
We are struggling to get models to work where all of the DDM parameters are estimated hierarchically.
A couple of posts look relevant, but we did not find our solution in them:
Add default model settings that turn every parameter hierarchical #223
add argument to make everything hierarchical (like hddm did by default) #363
On our own data, if we run a model with drift rate v estimated hierarchically, we get reasonable results and R-hats near 1. If we add in non-decision time t to be estimated in a hierarchical way, we get very high R-hats for all t parameters and also problematic estimates elsewhere. We thought it might be our data, but we can replicate this with the Cavanagh data included in the HSSM package. So, this makes us think we are surely doing something wrong!
As the simplest example, running a model along the lines of the sketch below (HSSM version 0.2.0) creates the problem:
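A hedged sketch of that kind of specification, using the Cavanagh dataset that ships with HSSM; the formula and the participant_id grouping column are my assumptions about what was run, not the original code:
import hssm

# Cavanagh et al. example dataset bundled with HSSM
cav_data = hssm.load_data("cavanagh_theta")

# Non-decision time t estimated hierarchically across participants (assumed specification)
model_t_hier = hssm.HSSM(
    data=cav_data,
    include=[
        {
            "name": "t",
            "formula": "t ~ 1 + (1 | participant_id)",
        },
    ],
)
infer_t = model_t_hier.sample()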
We end up with something like the following: very high R-hats for all of the t parameters.
Adding in other parameters (e.g. v) gives similarly problematic results, so long as t is in the model hierarchically.
So, in summary, two questions:
Again, thanks so much for this great package. Apologies for asking a rather basic question!