I am a big fan of what we have managed to do with the tools that represent and manipulate discrete distributions. The `distr_of_func` and `expect` functions are intuitive, expressive, and flexible, even more so when used with labeled distributions.
One limitation of the toolkit is that we do not have similar tools for variables/processes with memory, i.e., Markov chains. I think this is because in most of our models the income process has a unit root and, after normalizing appropriately, the distribution of shocks does not depend on current or past states.
Models in which some process $z_t$ (income, health) follows an AR(1) process discretized into a Markov chain are common. When solving these models, one often needs to calculate objects like $E[f(z_{t+1})|z_t]$ for every possible $z_t$, knowing that the distribution of $z_{t+1}$ depends on $z_t$. Our current `Distribution.expect(lambda z_tp1: f(z_tp1))` doesn't allow us to calculate that type of object conveniently, because it is the distribution (not the function) that depends on $z_t$. I think the current solution for this type of thing is just to carry a Python list of `Distribution` objects, one for each $z_t$, and iterate over them as needed.
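To make the workaround concrete, here is a minimal sketch of what "a list of distributions, one per state" looks like for a small hypothetical 3-state chain (the transition matrix, grid, and function below are made up for illustration, using plain numpy rather than the toolkit's `Distribution` class):

```python
import numpy as np

# Hypothetical 3-state chain for z_t (e.g. a discretized AR(1)):
# z_grid holds the state values, P[i, j] = Prob(z_{t+1} = z_grid[j] | z_t = z_grid[i]).
z_grid = np.array([-1.0, 0.0, 1.0])
P = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
])

def f(z):
    return z ** 2  # some function of next period's state

# Current workaround: one conditional distribution (probs, values) per z_t,
# iterated over to get E[f(z_{t+1}) | z_t] for each state.
conditional_dists = [(P[i], z_grid) for i in range(len(z_grid))]
cond_expectations = np.array([probs @ f(vals) for probs, vals in conditional_dists])

# The same object, computed in one vectorized step from the transition matrix:
assert np.allclose(cond_expectations, P @ f(z_grid))
```

The loop and the matrix product give the same vector; the feature request below is essentially about packaging the vectorized version behind a convenient interface.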
It would be very convenient if we could have some `MarkovChain` object that allowed vectorized operations like, say, `MarkovChain.expect(lambda z_tp1: f(z_tp1))` to return a vector with the expectation calculated conditional on each value of $z_t$, or `MarkovChain.expect(lambda z_tp1: f(z_tp1), current=x)` to calculate the expectation conditional on $z_t = x$.
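A minimal sketch of what such an object could look like (the class name `MarkovChain`, the `expect` method, and the `current` keyword are all hypothetical, not existing toolkit API; the sketch uses an `expect` method rather than making the object callable):

```python
import numpy as np

class MarkovChain:
    """Hypothetical discrete Markov chain with conditional expectations."""

    def __init__(self, transition_matrix, state_values):
        self.P = np.asarray(transition_matrix)      # P[i, j] = Prob(j | i)
        self.vals = np.asarray(state_values)        # grid of states z

    def expect(self, func, current=None):
        """E[func(z_{t+1}) | z_t], vectorized over all current states.

        With current=None, returns one entry per possible z_t;
        with current=x, returns the scalar expectation given z_t = x.
        """
        cond = self.P @ func(self.vals)             # vector, one entry per z_t
        if current is None:
            return cond
        i = int(np.argmax(self.vals == current))    # locate z_t = current on the grid
        return cond[i]

# Usage with a made-up 3-state chain:
z_grid = np.array([-1.0, 0.0, 1.0])
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
mc = MarkovChain(P, z_grid)
all_states = mc.expect(lambda z: z ** 2)            # vector over every z_t
one_state = mc.expect(lambda z: z ** 2, current=0.0)  # scalar for z_t = 0
```

This would replace the list-of-`Distribution`s pattern with a single object that knows the transition structure.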