Auto-regression #119
Well, I'm doing an AR-HMM for protein conformation in my dissertation and I'm using your code as a base. Initially, I'm only doing an AR(1)-HMM(1) with a categorical observation distribution for simplicity's sake (as I only need a proof of concept), but I'm willing to generalize to an AR(n)-HMM(m) later.

Right now I'm trying to adapt your Viterbi for an n-best Viterbi, and afterward I want to adapt both so that they don't go from 1 to N, but from k to N, given the k-1 previous values of the parameter process. (Of course, the second one is only different from the regular Viterbi if the MC is of higher order and/or non-homogeneous.) If I understood your code correctly, I can make a non-homogeneous MC using control dependency, by adding a control variable to the transition matrix.

To be honest, right now my code is pure spaghetti and still very experimental, but if you have any interest in this, give me a shout and we can try to make this work.
Hi @fausto-mpj, glad to hear you're using HiddenMarkovModels.jl! It should be possible to adapt my code to take the previous observation into account. After that, I don't know whether the jump in complexity from memory-1 to memory-k is justified. In practice, you can always simulate memory-k with memory-1 by encoding the state (or the observation, or both) as a tuple including previous values. What do you think?
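For example, a memory-2 observation process can be viewed as memory-1 by bundling each observation with its predecessor. A minimal sketch (the sequence and names below are made up for illustration, nothing from the package):

```julia
obs_seq = [1, 3, 2, 2, 1]

# Each "observation" becomes the pair (previous value, current value);
# the first entry needs a placeholder since there is no previous value yet.
pairs_seq = [(t == 1 ? missing : obs_seq[t-1], obs_seq[t]) for t in eachindex(obs_seq)]
```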
To be honest, I didn't check if your package supported sparse matrices, but I just found out that it does. Nice! Definitely, a higher-order MC can be rewritten as a first-order one by expanding the state space, as you suggest.

In an AR(1)-HMM(1), the observation at time t depends both on the current hidden state and on the previous observation. For a sequence of length N, the factorization theorem tells us that

$$p(x_{1:N}, y_{1:N}) = p(x_1)\, p(y_1 \mid x_1) \prod_{t=2}^{N} p(x_t \mid x_{t-1})\, p(y_t \mid x_t, y_{t-1}).$$

For some of these quantities the estimation is quite similar to what Forward and Forward-Backward are doing. However, I don't know if the existing routines can be reused directly or need to be rewritten.

Right now I'm using the previous observation as a control variable, but I will look into your suggestion to use tuples. Afterwards, I have some questions to ask, because I'm getting slightly different results than I expected.
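To make the connection with Forward concrete, here is a rough sketch of the recursion I have in mind for the AR(1)-HMM(1) likelihood. All the names and arrays below are illustrative stand-ins, not objects from the package:

```julia
# init[i]             = p(x_1 = i)
# trans[i, j]         = p(x_t = j | x_{t-1} = i)
# emit1[i, o]         = p(y_1 = o | x_1 = i)
# emit[i, o_prev, o]  = p(y_t = o | x_t = i, y_{t-1} = o_prev)
function ar_loglikelihood(init, trans, emit1, emit, obs_seq)
    α = init .* emit1[:, obs_seq[1]]            # α_1(i) = p(x_1 = i) p(y_1 | x_1 = i)
    logL = log(sum(α))
    α ./= sum(α)                                # normalize to avoid underflow
    for t in 2:length(obs_seq)
        o_prev, o = obs_seq[t-1], obs_seq[t]
        α = (trans' * α) .* emit[:, o_prev, o]  # predict, then reweight by the AR emission
        logL += log(sum(α))
        α ./= sum(α)
    end
    return logL
end
```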
My proposal is to handle the AR(1)-HMM(1) case in the package itself, and then let users translate from AR(n)-HMM(m) to AR(1)-HMM(1) by tuple-ifying whatever they need to. We will be able to do this for inference without too much trouble. The only requirement is to generalize the observation distribution interface so that it can depend on the previous observation.

I don't think we'll be able to code a generic learning function though, for the same reason as with controls: the M update is no longer explicit with AR dependencies, except in very special cases (discrete observations would probably work). So we might have to accept that the user needs to write the M step themselves.
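To make the special case concrete, here is a rough sketch of the explicit M update for categorical AR(1) emissions. Everything below is illustrative (the state posteriors `γ` would come from the E step); it is not an actual package function:

```julia
# γ[i, t] = posterior probability of state i at time t (from the E step)
function m_step_ar_emissions(γ, obs_seq, n_states, n_symbols)
    counts = zeros(n_states, n_symbols, n_symbols)   # counts[i, o_prev, o]
    for t in 2:length(obs_seq)
        counts[:, obs_seq[t-1], obs_seq[t]] .+= γ[:, t]
    end
    # Normalize over the current symbol; unseen (state, o_prev) pairs would need
    # smoothing in practice to avoid division by zero.
    return counts ./ sum(counts; dims=3)
end
```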
Right, I've been tinkering with the code and hit some obstacles, so I might need some help. I've added an optional parameter for the previous observation. The main problem I'm facing is with the observation distributions: depending on the kind of distribution, the previous observation has to enter in a different way, and handling each case separately gets confusing fast. Ideally, there should be a way that works for both cases. I was looking at Distributions.jl for ideas.
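Just to illustrate what I mean by a way that works for both cases: something like a function `prev_obs -> distribution` per state would let discrete and continuous emissions share the same code path. This is a hypothetical sketch, not code from my fork:

```julia
using Distributions

# Discrete case: the categorical weights depend on the previous symbol.
discrete_emission(prev_obs) = Categorical(prev_obs == 1 ? [0.7, 0.2, 0.1] : [0.1, 0.2, 0.7])

# Continuous case: an AR(1)-style Gaussian centered on the previous value.
continuous_emission(prev_obs) = Normal(0.8 * prev_obs, 1.0)

logpdf(discrete_emission(2), 3)        # log p(y_t = 3 | y_{t-1} = 2)
logpdf(continuous_emission(-0.5), 0.0) # log density at 0 given y_{t-1} = -0.5
```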
As you might have noticed, the inference code in this package is completely agnostic to what the observation distributions actually are, as long as they implement the expected interface (sampling and log-density evaluation). I also don't think we should modify the built-in types for this.

I created a branch where the interface becomes `obs_distributions(hmm, control, prev_obs)`, with sensible fallbacks. I have started to adapt the main inference routines to pass the previous observation to `obs_distributions`. I don't have a good answer yet for what happens at the first time step. What do they usually do in the literature?
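To give an idea of the user-facing side, here is a rough sketch of a custom AR(1)-HMM(1) type under this proposed interface. The struct, its fields, and the three-argument method are illustrative: they follow the idea from this thread, not code that exists in the released package.

```julia
using HiddenMarkovModels, Distributions
import HiddenMarkovModels: initialization, transition_matrix, obs_distributions

struct ARCategoricalHMM <: AbstractHMM
    init::Vector{Float64}        # initial state distribution
    trans::Matrix{Float64}       # state transition matrix
    emit::Array{Float64,3}       # emit[state, prev_obs, :] = categorical weights
    first_emit::Matrix{Float64}  # first_emit[state, :] = weights when there is no prev_obs
end

initialization(hmm::ARCategoricalHMM) = hmm.init
transition_matrix(hmm::ARCategoricalHMM) = hmm.trans

# Proposed extra argument: the previous observation (`missing` at t = 1).
function obs_distributions(hmm::ARCategoricalHMM, control, prev_obs)
    if prev_obs === missing
        return [Categorical(hmm.first_emit[i, :]) for i in axes(hmm.first_emit, 1)]
    else
        return [Categorical(hmm.emit[i, prev_obs, :]) for i in axes(hmm.emit, 1)]
    end
end
```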
Nice. I've just cloned the branch and will look into it. As for the first time step, the literature has a few different conventions.
I believe that encouraging the user to provide a separate distribution for the first observation is probably the cleanest option.
It would be interesting to see if we can implement HMMs where the emission also depends on the previous emission.