
Algorithm doesn’t capture Transfer Entropy when the lag is strictly above 1 #7

Open
HaskDev0 opened this issue Jul 13, 2024 · 4 comments

Comments

@HaskDev0

Dear Contributors/community,

I noticed some unwanted behavior (as I would call it) when applying the te_compute function to the following case:

Assume X is some random time series, and define Y as np.roll(X, -1), so that Y at time t equals X at time t+1, i.e. Y carries the information of X one step ahead. Then te_compute(X, Y, embedding=1) computes a transfer entropy that is fairly high, which is expected.

Now, if we compute te_compute(X, Y, embedding=2), it still sees that there is a transfer of information from Y to X.

But if I define Y to be np.roll(X, -2), then te_compute(X, Y, embedding=2) does not see the information transfer, which is reflected in the output value being close to 0 or even negative.
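
For reference, a minimal sketch of the calls described above (the import path is a guess on my side; adjust it to wherever te_compute lives in this package):

```python
import numpy as np
# Assumption: adjust this import to the actual module layout of the package.
from te_compute import te_compute

rng = np.random.default_rng(0)
X = rng.normal(size=5000)        # some random time series

Y = np.roll(X, -1)               # Y leads X by one step
print(te_compute(X, Y, embedding=1))   # fairly high, as expected
print(te_compute(X, Y, embedding=2))   # still sees the transfer

Y2 = np.roll(X, -2)              # Y2 leads X by two steps
print(te_compute(X, Y2, embedding=2))  # close to 0 or even negative
```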

Does somebody know where the problem might be?

@Gilnore

Gilnore commented Nov 12, 2024

I was looking over the code and it seems they didn't include any lags of Y, so the history of Y isn't accounted for at all.

I was going to raise an issue about that and ask whether it mattered; it seems your question has answered mine.

@HaskDev0
Author

@Gilnore, could you show that part of the code? I still think lag = 1 is accounted for, because the result matches what is expected.

Or do you already have any ideas on how to include more lags in the code?

@Gilnore

Gilnore commented Nov 14, 2024

> @Gilnore, could you show that part of the code? I still think lag = 1 is accounted for, because the result matches what is expected.
>
> Or do you already have any ideas on how to include more lags in the code?

I looked at helper.py. If you interpret the k in the xky space as the lags of x, take x and y as what they are, and substitute the word "lag" wherever they say "embedding", it becomes clear.

However, I am currently stumped by the need to use the conditional entropy H(X(t) | X(hist), Y(hist)): they appear to do the search in the space encompassing all points, while having a joint probability means you look at the points in the intersection. Since I don't know how to construct a set that represents the elements occurring with probability P(X(t) | X(hist)), and there's no escaping that because of the joint-space issue, I don't have any ideas on where to add the extra lags.

Edit: I found a paper that writes transfer entropy as a sum of four different entropies that don't need conditional probabilities; I believe that would clear things up.

https://www.researchgate.net/publication/45641250_Transfer_entropy-a_model-free_measure_of_effective_connectivity_for_the_neurosciences

I read up on information theory a bit and found this is just an application of the chain rule for Shannon entropy. That means we'll need to search in a joint space of X(t), X, and Y.
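
For concreteness, here is a rough sketch of that four-entropy form with a naive histogram estimator and single-step histories (this is not the estimator this package uses, just an illustration of the decomposition):

```python
import numpy as np

def joint_entropy(*cols, bins=16):
    """Plug-in Shannon entropy (in nats) of the joint histogram of the given columns."""
    hist, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def te_y_to_x(x, y, lag=1, bins=16):
    """TE(Y -> X) = H(X_hist, Y_hist) - H(X_t, X_hist, Y_hist)
                  + H(X_t, X_hist)    - H(X_hist),
    here with a single lagged value standing in for the 'history' of each series."""
    x_t, x_hist, y_hist = x[lag:], x[:-lag], y[:-lag]
    return (joint_entropy(x_hist, y_hist, bins=bins)
            - joint_entropy(x_t, x_hist, y_hist, bins=bins)
            + joint_entropy(x_t, x_hist, bins=bins)
            - joint_entropy(x_hist, bins=bins))
```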

@Gilnore

Gilnore commented Nov 17, 2024

> @Gilnore, could you show that part of the code? I still think lag = 1 is accounted for, because the result matches what is expected.
>
> Or do you already have any ideas on how to include more lags in the code?

Try replacing the currently used y with y and its embeddings everywhere. You might need to adjust some dimensions, but it should work out. I realized this after looking up what Takens' embedding is.
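
Roughly what I have in mind, as a plain Takens-style delay-embedding sketch (the helper name is just for illustration, it is not from this repo):

```python
import numpy as np

def delay_embed(series, dim, tau=1):
    """Row i is [series[t], series[t - tau], ..., series[t - (dim - 1) * tau]]
    for t = i + (dim - 1) * tau, i.e. each point together with its lagged copies."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[(dim - 1 - j) * tau:(dim - 1 - j) * tau + n]
                            for j in range(dim)])

# Build the lagged block for y the same way as for x, then pass the whole
# y block to the estimator wherever a single y column is used now; the
# dimensions of the joint space grow accordingly.
```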
