code about BN #1
Hi, thank you for your interest in our research.
Thank you very much. I don't know much about F#. Is there any way to visualize the factor graph in your code?
Sorry, but the current FunProbe does not have an option for dumping factor graphs. However, you may modify FunProbe to dump the graphs. For example, you can insert the below lines just after
This work is excellent! Could you add some comments to the code and some visualization work, so that developers can continue to build on it? Thank you!
Thank you for the suggestion! I don't have the time to work on it right now, but I plan to improve the mentioned aspects later.
I noticed that in the implementation, the shallow constraint is implemented as a one-dimensional tensor and the deep constraint as a two-dimensional tensor. May I ask what the reason for this is? Also, when building the graph, the deep constraint points srcNode to a factor and the factor to dstNode. Why? Looking forward to your reply.
The terms shallow and deep constraints are used internally in FunProbe's implementation. The shallow constraint corresponds to P(X | Y), where Y is observed and X is hidden, whereas the deep constraint corresponds to P(X | Y), where both X and Y are hidden. Since the value of one variable is already known, we don't need to have a 2D tensor to represent shallow constraints. This is a kind of optimization implemented in FunProbe.
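To make the dimensionality difference concrete, here is a minimal sketch (plain Python/NumPy rather than FunProbe's actual F# code; all names and probability values are hypothetical):

```python
import numpy as np

# Shallow constraint: P(X | Y) with Y observed. Since Y's value is already
# known, only a distribution over the binary variable X is needed -- a 1-D
# tensor of length 2.
shallow = np.array([0.1, 0.9])  # hypothetical: P(X=0)=0.1, P(X=1)=0.9

# Deep constraint: P(X | Y) with both X and Y hidden. Every combination of
# (Y, X) values needs an entry -- a 2-D tensor of shape (2, 2).
deep = np.array([
    [0.8, 0.2],  # hypothetical: P(X=0 | Y=0), P(X=1 | Y=0)
    [0.3, 0.7],  # hypothetical: P(X=0 | Y=1), P(X=1 | Y=1)
])

# The shallow tensor is a single distribution; each row of the deep tensor
# is itself a conditional distribution over X.
assert np.isclose(shallow.sum(), 1.0)
assert np.allclose(deep.sum(axis=1), 1.0)
```

Collapsing the observed dimension is what makes the shallow case a 1-D tensor: there is no need to store rows for values of Y that can never occur.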
There are two kinds of nodes in FunProbe: variable nodes and factor nodes. Variable nodes represent the probability distribution of each random variable, and factor nodes represent the conditional probability between random variables. So srcNode -> factor -> dstNode represents a conditional relationship between srcNode and dstNode. I hope these answers help. Thank you for your interest!
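A minimal sketch of that structure (hypothetical names, plain Python rather than FunProbe's F#): the factor node sits between the two variable nodes, and the srcNode -> factor -> dstNode edges record which variable conditions which.

```python
# A tiny factor-graph fragment: two variable nodes connected through one
# factor node holding the conditional table P(dst | src). All names and
# numbers are illustrative, not taken from FunProbe.
graph = {
    "nodes": {
        "src": {"kind": "variable"},          # hidden variable Y
        "dst": {"kind": "variable"},          # hidden variable X
        "f":   {"kind": "factor",
                # rows indexed by src's value, columns by dst's value
                "table": [[0.8, 0.2],
                          [0.3, 0.7]]},
    },
    # srcNode -> factor -> dstNode: the factor mediates the dependency
    "edges": [("src", "f"), ("f", "dst")],
}

def neighbors(g, n):
    """Nodes adjacent to n (edges are treated as undirected during BP)."""
    return [b for a, b in g["edges"] if a == n] + \
           [a for a, b in g["edges"] if b == n]

# The factor is adjacent to exactly the two variables it relates.
assert sorted(neighbors(graph, "f")) == ["dst", "src"]
```

The edge direction only fixes which axis of the factor's table belongs to which variable; Belief Propagation itself passes messages in both directions along each edge.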
Thank you for your patient guidance!
A Neutral tensor corresponds to P(X = 1 | Y) = 1/2, which means it does not contain any useful probabilistic information. Its usage is to initialize values in the Belief Propagation algorithm. An AlmostSure tensor corresponds to P(X = 1 | Y) = 0.999. It is for Hints 1 and 2 (please see Table 1 and Section 4.2.1 of my paper). I set 0.999 instead of 1.0 for some reason when I developed FunProbe, but I cannot remember why at this time. Since their usages are different from those of Positive or Negative tensors, they cannot be replaced.
Even though they have four elements, they are used as 1-D tensors (the last two elements are zeros). Having four elements is just for development convenience. Thank you.
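The two special tensors described above can be sketched as follows (a hypothetical illustration: the 1/2 and 0.999 values come from the comment above, but the four-slot array layout and names are assumptions):

```python
import numpy as np

# Neutral: P(X = 1 | Y) = 1/2 -- carries no probabilistic information and
# is used to initialize message values in Belief Propagation. Four slots
# are allocated, but only the first two are used (1-D usage).
neutral = np.array([0.5, 0.5, 0.0, 0.0])

# AlmostSure: P(X = 1 | Y) = 0.999 -- near-certain evidence for Hints 1
# and 2. (Using 0.999 rather than 1.0 keeps messages strictly positive,
# one plausible motivation, though the author no longer recalls the exact
# reason.)
almost_sure = np.array([0.001, 0.999, 0.0, 0.0])

# Both act as distributions over the binary variable X in their first
# two slots; the trailing zeros are padding for development convenience.
for t in (neutral, almost_sure):
    assert np.isclose(t[:2].sum(), 1.0)
    assert np.all(t[2:] == 0.0)
```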
Thank you very much for your work. I have been following your research recently. May I ask where the code related to Bayesian networks is? I didn't see the code for the Bayesian network model.