
code about BN #1

Open
hackhaye opened this issue Jun 7, 2024 · 9 comments

Comments

@hackhaye

hackhaye commented Jun 7, 2024

Thank you very much for your work. I have been following your research recently. May I ask where the code related to Bayesian networks is? I didn't see the code for the Bayesian network model.

@soomin-kim
Collaborator

Hi, thank you for your interest in our research.
In FunProbe, we use a factor graph to represent the Bayesian network, and you can find the relevant definitions in the FunProbe sources.
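
For reference, here is a minimal F# sketch of how a factor graph can encode a Bayesian network. The type names below are hypothetical illustrations, not FunProbe's actual definitions:

// Hypothetical sketch of a factor graph. Variable nodes stand for
// random variables; factor nodes hold the (conditional) probability
// tables relating the variables they connect.
type FGNode =
  | Variable of id: int
  | Factor of id: int * table: float[,]

// The graph itself is just nodes plus edges between variable nodes
// and factor nodes (edges are pairs of node ids).
type FactorGraph =
  { Nodes: FGNode list
    Edges: (int * int) list }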

@hackhaye
Author

Thank you very much. I don't know much about F#; is there any way to visualize the factor graph in your code?

@soomin-kim
Collaborator

soomin-kim commented Jun 29, 2024

Sorry, but FunProbe currently does not have an option for dumping factor graphs. However, you could modify FunProbe to dump the graphs yourself. For example, you can insert the lines below just after ConstraintSolver.fs:L87:

// Visit every successor of src and print the corresponding edge.
solver.Graph.FoldSuccessors src (fun _ dst _ ->
  printfn "Edge: %d -> %d" (Node.id src) (Node.id dst)) ()

@hackhaye
Author

The work is excellent! Could you add some comments to the code and some visualization support, so that developers can continue to build on this work? Thank you!

@soomin-kim
Collaborator

Thank you for the suggestion! I don't have time to work on it right now, but I plan to improve these aspects later.

@hackhaye
Author

I noticed that in the implementation, the shallow constraint is implemented as a one-dimensional tensor while the deep constraint is implemented as a two-dimensional tensor. May I ask the reason for this? Also, when building the graph, a deep constraint points srcNode to the factor and the factor to dstNode. Why is that?

Looking forward to your reply

@soomin-kim
Collaborator

soomin-kim commented Aug 23, 2024

the shallow constraint is implemented as a one-dimensional tensor while the deep constraint is implemented as a two-dimensional tensor

The terms shallow and deep constraint are used internally in FunProbe's implementation. A shallow constraint corresponds to P(X | Y) where Y is observed and X is hidden, whereas a deep constraint corresponds to P(X | Y) where both X and Y are hidden. Since the value of one variable is already known, we don't need a 2-D tensor to represent a shallow constraint. This is a kind of optimization implemented in FunProbe.
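
As an illustration (hypothetical F# code, not FunProbe's), a shallow constraint only needs the slice of the conditional probability table selected by the observed value, while a deep constraint keeps the full table:

// Sketch: p is the strength of the constraint, e.g. 0.9.
// Shallow: Y is observed, so P(X | Y = y) collapses to a 1-D tensor
// over X alone.
let shallow (p: float) = [| p; 1.0 - p |]

// Deep: both X and Y are hidden, so we keep the full 2-D table
// P(X | Y), indexed as table.[x, y].
let deep (p: float) =
  array2D [ [ p; 1.0 - p ]
            [ 1.0 - p; p ] ]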

a deep constraint points srcNode to the factor and the factor to dstNode

There are two kinds of nodes in FunProbe: variable nodes and factor nodes. Variable nodes represent the probability distribution of each random variable, and factor nodes encode the conditional probabilities between random variables. So srcNode -> factor -> dstNode represents a conditional relationship between srcNode and dstNode.
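
In code, adding a deep constraint would look roughly like the following sketch, reusing the hypothetical FactorGraph type from earlier in this thread (FunProbe's actual API differs):

// Sketch: connect two hidden variables through a fresh factor node
// holding the conditional probability table, so that messages flow
// srcNode -> factor -> dstNode. Assumes node ids are assigned
// sequentially.
let addDeepConstraint (g: FactorGraph) (srcId: int) (dstId: int) table =
  let factorId = List.length g.Nodes
  { Nodes = Factor (factorId, table) :: g.Nodes
    Edges = (srcId, factorId) :: (factorId, dstId) :: g.Edges }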

I hope these answers help. Thank you for your interest!

@hackhaye
Author

Thank you for your patient guidance!
I understand what you said about shallow and deep constraints, and I noticed two things:

  1. The initialized tensors differ slightly under different hints. There are four types of 1-D tensors: Positive, Negative, Neutral, and AlmostSure. I understand that Positive and Negative are mentioned in the paper, but what do Neutral and AlmostSure mean? Why not replace them with Positive or Negative tensors?
  2. Why does a tensor have four elements?

Thank you again for your patient guidance!

@soomin-kim
Collaborator

What do Neutral and AlmostSure mean? Why not replace them with Positive or Negative tensors?

A Neutral tensor corresponds to P(X = 1 | Y) = 1/2, which means it does not contain any useful probabilistic information; it is used to initialize values in the Belief Propagation algorithm. An AlmostSure tensor corresponds to P(X = 1 | Y) = 0.999 and is used for Hints 1 and 2 (please see Table 1 and Section 4.2.1 of my paper). I set 0.999 instead of 1.0 for some reason when I developed FunProbe, but I cannot remember why at this point.

Since their usages are different from those of Positive or Negative tensors, they cannot be replaced.

Why does a tensor have four elements?

Even though the tensors have four elements, they are used as 1-D tensors (the last two elements are zero). Having four elements is just for development convenience.
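
Concretely, the four kinds of 1-D tensors could look like the sketch below. Only the Neutral = 1/2 and AlmostSure = 0.999 values come from the answer above; the Positive/Negative strength p is a free parameter here, not FunProbe's actual constant:

// Sketch: each tensor has four slots for implementation convenience,
// but only the first two are meaningful; the last two stay zero.
let neutral = [| 0.5; 0.5; 0.0; 0.0 |]          // P(X = 1 | Y) = 1/2
let almostSure = [| 0.999; 0.001; 0.0; 0.0 |]   // P(X = 1 | Y) = 0.999
let positive p = [| p; 1.0 - p; 0.0; 0.0 |]     // biases X toward 1
let negative p = [| 1.0 - p; p; 0.0; 0.0 |]     // biases X toward 0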

Thank you.
