Hello everyone,

I have recently begun investigating the possibility of using Allegro to simulate some materials of interest. I am a bit confused about how the dimensions of the various MLPs match up, and I have a few questions that will hopefully be simple to answer.
The initial scalar feature embedding feeds Z_i, Z_j, and N_{basis} radial basis functions into the two-body MLP to generate the x^{ij,L=0} term. How do we set N_{basis} in the YAML settings file to control how many basis functions are used?
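(My assumption is that this is the `num_basis` key that appears in the NequIP-style example configs, but please correct me if the Allegro configs use a different setting:)

```yaml
# my assumption for the relevant setting; num_basis is the key used
# for the Bessel radial basis in the NequIP-style example configs
num_basis: 8
```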
I am also a bit confused about the various MLP dimensions. For instance, suppose I have the following dimensions defined:
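```yaml
# I believe this is the relevant key from the Allegro example config,
# though the key name is my assumption
two_body_latent_mlp_latent_dimensions: [128, 256, 512, 1024]
```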
If I understand this correctly, this defines an MLP with four hidden layers of 128, 256, 512, and 1024 neurons, respectively. But how are the input and output layers defined? Is 1024 the output dimension, so that x^{ij,L=0} contains 1024 features? And what determines the input dimension?
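To show where my confusion lies, here is how I currently picture this MLP being built (a rough PyTorch sketch; the function name, the SiLU nonlinearity, and the input composition are my guesses, not Allegro's actual code):

```python
import torch.nn as nn

def build_two_body_mlp(input_dim: int, hidden_dims=(128, 256, 512, 1024)):
    """Hypothetical sketch: stack Linear layers at the widths from the config."""
    layers = []
    prev = input_dim
    for width in hidden_dims:
        layers.append(nn.Linear(prev, width))
        layers.append(nn.SiLU())  # guessing at the nonlinearity here
        prev = width
    return nn.Sequential(*layers)

# My guess at the input: two one-hot species vectors plus N_basis radial
# features, so input_dim = 2 * n_species + n_basis, and the last width (1024)
# would then be the size of x^{ij,L=0}. Is that right?
```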
Regarding the initial tensor feature embedding, the equation is given (as I read it from the paper) as

V_{n,l,p}^{ij,L=0} = w_{n,l,p}^{ij,L=0} Y_l(\hat{r}_{ij}), with w_{n,l,p}^{ij,L=0} = (MLP_{embed}(x^{ij,L=0}))_{n,l,p},

so the w_{n,l,p}^{ij,L=0} terms are defined by the MLP_{embed} neural network. Since this network takes the same argument x^{ij,L=0} for every (n, l, p) combination, does this mean that each (n, l, p) combination has a unique MLP_{embed}, with weights and biases learned for that specific (n, l, p) choice?
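Or, in code terms, is it one shared network whose output is reshaped into the full set of weights, along these lines (a toy sketch; all shapes are illustrative assumptions):

```python
import torch

num_edges = 10      # number of ij pairs (illustrative)
latent_dim = 1024   # width of x^{ij,L=0}
n_channels = 8      # channel index n
n_lp = 6            # number of (l, p) combinations (illustrative)

# One shared readout producing every w_{n,l,p} at once, reshaped afterwards,
# rather than a separate MLP_embed per (n, l, p) combination.
embed = torch.nn.Linear(latent_dim, n_channels * n_lp)

x = torch.randn(num_edges, latent_dim)             # x^{ij,L=0}
w = embed(x).reshape(num_edges, n_channels, n_lp)  # w^{ij,L=0}_{n,l,p}
```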
It is my understanding that the V^{ij} terms are higher-order objects containing irreducible representations of SO(3) up to l_{max}. Why is there a need for n channels containing multiple "duplicate" Y_l^m terms of the same order? Wouldn't a single set of Y_l^m terms be sufficient, given that they are multiplied by the learned weights w_{n,l,p}^{ij,L=0}? What is the intuition behind needing multiple channels, and how many channels are "sufficient" for the representation?
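To make the channel question concrete, here is a toy picture of what I mean by "duplicate" Y_l^m terms (purely illustrative shapes, and a single l shown for simplicity):

```python
import torch

num_edges, n_channels, num_m = 10, 8, 5   # e.g. the five m components of l = 2

Y = torch.randn(num_edges, num_m)          # one set of Y_l^m per edge
w = torch.randn(num_edges, n_channels, 1)  # one learned scalar weight per channel
V = w * Y.unsqueeze(1)                     # shape (num_edges, n_channels, num_m)

# Every channel here is just a rescaled copy of the same Y_l^m, so I am unsure
# what the extra channels buy us before the tensor products start mixing them.
```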
Thanks in advance.