Hello, I'm trying to use ActNN with MADDPG (a multi-agent RL algorithm). The model has only 3 layers with ReLU activations. Can you tell us whether this mechanism still gives memory savings with smaller models?
When I train MADDPG, almost 10 GB of memory gets used, so I wanted to try some compression. It would be a great help if you could provide any insight on how to test it with MADDPG.
You can try following the usage instructions and replacing the layers in your model with ActNN layers. Start with a higher bit width and check whether the lossy compression hurts the reward.
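To see why starting with more bits is the safe default, here is a minimal, framework-free sketch of the idea behind ActNN's lossy activation compression: uniform b-bit quantization of activations, where the worst-case error shrinks as the bit width grows. The `quantize` helper and the `[-1, 1]` range are illustrative assumptions, not ActNN's actual API (ActNN provides drop-in layers such as `QLinear`/`QReLU` per its README).

```python
def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize x into 2**bits levels over [lo, hi].
    This mimics the core idea of lossy activation compression:
    fewer bits -> coarser grid -> larger reconstruction error."""
    levels = (1 << bits) - 1          # number of quantization steps
    step = (hi - lo) / levels         # grid spacing
    q = round((x - lo) / step)        # nearest grid index
    return lo + q * step              # dequantized value

# Toy "activations"; watch the worst-case error drop as bits increase.
acts = [0.137, -0.52, 0.9, -0.04]
for bits in (2, 4, 8):
    err = max(abs(a - quantize(a, bits)) for a in acts)
    print(f"{bits}-bit worst-case error: {err:.4f}")
```

If the reward curve is unchanged at a high bit width, you can lower the bits step by step until the reward starts to degrade, trading memory against accuracy.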
Thank you.
Link to the MADDPG implementation: https://github.com/marlbenchmark/off-policy/tree/release/offpolicy/algorithms/maddpg