In README.md / Usage / Implementing the workers, the third rule says:

> After the model has been built and the parameters initialized, initialize the central parameters by calling the Worker's `init_shared_params()` method. Every worker should call this method.
But in example/lstm/lstm_worker (lines 517-528), `init_shared_params()` is called only if `param_sync_api` is set, and by default `param_sync_api=None`. I also didn't find the EASGD path calling `init_shared_params`.
Is the program wrong, or are there other rules that aren't stated? @tsirif
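Roughly, this is the pattern I'm referring to (a sketch from memory, not a copy of the example; `build_model`, the port number, and the EASGD alpha are placeholders, and the exact signatures in lines 517-528 may differ):

```python
# Sketch of the branch in question, written from memory of the older
# param_sync interface -- not a verbatim copy of example/lstm/lstm_worker.
from platoon.channel import Worker
from platoon.param_sync import EASGD    # assuming this is where EASGD lives

worker = Worker(control_port=5567)       # placeholder constructor argument
tparams = build_model()                  # hypothetical: dict of Theano shared variables

if param_sync_api:                       # command-line switch in the example
    # Only this branch follows rule 3 of the README.
    worker.init_shared_params(tparams.values(), param_sync_rule=EASGD(0.5))
else:
    # all_reduce interface: init_shared_params() is never called,
    # which is what my question is about.
    pass
```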
I chatted briefly with Christos (@tsirif) about Platoon a couple of days ago, and I hope my understanding helps.
In the all_reduce API, the update algorithm lives in global_dynamics. The updates are done by all_reduce itself, so it doesn't need `init_shared_params()` the way the param_sync API does (have a look at the synchronous lstm example to see how it works; a rough sketch follows).
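Here is roughly how I understand the all_reduce path from the synchronous lstm example. The names from `platoon.training.global_dynamics` (`AverageSGD`, `make_rule`) are written from memory and may not match the code exactly; `build_model`, `iterate_minibatches`, and `local_train_step` are placeholders.

```python
# Sketch of the all_reduce path (names from memory, may not match exactly).
from platoon.channel import Worker
from platoon.training import global_dynamics

worker = Worker(control_port=5567)           # placeholder constructor argument
tparams = build_model()                      # hypothetical: dict of Theano shared variables

# The update algorithm is a global-dynamics rule; no init_shared_params() here.
sync_rule = global_dynamics.AverageSGD(worker)
sync_rule.make_rule(list(tparams.values()))

for minibatch in iterate_minibatches():      # hypothetical training loop
    local_train_step(minibatch)              # hypothetical local SGD update
    sync_rule()                              # synchronizes parameters across workers via all_reduce
```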
In my experience, my suggestion is to use the param_sync API to build a single-node, multi-GPU training framework, unless you want to use the all_reduce API in a multi-node scenario.
There might be some bugs remaining in the multi-node scenario. If you really want to use it, you can try this branch and see if it works for you.