Autodist on Ray using RaySGD API #61
base: master
Conversation
@@ -77,15 +77,15 @@ def l(predicted_y, desired_y):
     # Only save the model on master node if autodist is used with NFS.
     checkpoint_suffix = 'c10'
     checkpoint_name = checkpoint_dir + checkpoint_suffix
-    if IS_AUTODIST_CHIEF:
+    if IS_AUTODIST_CHIEF():
Could you add a test case (e.g. case c11) that uses the above linear regression code plus the Ray backend, so the CI can test against it whenever a new case is added? You might want to add it to both the single-node multi-GPU tests and the distributed tests.
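A minimal sketch of what such a CI case might look like; the file path, the `c11` label, and the assumption that the CI job can launch a local Ray head node are all illustrative, not part of this PR:

```python
# Hypothetical integration test for the Ray backend (adapt to the repo's
# existing test layout and case numbering).
import os
import pytest


@pytest.mark.integration
def test_c11_linear_regression_ray():
    """Run the linear-regression example on the Ray backend and expect a clean exit."""
    # Assumes Ray is installed and a cluster (or `ray start --head`) is available in CI.
    exit_code = os.system("python linear_regression_ray.py")
    assert exit_code == 0
```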
def spawn_replica(replica_host, strategy_builder, strategy=None, env=None):
    # Enforce actor placement on the provided host
    runner = ray.remote(resources={f"node:{replica_host}": 0.01},
I believe this requires a custom resource specification when you do `ray up` to start the Ray cluster?
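For context, a hedged sketch of how such a resource could be supplied; the IP addresses and the choice of `ray start` over a `ray up` cluster YAML are illustrative only:

```python
# One way to make the f"node:{replica_host}" resource available, assuming the
# cluster is started manually rather than through an autoscaler YAML.
# On each node, declare the custom resource when starting Ray, e.g.:
#   ray start --head --resources='{"node:10.0.0.1": 1}'              # on the head
#   ray start --address=<head_ip>:6379 --resources='{"node:10.0.0.2": 1}'  # on a worker
# For a single-process test run, the same can be done programmatically:
import ray

ray.init(resources={"node:127.0.0.1": 1})
# (Recent Ray releases also auto-register a "node:<ip>" resource for each node,
#  which may make the explicit declaration above unnecessary.)
```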
This PR adds a RaySGD-style API to Autodist, enabling it to train models on a Ray cluster. The API defines a `TFTrainer` class which takes a model creator, a data creator, a train step, and a strategy builder, and runs the training job on a distributed Ray cluster. The API follows the RaySGD API and is compatible with Ray Tune.

Internally it implements a `TFRunner` class which represents a replica. All communication between the master and worker replicas happens through Ray's in-memory object store, so there is no dependence on remote file system locations or access rights, and SSH is not needed. Moreover, the client code executed by each worker is also replicated through Ray, eliminating the need to copy the model code to a remote filesystem on each node. Users can run the example by installing Ray and running:

$ python linear_regression_ray.py

Reference: https://docs.ray.io/en/master/raysgd/raysgd_tensorflow.html
Fixes #57
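A minimal usage sketch of the `TFTrainer` described above. The constructor arguments and the import paths for `TFTrainer` are inferred from the PR description and the linked RaySGD docs, not confirmed against this branch:

```python
# Hypothetical driver script; argument names (model_creator, data_creator,
# train_step, strategy_builder, num_replicas) are assumptions.
import ray
import tensorflow as tf
from autodist.strategy import PS            # assumed strategy-builder import
# from autodist.ray import TFTrainer        # assumed import path for this PR

optimizer = tf.keras.optimizers.SGD(0.1)

def model_creator(config):
    return tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])

def data_creator(config):
    xs = tf.random.normal((1024, 1))
    return tf.data.Dataset.from_tensor_slices((xs, 2 * xs + 1)).batch(32)

def train_step(model, batch):
    x, y = batch
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

ray.init(address="auto")
trainer = TFTrainer(model_creator, data_creator, train_step,
                    strategy_builder=PS(), num_replicas=2)
for _ in range(10):
    trainer.train()
```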