Remove legacy shuffle, add docs for distributed testing #19
Conversation
```diff
@@ -77,7 +77,7 @@ def _get_worker_inputs(
     plan_bytes = datafusion_ray.serialize_execution_plan(stage.get_execution_plan())
     futures = []
     opt = {}
-    opt["resources"] = {"worker": 1e-3}
+    # opt["resources"] = {"worker": 1e-3}
```
I had to remove this; otherwise, the Ray cluster could not find a suitable worker node.
@franklsf95 do we still need this?
This only works when you start a Ray cluster with custom resources, e.g. the head node with `ray start --head --resources='{"head":1}'` and worker nodes each with `ray start --resources='{"worker":1}'`. I had this resource requirement to make sure the tasks run on worker nodes exclusively (for fair benchmarking). If we don't do this, the task could also run on the driver node. Depending on the use case, this may be harmless.
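The scheduling trick described above can be sketched without a live cluster. This is a hypothetical helper (the name `worker_task_options` is mine, not from the PR) showing how the fractional custom-resource request combines with the shuffle option seen in the diff:

```python
def worker_task_options(use_ray_shuffle: bool, output_partitions_count: int) -> dict:
    """Build an options dict for a Ray task (hypothetical helper, for illustration).

    Requesting a tiny fraction (1e-3) of the custom "worker" resource pins the
    task to nodes started with `ray start --resources='{"worker": 1}'`, while
    consuming almost none of that resource, so many tasks can share one node.
    """
    opt = {}
    if use_ray_shuffle:
        # Each shuffle stage returns one Ray object per output partition.
        opt["num_returns"] = output_partitions_count
    # Only schedule on nodes that advertise the custom "worker" resource.
    opt["resources"] = {"worker": 1e-3}
    return opt
```

In Ray, such a dict would be unpacked into `task.options(**opt)`; nodes started without the custom resource (e.g. the driver/head node) are then ineligible to run the task.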
Looks good, I have a few minor comments.
```diff
@@ -77,7 +77,7 @@ def _get_worker_inputs(
     plan_bytes = datafusion_ray.serialize_execution_plan(stage.get_execution_plan())
     futures = []
     opt = {}
-    opt["resources"] = {"worker": 1e-3}
+    # opt["resources"] = {"worker": 1e-3}
```
Maybe a comment here explaining why this is disabled would be useful?
```diff
-    opt["resources"] = {"worker": 1e-3}
     if use_ray_shuffle:
         opt["num_returns"] = output_partitions_count
+    # opt["resources"] = {"worker": 1e-3}
```
Same as above.
## Start Ray Worker Node(s)

```shell
ray start --address=10.0.0.23:6379 --redis-password='5241590000000000'
```
Did you need to start Redis?
No, Redis is bundled with Ray, and this is the password that Ray uses by default.
This PR removes the legacy shuffle reader/writer, which only worked on a single node.
It also adds brief documentation on distributed testing.