Introduce annotate_custom_sharding binding #9203
base: master
Conversation
Force-pushed: 32344f3 → a16b53e → 93e8632 → da5885d
Hey @tengyifei! Let me know if you're able to review this, or feel free to add anyone else. We have this as a requirement to unblock our use case above, and we might consider cherry-picking it to 2.7.1, since it is a blocker.
@bhavya01 @zhanyong-wan Do you have cycles on this? We need this to unblock some of our use cases.
@bhavya01 @ysiraichi Hey folks, stale PR - let me know if you can review.
I will take a look at this later today!
I am not really familiar with sharding, but I do have one question: wouldn't it be better to allow `mark_sharding` to be called multiple times, instead of creating a new API? I'm asking this question because, looking at the test, it's almost like `mark_sharding` and `annotate_custom_sharding` did the same thing (i.e. would a user know when to use which?).
XLA_CHECK(UseVirtualDevice())
    << "Please enable SPMD via `torch_xla.runtime.use_spmd()`";
XLA_CHECK(sharding.type() != xla::OpSharding::UNKNOWN)
    << "Can't explicitly annotate with UNKNOWN sharding type.";
Don't we need more checks here? What happens if we call `annotate_custom_sharding` before `mark_sharding`? Is this supposed to work?
Yes, it should work, but the intended behavior here could be ambiguous. At that point, `mark_sharding` would similarly just be adding a custom sharding op, since the provided tensor has an IR custom sharding value/node, not a device data node. If we want the `mark_sharding` following it to identify the parent Device Data node IR and invoke a sharded data transfer, that is something we can discuss/evaluate.
The problem is that, in most cases, `mark_sharding` is intended to be used as expected for any Device Data node, since it performs an async runtime buffer allocation over the sharded segments only. In that case, each device has its own sharded data, and that is reflected in the HLO inputs. But what if users want to explicitly place an XLA IR custom sharding over a device data node? That includes passing a replicated tensor as an input to the graph, or simply adding annotations that rely on the XLA compiler to insert the optimal CC ops to accommodate the hint.
At the moment, this means it's not possible to provide a custom sharding annotation (on XLA) over an intentionally replicated tensor on the device. Similarly, as the motivation above mentions, it also limits our ability to provide a custom sharding annotation over a Device Data node (weights, inputs, etc.) that has already been sharded to the device, since we can't disambiguate whether the user meant to reshard the tensor or simply add a custom sharding op. This API is intended to fill these gaps for the user, and it intentionally targets more familiar users who need to provide extra sharding annotations to XLA around the limitations above on their tensors. I think it's a well-defined API that can serve different use cases. If you have a new tensor on the host, I am happy to consider relaxing the
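To make the distinction concrete, here is a minimal Python sketch of the two situations described above. It assumes the new binding is exposed through `torch_xla.distributed.spmd` with a `mark_sharding`-style `(tensor, mesh, partition_spec)` signature; the exact surface may differ from this PR.

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()  # the binding XLA_CHECKs that the SPMD virtual device is enabled

num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices,), ("data",))
device = xm.xla_device()

# Case 1: the tensor stays intentionally replicated on device, but we still
# hand XLA a sharding hint so the compiler can partition it (and insert the
# needed collectives) inside the graph.
replicated = torch.randn(16, 128, device=device)
xs.annotate_custom_sharding(replicated, mesh, ("data", None))  # assumed signature

# Case 2: the tensor is already sharded via mark_sharding (per-device sharded
# buffers); a later annotate_custom_sharding only attaches another custom
# sharding op to the IR and does not re-transfer or reshard the device data.
sharded = torch.randn(16, 128, device=device)
xs.mark_sharding(sharded, mesh, ("data", None))
xs.annotate_custom_sharding(sharded, mesh, (None, None))  # hint: treat as replicated here
```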
@ysiraichi Do you want to take another look before merging?
This PR adds a new binding API, `annotate_custom_sharding`, that allows annotating an existing tensor with a custom sharding IR node without modifying its data layout. This is useful for cases where a tensor has already been sharded with `mark_sharding` but needs additional sharding annotations for compiler optimizations.

Unlike the existing `mark_sharding` function, `annotate_custom_sharding` only adds the annotation to the XLA IR without changing the underlying data distribution, enabling more flexible sharding strategies to be provided to XLA. This is particularly useful for introducing resharding annotations on already-sharded tensors.

Use Case
There are instances where we want to provide an explicit annotation hint around a kernel with manual sharding. In this case, we are limited to introducing custom sharding hints to XLA prior to the manual resharding. For instance, if we have FSDP + TP, and we wish to gather all weights across the FSDP dimension prior to the kernel, this is not possible. This PR allows us to introduce such functionality and flexibility, by redefining the sharding spec associated with the IR prior to the manual sharding.
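As an illustration of that scenario, a hedged sketch follows; the TP degree, mesh shape, tensor sizes, and the assumed `(tensor, mesh, partition_spec)` signature for `annotate_custom_sharding` are illustrative only and not taken from this PR.

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()

num_devices = xr.global_runtime_device_count()
tp = 2  # hypothetical TP degree; num_devices is assumed divisible by it
mesh = xs.Mesh(np.arange(num_devices), (num_devices // tp, tp), ("fsdp", "tp"))

# Weights sharded across both the FSDP and TP mesh axes.
w = torch.randn(4096, 1024, device=xm.xla_device())
xs.mark_sharding(w, mesh, ("fsdp", "tp"))

# Before the manually sharded kernel, re-annotate the weight as replicated
# along the fsdp axis while keeping the tp sharding. The device data is left
# untouched; XLA sees the new sharding spec and can insert the all-gather
# over the fsdp dimension when lowering into the kernel region.
xs.annotate_custom_sharding(w, mesh, (None, "tp"))
```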