
Make the pipeline compatible with shots decoupling #7358


Merged
129 commits merged into master from shots-decoupling/pipeline on May 26, 2025

Conversation


@JerryChen97 JerryChen97 commented Apr 30, 2025

Context:
We want to:

  1. `qml.set_shots` or `partial(set_shots, shots=...)` should work (the latter is the original requirement); ideally it works for all interfaces and for any shots value or shot vector (see the sketch after this list).
  2. The previous ways of setting shots should keep working as they do now, until we decide to deprecate them someday; ideally the two mechanisms don't interact with each other at all.
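
As a rough illustration of the intended usage (the circuit, wire count, and shot values here are placeholders, not taken from this PR):

```python
from functools import partial
import pennylane as qml

dev = qml.device("default.qubit", wires=1)  # analytic device, no shots configured

# new style: override the shots through the set_shots transform
@partial(qml.set_shots, shots=500)
@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# previous style: shots configured on the device should keep working unchanged
dev_finite = qml.device("default.qubit", wires=1, shots=500)
```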

Description of the Change:

  1. The transform realm is further divided (TBD): anything applied to the QNode is a user transform; (placeholder for outer); (placeholder for inner).
  2. The logical relationship between the three layers of transform programs, where `_setup_transform_program` decides the outer and inner programs:

user_transform(
  outer_transform(
    inner_transform -> presented to `run`, which does whatever is defined there
  ) // outer_post_processing
) // user_post_processing
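
A rough sketch of how the three layers compose (the function names below are purely illustrative, not the actual workflow code):

```python
# purely illustrative pseudocode of the layering described above
user_tapes, user_post = user_program(tapes)          # user transforms run first
outer_tapes, outer_post = outer_program(user_tapes)  # chosen by `_setup_transform_program`
results = run(outer_tapes, inner_program)            # inner transforms are applied inside `run`
final_results = user_post(outer_post(results))       # post-processing unwinds in reverse order
```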

Benefits:
Further clarity and decoupling of the execution pipeline!

Possible Drawbacks:

Related GitHub Issues:
[SC-90145]

@JerryChen97 JerryChen97 added the WIP 🚧 Work-in-progress label Apr 30, 2025
@JerryChen97 JerryChen97 self-assigned this Apr 30, 2025
@JerryChen97
Contributor Author

Some technical details about what this will look like.

Right now, our execution pipeline has two transform program components: the "outer" and the "inner". The outer is run before we hit the ML boundary, while the inner is run after. The outer always contains the user transforms. If we are using a device derivative, the outer also contains device preprocessing. If we are not using device derivatives, the inner contains device preprocessing. The inner transform program also contains caching and conversion to numpy.
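
For context, the combined program attached to a QNode can be inspected with `qml.workflow.get_transform_program`; a small sketch (the exact contents of the printed program depend on the device and diff method):

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.transforms.cancel_inverses  # a user transform applied to the QNode
@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(x, wires=0)
    qml.RX(-x, wires=0)
    return qml.expval(qml.PauliZ(0))

# user transforms appear first, followed by device preprocessing and other workflow transforms
print(qml.workflow.get_transform_program(circuit))
```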

We now need to break this into three transform program stages: the user, the outer device, and the inner.

Right now we resolve the execution config and set up the transform programs based on the initial tape. We need to instead run the user transforms, then resolve the execution config, then set up the inner and outer transform programs.
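
A sketch of that ordering (the helper names below are hypothetical stand-ins for the internal pieces being rearranged, not actual functions in the codebase):

```python
# 1. apply the user transforms to the initial tapes
tapes, user_post_processing = user_program(initial_tapes)

# 2. resolve the execution config (diff method, interface, shots) from the transformed tapes
resolved_config = resolve_execution_config(config, tapes)

# 3. only then build the outer and inner device transform programs
outer_program, inner_program = setup_transform_programs(resolved_config)
```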

This change will cause some minor changes in tests, but I've double-checked, and they are all acceptable differences. This will also solve some long-standing bugs and allow us to get rid of this patch.

Once we update the workflow, we will also need to update construct_batch and get_transform_program. We will now always be applying final transforms at the end of the user transform program, and before device preprocessing.
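
For reference, `construct_batch` is the user-facing entry point that will need to reflect this ordering; a minimal usage sketch (the circuit and arguments are placeholders):

```python
import pennylane as qml

@qml.qnode(qml.device("default.qubit", wires=1))
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# returns the transformed tape batch and the post-processing function for a given level
batch, post_processing = qml.workflow.construct_batch(circuit, level="device")(0.5)
```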



@albi3ro albi3ro left a comment


Either in test_set_shots or test_qnode, can we add some tests where we use the device tracker to make sure we are choosing the best diff method appropriately?

For example, starting with a device with shots, then overriding to shots=None, and double-checking that we only have one execution when taking the gradient?

Or starting with shots=None, overriding with finite shots, and checking that we have 1 + 2*num_parameters executions when taking the derivative.
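
Roughly the kind of test being suggested, using the device tracker (this is only a sketch, not the test that was eventually added; the counts assume parameter-shift with a single trainable parameter):

```python
from functools import partial
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)  # analytic device, no shots configured

@partial(qml.set_shots, shots=100)  # override with finite shots
@qml.qnode(dev, diff_method="best")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.5, requires_grad=True)
with qml.Tracker(dev) as tracker:
    qml.grad(circuit)(x)

# with finite shots, "best" should fall back to parameter-shift:
# one forward execution plus two shifted executions per trainable parameter
assert tracker.totals["executions"] == 1 + 2 * 1
```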

@JerryChen97
Contributor Author

Either in test_set_shots or test_qnode, can we add some tests where we use the device tracker to make sure we are choosing the best diff method appropriately?

For example, starting with a device with shots, then overriding to shots=None, and double-checking that we only have one execution when taking the gradient?

Or starting with shots=None, overriding with finite shots, and checking that we have 1 + 2*num_parameters executions when taking the derivative.

Done. Added in test_set_shots.

@JerryChen97 JerryChen97 requested a review from albi3ro May 23, 2025 21:06

@albi3ro albi3ro left a comment


Thanks for your patience 👍 Looks good :)


@andrijapau andrijapau left a comment


Very satisfying, nice job! 🚀

@JerryChen97 JerryChen97 enabled auto-merge May 26, 2025 15:08
@JerryChen97 JerryChen97 added this pull request to the merge queue May 26, 2025
Merged via the queue into master with commit c7ecdea May 26, 2025
53 checks passed
@JerryChen97 JerryChen97 deleted the shots-decoupling/pipeline branch May 26, 2025 15:48
github-merge-queue bot pushed a commit that referenced this pull request Jul 4, 2025
…cheme (#7461)

**Context:**
Besides `execution` (which has already been adjusted in
#7358), we still have another core helper, `construct_batch`, that needs to be
adjusted to follow the correct logic:
*Before resolution of diff methods, apply all the user transforms*

Previously, all transforms were computed after execution-config resolution,
and we also had something unclear to us called `final_transform`. With the new
shots setup system and the new `set_shots` transform, it is no longer valid to
keep user transforms after the resolution stage. Even from a pure performance
point of view, it is not wise to do work that isn't needed when the requested
level is just `"user"`, or a slice that falls entirely within the user
transforms, because that part of the processing is unnecessary and we should
have branched out much earlier.

Specifically, when the level is either `None` or `"top"`, our expectation is
that this batch constructor applies no transforms at all and simply does
nothing. Previously it had to carry over the whole pipeline only because of
the deep coupling between `Shots` and `device`: a device could be either
finite-shot or analytic even when no transform is requested, so resolution was
always required. That dilemma is gone now that the two concepts have been
separated.

**Description of the Change:**
1. `level`: we no longer pass the value of `level` down to other helpers.
Instead, we use it as a normal immutable external value inside
`batch_constructor`. The special cases now branch out without interfering with
config resolution: the empty case (`None` and `"top"`), where we do nothing
and return, and the simple user case (`"user"`, or an int/slice that falls
entirely within the range of user transforms), where we apply the requested
transforms and return immediately (see the sketch after this list).
2. Many information indicators, mostly booleans, have been separately
abstracted and extracted out of `batch_constructor`. They used to be consumed
inside the follow-up helpers, hidden behind those methods' interfaces. Now we
pre-process all of them up front, except the diff-method-related ones.
3. For general slices, we dynamically split the transforms into user and
non-user. All user transforms are still applied before resolution, and the
rest are applied afterwards. We might need to ask ourselves whether this is
*compatible* with other packages.
4. Slightly refactored the structure of `get_transform_program` by extracting
the big chunk of level interpretation into a separate helper method.
5. Dropped the VIP treatment of `final_transform`, and accordingly adjusted
the one test suite, `test_final_transform`, that was affected by this change.
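
A small sketch of how the special levels behave after this change (the circuit is a placeholder; the exact program contents depend on the device):

```python
import pennylane as qml

@qml.transforms.cancel_inverses
@qml.qnode(qml.device("default.qubit", wires=1))
def circuit(x):
    qml.RX(x, wires=0)
    qml.RX(-x, wires=0)
    return qml.expval(qml.PauliZ(0))

# "top": no transforms are applied, and no config resolution is needed
batch, fn = qml.workflow.construct_batch(circuit, level="top")(0.5)

# "user": only the QNode-level user transforms run, returning before resolution
batch, fn = qml.workflow.construct_batch(circuit, level="user")(0.5)

# "device": the full pipeline, including device preprocessing
batch, fn = qml.workflow.construct_batch(circuit, level="device")(0.5)
```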

**Benefits:**
A much clearer pipeline that lets `construct_batch` work entirely within
itself. Less redundant execution-config processing.

**Possible Drawbacks:**
Way less OOP than before. However, given how volatile this part of the
pipeline is, being more procedural is probably a necessary evil.

Also, I didn't change `get_transform_program` itself, since it is now more
like a self-contained public API.

**Related GitHub Issues:**
[sc-91642]

---------

Co-authored-by: Christina Lee <[email protected]>
Co-authored-by: Isaac De Vlugt <[email protected]>
Co-authored-by: Josh Izaac <[email protected]>
Co-authored-by: Andrija Paurevic <[email protected]>