
Ensure all execution policies are also environments #8155

Open
miscco wants to merge 4 commits into NVIDIA:main from miscco:execution_policy_with_env

Conversation


@miscco (Contributor) commented Mar 24, 2026

We want to be able to pass additional information to execution policies.

We already settled on the sender & receiver approach, with queries and key-value pairs.

This refactors the current approach to turn `__execution_policy_base` into a proper environment. That has the advantage that we no longer need to handle the CUDA execution policy in a special way.

The only CUDA-specific code remaining is where we ensure that streams are passed as `cuda::stream_ref` and memory resources as `cuda::mr::resource_ref`.

This currently fails some tests we had in place that relied on special handling of the CUDA policy with respect to memory resources.

I am not sure whether that is something we want to change.

@miscco requested review from a team as code owners March 24, 2026 16:09
@miscco requested a review from jrhemstad March 24, 2026 16:09
@github-project-automation bot moved this to Todo in CCCL Mar 24, 2026
@cccl-authenticator-app bot moved this from Todo to In Review in CCCL Mar 24, 2026
@miscco force-pushed the execution_policy_with_env branch 8 times, most recently from 885ac00 to 4e7dbd5 on March 25, 2026 14:26
@miscco force-pushed the execution_policy_with_env branch from 8322290 to 228e42c on March 25, 2026 16:51

@miscco force-pushed the execution_policy_with_env branch from 228e42c to 8bdc9c9 on March 25, 2026 19:10
@miscco requested a review from a team as a code owner March 25, 2026 19:10
@miscco force-pushed the execution_policy_with_env branch from 8bdc9c9 to 08e209d on March 25, 2026 19:17

@github-actions

🥳 CI Workflow Results

🟩 Finished in 1h 24m: Pass: 100%/137 | Total: 2d 03h | Max: 58m 04s | Hits: 96%/279810

See results here.

```cpp
if constexpr (!is_lvalue_reference_v<_Env> && !::cuda::mr::__is_resource_ref<remove_cvref_t<_Env>>)
{ // The user passed a prvalue, which indicates we should own the resource
  return __with(prop{::cuda::mr::get_memory_resource,
                     ::cuda::mr::any_resource<::cuda::mr::device_accessible> {
```
If we template the policy anyway, could we just keep the original resource type?

```cpp
if constexpr (!is_lvalue_reference_v<_Value> && !::cuda::mr::__is_resource_ref<remove_cvref_t<_Value>>)
{ // The user passed a prvalue, which indicates we should own the resource
  return __with(prop{__tag,
                     ::cuda::mr::any_resource<::cuda::mr::device_accessible> {
```
Hmm, is it guaranteed to always be `device_accessible`? I know that in general we wanted to support type-erased wrappers with no properties, but I am not sure whether that applies here too.

