Closed
Labels
bug (Something is broken), discussion (Discussing a topic with no specific actions yet)
Description
Describe the issue: After multiple attempts, including running LocalCluster in "processes" mode with various combinations of n_workers and threads_per_worker, and exercising Dask DataFrames with map_partitions, Dask Bags, and Dask Arrays on simple snippets of code, total CPU usage never exceeds 40%: only 16 cores are active, and each stays capped below 100%.
Minimal Complete Verifiable Example:

import dask.array as da
from dask.distributed import Client, LocalCluster

# Create a local Dask cluster with one single-threaded worker per core
cluster = LocalCluster(n_workers=24, threads_per_worker=1)
client = Client(cluster)

# Build a large random Dask array
dask_array = da.random.random((1000000, 100000))

# Perform a computation on the Dask array
result = dask_array.sum()

# Compute and print the result
print(result.compute())

# Close the Dask client and cluster
client.close()
cluster.close()

Anything else we need to know?:
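As a side note (my own back-of-envelope arithmetic, not part of the original report): if the example array were fully materialized as float64, it would occupy about 800 GB, far more than laptop RAM, so the workload consists of chunked random generation and reduction rather than an in-memory sum.

```python
# Back-of-envelope size of the example array if fully materialized
# as float64 (8 bytes per element).
n_elements = 1_000_000 * 100_000
size_gb = n_elements * 8 / 1e9
print(f"{size_gb:.0f} GB")  # 800 GB
```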
Environment:
- Dask version: 2025.4.1
- Python version: 3.13.3
- Operating System: Windows 11 24H2
- Install method (conda, pip, source): pip
- CPU: 13th Gen Intel(R) Core(TM) i9-13950HX, 2200 MHz, 24 cores, 32 threads
- Workstation: Laptop
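For anyone reproducing this, a generic stdlib sketch (my addition, not part of the original report) to collect comparable environment details:

```python
import os
import platform
import sys

# Print the details typically requested in Dask issue reports
print("Python version:", sys.version.split()[0])
print("Operating system:", platform.system(), platform.release())
print("Logical CPUs:", os.cpu_count())
```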

