Description
Currently Dask supports adaptive scaling, which lets it scale the number of workers up and down based on load. The logic for how many workers to ask for takes several things into account, like how much work we have queued, how much memory we need, and so on.
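For context, adaptive scaling is typically turned on something like this (a minimal sketch assuming a `LocalCluster`; other cluster managers expose the same `.adapt()` method):

```python
# Minimal sketch of enabling adaptive scaling on a local cluster.
# The minimum/maximum arguments bound how far the cluster may scale.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=0)
cluster.adapt(minimum=0, maximum=10)  # scale between 0 and 10 workers on demand
client = Client(cluster)
```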
I recently observed a cluster scale up beyond the number of available tasks. It shouldn't do this. I had three tasks, each of which took several minutes, and I saw my cluster scale up to ten workers. Probably there should be a maximum applied in there somewhere, capped by the number of tasks, that wasn't taking effect. There appears to already be some code to do this (the intent is there) but perhaps there is a bug. The relevant code is here:
`distributed/scheduler.py`, lines 5337 to 5390 at commit `4115f55` (in the `dask/distributed` repository)
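For illustration, the behavior I'd expect is something like the following sketch (a hypothetical helper, not the actual scheduler code): whatever load-driven estimate the scheduler computes should be clamped by the number of runnable tasks.

```python
from typing import Optional

# Hypothetical sketch of the cap I'd expect adaptive scaling to apply:
# never request more workers than there are tasks to run.

def target_workers(n_tasks: int, load_target: int, maximum: Optional[int] = None) -> int:
    """Number of workers adaptive scaling should request."""
    target = min(load_target, n_tasks)  # cap at the number of runnable tasks
    if maximum is not None:
        target = min(target, maximum)  # respect any user-supplied maximum
    return max(target, 0)

# With three long-running tasks, a load-based estimate of ten workers
# should still be capped at three.
assert target_workers(n_tasks=3, load_target=10) == 3
```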