Since dask-ctl in particular seems more mature, it would be good to see whether we can rewrite this package to use dask-ctl underneath, and to upstream any missing features.
This package would then become a collection of HPC configuration files, plus perhaps a thin wrapper that translates cluster names to spec-file paths and feeds them to dask_ctl.lifecycle.create_cluster.
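The proposed thin wrapper could be little more than a name-to-path lookup. A minimal sketch, assuming a hypothetical `configs/` directory of shipped spec files (the directory name and helper names here are illustrative, not an existing API):

```python
from pathlib import Path

# Hypothetical directory of shipped HPC spec files, e.g. configs/slurm.yaml.
CONFIG_DIR = Path("configs")


def config_path(name: str) -> Path:
    """Translate a short cluster name into the path of its spec file."""
    return CONFIG_DIR / f"{name}.yaml"


def create_named_cluster(name: str):
    """Resolve a cluster name and hand the spec file to dask-ctl."""
    # Deferred import so the name-to-path logic is usable without dask-ctl.
    from dask_ctl.lifecycle import create_cluster

    return create_cluster(str(config_path(name)))
```

Everything cluster-lifecycle related would then live in dask-ctl; this package would only own the spec files and the lookup.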
Looking at dask/dask-jobqueue#544, it seems we took the easy way out: we print the scheduler address to stdout or into a file, and we don't really care whether the scheduler is on the current machine (because in the most common workflow we are usually on a compute node anyway).
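Recovering the scheduler from that printed address is then just a matter of parsing the stdout or file contents and handing the address to a `distributed.Client`. A small sketch of the parsing side (the "last non-empty line" convention is an assumption about what the job script emits, not something dask-jobqueue specifies):

```python
def parse_scheduler_address(text: str) -> str:
    """Extract a scheduler address such as 'tcp://10.0.0.1:8786' from
    captured stdout or a file's contents.

    Assumes the address is the last non-empty line, which tolerates any
    startup chatter printed before it.
    """
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        raise ValueError("no scheduler address found")
    return lines[-1]
```

One would then connect with `distributed.Client(parse_scheduler_address(captured))`; this works from any node that can reach the scheduler, which is why the "print the address" approach is good enough on a compute node.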
cc @jacobtomlinson for reference, but I will open a new issue on dask-ctl to see what the best way forward would be.
Sounds great, let me know how I can help! For reference, dask-jobqueue currently does not fully support dask-ctl for the reason you mention: the scheduler is a subprocess of the Python process that created the cluster. But for cluster creation from a spec file you should be all good!
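For the spec-file route, a dask-ctl cluster spec is a small YAML file naming the cluster class to instantiate. A hedged sketch of what one of the shipped configuration files might look like (field values here are illustrative; check the dask-ctl documentation for the exact schema):

```yaml
# Illustrative dask-ctl cluster spec; values are placeholders.
version: 1
module: "dask.distributed"
class: "LocalCluster"
args: []
kwargs:
  n_workers: 2
```

A collection of such files, one per HPC system, is essentially what this package would reduce to.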
There are at least a few related packages with significant overlap.