Replies: 1 comment
-
This page has guides for the various ways the open source version of Dagster can be deployed. It may also be helpful to look at the Hybrid deployment architecture from Dagster+ (the paid service).
-
Hi all,
First, a bit of background. I'm currently using Prefect 2, where one machine runs their server (aka Orion), and multiple machines each run one or more systemd services, each running a Prefect agent installed in a conda environment. Each agent is configured to join a queue named after its conda environment. Several of my teammates and I use this setup, without any code overlap. I have a repository of code where each scheduled Prefect deployment (i.e. workflow) is set to use one of the queues/conda environments when executed. To deploy code, I simply run
python deployment/name.py
and Prefect copies the checkout to MinIO, where the agents pick up the code when execution time comes. So far this setup works quite well, but I realized that I'm really dealing with assets, not workflows. One Prefect deployment produces a dataset that I store in HDFS and reuse in another deployment, but there is no simple way to trigger the downstream run. So I rely on scheduling dependent deployments with a sufficient buffer time, which is tedious and does not scale nicely.
I've heard nice things about Dagster and would like to migrate to it, but after reading the documentation I'm still not sure what I need to do to replicate the setup above. Searching didn't yield much either (probably I'm not using the right terms, since I'm coming from a different product). Does anyone run a similar setup, and if so, could you give me some pointers?
I found a discussion about workspaces and K8s, but I really don't want to set up a k8s cluster. Are there any other solutions?
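For what it's worth, Dagster's workspace file can point at plain gRPC code servers, so each machine (or conda environment) can run its own `dagster api grpc` process and the webserver just lists them, with no k8s involved. A rough sketch of such a `workspace.yaml`, under that assumption (hostnames and location names here are made up):

```yaml
load_from:
  # One entry per machine/conda environment running a code server,
  # e.g.: dagster api grpc --python-file defs.py --port 4266
  - grpc_server:
      host: worker-a.internal   # hypothetical hostname
      port: 4266
      location_name: team_a_pipelines
  - grpc_server:
      host: worker-b.internal   # hypothetical hostname
      port: 4266
      location_name: team_b_pipelines
```

This roughly mirrors the one-agent-per-conda-environment layout: each code server loads its own environment's definitions, and teammates' code stays isolated per location.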