Describe the bug
When running a Python model with the `workflow_job` submission method, the defined post_hook does not run.
Steps To Reproduce
Run a Python model using the `workflow_job` submission method with a post_hook defined.
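For reference, a minimal sketch of the setup (the model body, table columns, and hook text are hypothetical stand-ins; the `submission_method` and `post_hook` configs follow dbt-databricks conventions):

```python
# models/my_python_model.py -- minimal dbt Python model sketch (names are hypothetical)

def model(dbt, session):
    dbt.config(
        materialized="table",
        submission_method="workflow_job",   # run the model as a Databricks workflow job
        post_hook="OPTIMIZE {{ this }}",    # expected to run after the table is written
    )
    # Trivial model body: return a DataFrame to be written to the target table.
    return session.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
```

The bug is that after the workflow job finishes writing the table, the `OPTIMIZE` hook either runs much later on the SQL warehouse or not at all.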
Expected behavior
After the model runs and writes the data to the target table, I expected the post_hook (in this case an `OPTIMIZE` command) to run.
Screenshots and log output
You can see in the table history that no `OPTIMIZE` command (the post_hook command) was ever run.
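One way to check this is to fetch the table history (e.g. `DESCRIBE HISTORY <table>` on the warehouse) and filter it for `OPTIMIZE` entries. A self-contained sketch of the filtering step, using made-up history rows in place of a live query:

```python
# Filter Delta table history entries for OPTIMIZE operations.
# The sample rows below are hypothetical stand-ins for the output of
# `DESCRIBE HISTORY <table>`; a real check would fetch them from the warehouse.

def find_optimize_runs(history_rows):
    """Return the history entries whose operation is OPTIMIZE."""
    return [row for row in history_rows if row.get("operation") == "OPTIMIZE"]

sample_history = [
    {"version": 2, "operation": "WRITE"},
    {"version": 1, "operation": "CREATE TABLE AS SELECT"},
]

# No OPTIMIZE entry in the history, matching what this report observes.
print(find_optimize_runs(sample_history))
```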
System information
- The output of `dbt --version`:
- The operating system you're using: Linux
- The output of `python --version`: 3.11.2
Additional context
This is what I observe when running the post_hook (in this case `OPTIMIZE`) in a Python model within the workflow_job.
To create the workflow, you run your model using `dbt run --select ...`; in my case, this is executed with a SQL warehouse. This triggers the creation of a workflow, which then starts a job run. The job cluster is created and begins executing the model logic, essentially running the Python code and writing the results to the target table. Once the job run is complete, it finishes with a "success" status.
However, I notice that the dbt run process has not yet finished because there are still ongoing operations in the SQL warehouse. In fact, I sometimes see the post_hook running within the SQL warehouse, suggesting that the post_hook is not actually part of the workflow job but is instead executed separately on the SQL warehouse. Additionally, the post_hook rarely runs.
So it looks like the behavior is inconsistent, and you can't rely on the post_hook actually running, given that the vast majority of the time, it doesn't.