Reusing engines from different threads #1110
My observation is that passing an engine connected to an in-memory duckdb database to a different thread doesn't work. I'm wondering if that's expected or if it would be considered a bug / missing feature?

Example:

Running the `run_query` function works as expected:

```python
>>> def run_query(conn: sa.Connection):
...     return conn.execute(sa.text("select * from Users")).fetchall()
>>> with engine.connect() as conn:
...     res = run_query(conn)
>>> res
[(1, 'spongebob'), (2, 'sandy'), (3, 'patrick')]
```

...but if I run it in a background thread, passing the engine:

```python
>>> await anyio.to_thread.run_sync(run_query, engine)
```

...I get a `Catalog Error: Table with name Users does not exist!` exception 😔 My assumption is that the engine loses its connection to the in-memory database in the main thread and creates a new in-memory database where that table doesn't exist?

If I change the code to create the connection in the main thread and pass the connection (rather than the engine) to the worker thread, it works:

```python
>>> with engine.connect() as conn:
...     res = await anyio.to_thread.run_sync(run_query, conn)
>>> res
[(1, 'spongebob'), (2, 'sandy'), (3, 'patrick')]
```

It would be great if it were possible to pass an engine to a separate thread to use, so you could use the same code irrespective of whether you were connected to a Postgres database in production or an in-memory duckdb database.

Calling back into the main thread from the worker thread seems to work, but then it only works from the worker-thread context, so, not ideal:

```python
def run_query(engine: sa.Engine):
    with anyio.from_thread.run_sync(engine.connect) as conn:
        return conn.execute(sa.text("select * from Users")).fetchall()
```
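The suspected mechanism described above, where each new connection to an in-memory database gets its own private, empty database, can be illustrated with only the standard-library `sqlite3` module. This is a minimal sketch using sqlite as a stand-in for duckdb (the assumption being that the engine's pool opens a fresh connection in the worker thread):

```python
import sqlite3

# Each call to sqlite3.connect(":memory:") opens its own private,
# empty database. This mirrors what is suspected to happen when the
# engine's pool opens a fresh in-memory connection in a worker thread.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE Users (id INTEGER, name TEXT)")
a.execute("INSERT INTO Users VALUES (1, 'spongebob')")

b = sqlite3.connect(":memory:")  # an independent database: no Users table
try:
    b.execute("SELECT * FROM Users")
except sqlite3.OperationalError as exc:
    print(exc)  # no such table: Users
```

The second connection raising "no such table" is the sqlite equivalent of the duckdb `Catalog Error` seen in the background thread.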
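The working pattern, creating the connection once in the main thread and handing that single connection to the worker, can also be sketched with stdlib modules only. Note that `check_same_thread=False` is a sqlite-specific flag needed to permit cross-thread use of one connection; it is not part of the duckdb or SQLAlchemy API, and a plain thread stands in for `anyio.to_thread.run_sync`:

```python
import sqlite3
import threading

# One connection, created in the main thread, shared with the worker.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE Users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO Users VALUES (1, 'spongebob')")

result = []

def run_query(c):
    # Runs in the worker thread, but reuses the main thread's connection,
    # so it sees the same in-memory database.
    result.append(c.execute("SELECT * FROM Users").fetchall())

t = threading.Thread(target=run_query, args=(conn,))
t.start()
t.join()
print(result[0])  # [(1, 'spongebob')]
```

Because the worker never opens its own connection, it never gets a fresh empty in-memory database, which is why passing the connection (rather than the engine) across threads works in the examples above.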