SQL: loses connection after timeout #17178
Here's some code which shows this with just a timeout and no server needed. It seems like it's 60s+, not 30s, that causes the issue:

```js
import { sql } from 'bun'

let timeout = 65
let count = 0
let query = async () => {
  console.log(`Trying ${++count}...`)
  await sql`SELECT 'hello world'`
  console.log('worked.')
}

await query()
console.log(`Trying again in ${timeout}s`)
setTimeout(query, timeout * 1_000)
```

Logs this:

...

...and never gets further.
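When debugging a hang like this, it helps to make the stuck query fail fast instead of blocking forever. This is only a generic diagnostic sketch (`withDeadline` is a hypothetical helper, not part of Bun's API):

```javascript
// Hypothetical diagnostic helper: race a promise against a deadline so a hung
// query rejects with a recognizable error instead of blocking forever.
function withDeadline(promise, ms, label = "operation") {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}
```

For example, ``await withDeadline(sql`SELECT 1`, 5_000, "SELECT 1")`` would reject after 5 s rather than hanging silently.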
@jakeg can you try canary, as I believe this has been fixed: `bun upgrade --canary`
Just upgraded (…)
The error:

```
[Bun.serve]: request timed out after 10 seconds. Pass `idleTimeout` to configure.
```

In this case the SQL connection should reconnect if an idle timeout occurs:

```js
const sql = new SQL({
  host: "localhost",
  port: 5432,
  user: "bun",
  password: "bunbunbun",
  database: "bun",
  idleTimeout: 120,
});

const server = Bun.serve({
  idleTimeout: 120, // in seconds, 0 to disable but be careful
  async fetch(req) {
    if (req.url.endsWith("/timeout")) {
      await sql`SELECT pg_sleep(30)`; // will take 30 seconds to complete
    } else {
      await sql`SELECT 1`;
    }
    return new Response("hi");
  },
});

await fetch(server.url + "/timeout")
  .then(console.info)
  .catch(console.error);
await fetch(server.url).then(console.info).catch(console.error);
```

Or you can also use:

```js
const server = Bun.serve({
  async fetch(req) {
    if (req.url.endsWith("/timeout")) {
      server.timeout(req, 120); // long query expected
      await sql`SELECT pg_sleep(30)`; // will take 30 seconds to complete
    } else {
      await sql`SELECT 1`;
    }
    return new Response("hi");
  },
});
```

Using the code I showed, but without setting `idleTimeout`, we get the output:

```
error: The socket connection was closed unexpectedly. For more information, pass `verbose: true` in the second argument to fetch()
 path: "http://localhost:3000//timeout",
 errno: 0,
 code: "ConnectionClosed"
Response (2 bytes) {
  ok: true,
  url: "http://localhost:3000/",
  status: 200,
  statusText: "OK",
  headers: Headers {
    "content-type": "text/plain;charset=utf-8",
    "date": "Tue, 11 Feb 2025 19:39:31 GMT",
    "content-length": "2",
  },
  redirected: false,
  bodyUsed: false,
  Blob (2 bytes)
}
```

Setting the proper timeout, expecting long queries:

```
Response (2 bytes) {
  ok: true,
  url: "http://localhost:3000//timeout",
  status: 200,
  statusText: "OK",
  headers: Headers {
    "content-type": "text/plain;charset=utf-8",
    "date": "Tue, 11 Feb 2025 19:40:41 GMT",
    "content-length": "2",
  },
  redirected: false,
  bodyUsed: false,
  Blob (2 bytes)
}
Response (2 bytes) {
  ok: true,
  url: "http://localhost:3000/",
  status: 200,
  statusText: "OK",
  headers: Headers {
    "content-type": "text/plain;charset=utf-8",
    "date": "Tue, 11 Feb 2025 19:40:41 GMT",
    "content-length": "2",
  },
  redirected: false,
  bodyUsed: false,
  Blob (2 bytes)
}
```

In this test, even after a connection was closed, I was able to run another query just fine in the latest canary revision.

Update:

```js
const sql = new SQL({
  host: "localhost",
  port: 5432,
  user: "bun",
  password: "bunbunbun",
  database: "bun",
  // let's guarantee that we only have 1 connection available for this test
  max: 1,
  // let's make the connection actually close after 30s, and wait 35s more after this
  maxLifetime: 30,
  idleTimeout: 30,
});

let timeout = 65;
let count = 0;
let query = async () => {
  console.log(`Trying ${++count}...`);
  await sql`SELECT 'hello world'`;
  console.log("worked.");
};

await query();
console.log(`Trying again in ${timeout}s`);
setTimeout(query, timeout * 1_000);
```

Output:

```
Trying 1...
worked.
Trying again in 65s
Trying 2...
worked.
```

@jakeg can you check if you can still reproduce this bug?
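Until the reconnect behavior is fully nailed down, one application-side mitigation is to retry a query once when the driver reports a closed connection. This is a minimal sketch, not part of Bun's API: `withRetry` and the `isConnectionClosed` predicate are hypothetical names, and the `"ConnectionClosed"` code is taken from the error output above:

```javascript
// Hypothetical retry helper: re-run an async query when the connection was
// closed underneath us, giving the pool a moment to re-establish it.
async function withRetry(runQuery, { retries = 1, isConnectionClosed } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await runQuery();
    } catch (err) {
      // Only retry errors the caller classifies as connection-closed
      // (e.g. err.code === "ConnectionClosed" in the logs above).
      const retryable = isConnectionClosed ? isConnectionClosed(err) : true;
      if (!retryable || attempt >= retries) throw err;
      // Small linear backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 100 * (attempt + 1)));
    }
  }
}
```

Usage would look like ``const rows = await withRetry(() => sql`SELECT 1`);`` — a workaround, not a substitute for the driver reconnecting on its own.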
@cirospaciari sorry, my original bug report should never have included the […]. I'm using Supabase PostgreSQL here with their default settings. I'm still getting the same bug (trying on canary …):

```js
import { sql } from 'bun'

let timeout = 65
let count = 0
let query = async () => {
  console.log(`Trying ${++count}...`)
  await sql`SELECT 'hello world'`
  console.log('worked.')
}

await query()
console.log(`Trying again in ${timeout}s`)
setTimeout(query, timeout * 1_000)
```

And the output:

...

...at which point it just hangs, seemingly forever. I don't have a local PostgreSQL database I can try against, just the Supabase one. Maybe it's based on some sort of Supabase default? But again, I would expect Bun to be reconnecting or something, not just hanging.

I just tried changing to this instead:

```js
import { SQL } from 'bun'

let sql = new SQL({
  url: process.env.POSTGRES_URL,
  maxLifetime: 0
})

// ...
```

...and it's still hanging. This, however, works without hanging:

```js
let sql = new SQL({
  url: process.env.POSTGRES_URL,
  idleTimeout: 30
})
```

Is this expected behaviour? I'm guessing not.
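Besides lowering `idleTimeout` below the server's cutoff, another workaround is to keep the pooled connection warm with a periodic trivial query, so it never sits idle long enough for the server (or Supabase's pooler) to drop it. This is a hypothetical sketch: `startKeepAlive` is not a Bun API, and the 25 s default interval is an assumption based on the roughly 30 s cutoff observed in this thread:

```javascript
// Hypothetical keep-alive helper: run a trivial "ping" query on an interval
// shorter than the server's idle timeout, so the pooled connection is never
// idle long enough to be dropped. Returns a stop() function to clear the timer.
function startKeepAlive(ping, intervalMs = 25_000) {
  const timer = setInterval(() => {
    // Swallow ping errors: a failed keep-alive should not crash the process;
    // the next real query will surface any connection problem.
    ping().catch(() => {});
  }, intervalMs);
  return () => clearInterval(timer);
}
```

For example ``const stop = startKeepAlive(() => sql`SELECT 1`);``, with `stop()` called on shutdown. This trades a little idle traffic for never tripping the server-side timeout.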
That is not the expected behavior; it's really weird that you are facing this. I will check using Supabase. I cannot replicate this error with a local database, which is weird. Thank you for all the information, I will investigate further.

Some notes: by default […]. My guess is that, without setting `maxLifetime` or `idleTimeout`, the server connection with […].

@jakeg can you say if you are using the Session pooler, Transaction pooler or Direct connection from Supabase?
Can confirm that on Canary (…)
What version of Bun is running?
1.2.2+c1708ea6a
What platform is your computer?
Linux 5.15.167.4-microsoft-standard-WSL2 x86_64 x86_64
What steps can reproduce the bug?
Sample code:
Using a `.env` file with a `POSTGRES_URL` that connects to either Supabase's "Session pooler" or their "Transaction pooler".

It works fine until you don't make any requests (and thus SQL queries) for ~30+ seconds. The next time you try a request, I presume the pool of connections has been closed due to an idle timeout, but no attempt is made to reconnect. The request hangs, then this error appears:

Am I supposed to do something manually to create new connections in the pool? I assumed that would be automatic.
The only way to fix it is to restart the server, and then of course it will happen again if there are no queries for 30+ seconds.
What is the expected behavior?
No response
What do you see instead?
No response
Additional information
No response