[Bug]: assertion failed: (*header).pd_special >= SizeOfPageHeaderData as u16 #193
Comments
I also checked the page size:
Ouch, page corruption. @Eli-Airis, would you be willing to share the workload you're running that triggers this? @syvb or I will likely need to try to reproduce this ourselves to make progress. (I can share contact info if you're not comfortable posting it somewhere publicly accessible.)
I'll happily share, pending approval on Sunday morning.
Hi @tjgreen42 and @smoya, I have approval to share the workload. Could you please provide the email address I should grant access to? The workload isn't sensitive, but we would still prefer that it not be shared outside of Timescale.
Looks like the regression was introduced between versions 2.17.1 and 2.17.2: I could NOT reproduce the assertion error with the docker image tagged 2.17.1, but the issue does reproduce with version 2.17.2 on both pg16 and pg17.
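A quick sanity check when bisecting images like this is to confirm which extension builds each container actually ships; pgvectorscale registers itself as vectorscale, so a catalog query (a minimal sketch, extension names assumed) does the job:

```sql
-- Show the bundled vs. installed versions of the relevant extensions.
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name IN ('vector', 'vectorscale', 'timescaledb');
```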
Hi @Eli-Airis, my work email address is [email protected]. Thanks much for being willing to share the workload.
Great! I shared the payload with you.
What happened?
I am using the official docker image unchanged:
and I'm frequently getting the following assertion error when (multi-)inserting into a table with a diskann index:
assertion failed: (*header).pd_special >= SizeOfPageHeaderData as u16
My loaded extensions:
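The list itself wasn't captured above; for reference, it can be regenerated with a catalog query such as:

```sql
-- List installed extensions and their versions in the current database.
SELECT extname, extversion
FROM pg_extension
ORDER BY extname;
```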
I did not set any option explicitly when creating the index:
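The original DDL isn't preserved above; a minimal sketch of the setup described (table and column names are placeholders, and the diskann index is created without any explicit options, as stated) would look like this:

```sql
-- Hypothetical schema: a 513-dimensional embedding column (not a power of two).
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;  -- pulls in pgvector as well

CREATE TABLE items (
    id        bigserial PRIMARY KEY,
    embedding vector(513)
);

-- diskann index with every option left at its default.
CREATE INDEX items_embedding_idx
    ON items
    USING diskann (embedding vector_cosine_ops);
```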
I only changed the following parameters in my configuration based on https://pgtune.leopard.in.ua/?dbVersion=17&osType=linux&dbType=dw&cpuNum=32&totalMemory=32&totalMemoryUnit=GB&connectionNum=&hdType=ssd:
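The concrete values aren't preserved here either; for orientation, pgtune's "data warehouse" profile for a 32-core / 32 GB SSD host adjusts parameters along these lines (illustrative only, NOT the reporter's actual settings):

```sql
-- Illustrative pgtune-style settings; not the values used in this report.
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET effective_cache_size = '24GB';
ALTER SYSTEM SET maintenance_work_mem = '2GB';
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;
ALTER SYSTEM SET max_parallel_workers = 32;
```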
I have also tried running without any change to the default postgresql.conf, and still got the same assertion error.
SHOW block_size returns 8192. I would appreciate any help resolving this.
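For reference, that check is simply:

```sql
-- 8192 bytes is the stock PostgreSQL page size (BLCKSZ).
SHOW block_size;
```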
pgvectorscale extension affected
0.5.1
PostgreSQL version used
17.2
What operating system did you use?
Linux my-db-machine 6.8.0-1020-gcp #22-Ubuntu SMP Mon Dec 9 17:09:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
What installation method did you use?
Docker
What platform did you run on?
Google Cloud Platform (GCP)
Relevant log output and stack trace
How can we reproduce the bug?
The only unusual thing about my vector field is that its dimension isn't a power of two: it's 513. I run a multi-insert from SQLAlchemy in batches of 1,000 rows.
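SQLAlchemy's multi-insert ultimately issues one INSERT per batch; a rough SQL-only approximation of one such batch against the hypothetical items table sketched earlier (random values standing in for the real embeddings) is:

```sql
-- Insert one batch of 1000 rows of 513-dimensional vectors in a single statement.
INSERT INTO items (embedding)
SELECT array_agg(random())::vector(513)
FROM generate_series(1, 1000) AS row_id,
     generate_series(1, 513)  AS dim
GROUP BY row_id;
```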
Are you going to work on the bugfix?
None