Dgraph version : v21.03.2
Dgraph codename : rocket-2
Dgraph SHA-256 : 00a53ef6d874e376d5a53740341be9b822ef1721a4980e6e2fcb60986b3abfbf
Commit SHA-1 : b17395d33
Commit timestamp : 2021-08-26 01:11:38 -0700
Branch : HEAD
Go version : go1.16.2
jemalloc enabled : true
Tell us a little more about your go-environment?
No response
Have you tried reproducing the issue with the latest release?
None
What is the hardware spec (RAM, CPU, OS)?
N/A
What steps will reproduce the bug?
dgraph bulk
Expected behavior and actual result.
No response
Additional information
Every week I use the command below to export the data from the production environment, then use dgraph bulk to import it into the test environment.
It seems the maximum allowed value for Go's uint64 type has been exhausted: decimal 18,446,744,073,709,551,615, or hex 0xFFFFFFFFFFFFFFFF. That is a very large value; actually breaching that threshold through allocation alone would require creating approximately 18.5 quintillion nodes on a backend.
So unless that is indeed the case (your RDF really contains triples pertaining to that many nodes), the likely cause is that the starting UID for nodes in your RDF is already a very large value, which leaves little room for any further incremental assignment and leads to the error seen.
Can you provide more info on how UIDs are defined in the RDF, mainly the start and end UIDs, please?
Can you also run the following command on the Zero node used for the bulk load and send us the output? curl -s localhost:6080/state | jq | grep '"max'
If the start UID in your RDF is indeed a very large value, start a new Zero and re-run the bulk loader with the --new_uids flag, which ignores the UIDs in the RDF and freshly assigns new UIDs for all nodes. Alternatively, replace the hardcoded UIDs with blank-node identifiers (check the docs) instead.
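The blank-node alternative can be sketched as a small preprocessing step over the exported N-Quads. This is a hypothetical illustration, not part of Dgraph's tooling: each distinct hardcoded UID reference like <0x2a> is replaced with a stable blank-node identifier (_:b0, _:b1, ...), so the bulk loader assigns fresh UIDs itself.

```go
package main

import (
	"fmt"
	"regexp"
)

// uidRef matches hardcoded UID references such as <0x2a> in N-Quad lines.
var uidRef = regexp.MustCompile(`<0x[0-9a-fA-F]+>`)

// toBlankNodes rewrites each distinct hardcoded UID to a stable
// blank-node identifier, preserving which triples refer to the same node.
func toBlankNodes(nquads []string) []string {
	seen := map[string]string{}
	out := make([]string, len(nquads))
	for i, line := range nquads {
		out[i] = uidRef.ReplaceAllStringFunc(line, func(uid string) string {
			if b, ok := seen[uid]; ok {
				return b
			}
			b := fmt.Sprintf("_:b%d", len(seen))
			seen[uid] = b
			return b
		})
	}
	return out
}

func main() {
	in := []string{
		`<0xfffffffffffffffe> <name> "alice" .`,
		`<0xfffffffffffffffe> <follows> <0x01> .`,
	}
	for _, line := range toBlankNodes(in) {
		fmt.Println(line)
	}
	// Prints:
	// _:b0 <name> "alice" .
	// _:b0 <follows> _:b1 .
}
```

A regexp this simple assumes UIDs never appear inside string literals; a real pipeline would parse the N-Quads properly before rewriting.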
Additional information
Previously, everything worked normally. But this week the execution of dgraph bulk failed, and the error log kept outputting the following.
Zero's limit options are the default values, i.e. "uid-lease=0; refill-interval=30s; disable-admin-http=false; "