Routine Load for Iceberg tables #49956
It would be good to implement this. Can you share your scenarios with me? Do you just want data lake analytics, where query performance isn't restricted to second-level latency?
Yes, you'd need to avoid committing excessively. I think intervals of around 1–5 minutes are reasonable (that's often used as the checkpoint interval with Flink/Iceberg). Second-level latency isn't necessary and would generate too many files with Iceberg. In general, data and metadata get compacted away, so a few new files don't really affect performance much, as long as the commit interval is reasonable.
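To make the trade-off above concrete, here is a toy Python sketch (not StarRocks code; `plan_commits` is a hypothetical helper) that groups timestamped records into per-interval commit batches, showing how a longer commit interval yields fewer Iceberg snapshots for the same input volume:

```python
def plan_commits(events, commit_interval_s):
    """Group (timestamp, record) events into commit batches so that at
    most one commit happens per interval.  Toy illustration of the
    commit-interval policy discussed above, not real ingest code."""
    batches = []        # each entry is one commit's worth of records
    current = []        # records buffered since the last commit
    window_start = None
    for ts, rec in events:
        if window_start is None:
            window_start = ts
        # Interval elapsed and we have buffered data: commit the batch.
        if ts - window_start >= commit_interval_s and current:
            batches.append(current)
            current = []
            window_start = ts
        current.append(rec)
    if current:
        batches.append(current)
    return batches
```

With a 60-second interval, records arriving at t = 0, 10, 70, 130 produce three commits; a 5-minute interval collapses them into a single commit (and a single set of new files).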
We have marked this issue as stale because it has been inactive for 6 months. If this issue is still relevant, removing the stale label or adding a comment will keep it active. Otherwise, we'll close it in 10 days to keep the issue queue tidy. Thank you for your contribution to StarRocks!
Hi, this was never implemented; this is just a tracking issue.
Feature request
Support Routine Load for loading data into Iceberg tables.
Is your feature request related to a problem? Please describe.
Use StarRocks directly to write data from Kafka to Iceberg, without having to use Kafka Connect or a separate write fleet.
Describe the solution you'd like
Reuse the Routine Load infrastructure and adapt it for Iceberg tables.
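As a rough illustration of the shape such an adapted task might take, here is a hedged Python sketch: poll the source, buffer rows, and make one Iceberg commit per interval. `poll` and `commit` are placeholder callables, not real StarRocks or Iceberg APIs, and `max_commits` exists only so the sketch can terminate:

```python
import time
from typing import Callable, List, Optional

def routine_load_loop(
    poll: Callable[[], List[str]],
    commit: Callable[[List[str]], None],
    commit_interval_s: float = 60.0,
    clock: Callable[[], float] = time.monotonic,
    max_commits: Optional[int] = None,
) -> None:
    """Hypothetical routine-load-style task with an Iceberg sink:
    continuously poll the source, buffer rows, and commit one snapshot
    per interval instead of one per message."""
    buffer: List[str] = []
    commits = 0
    deadline = clock() + commit_interval_s
    while max_commits is None or commits < max_commits:
        buffer.extend(poll())          # e.g. a Kafka consumer poll
        if clock() >= deadline:
            if buffer:
                commit(buffer)         # one Iceberg snapshot per batch
                buffer = []
                commits += 1
            deadline = clock() + commit_interval_s
```

The injectable `clock` keeps the sketch deterministic for testing; a real implementation would tie the commit point to the load task's scheduling, much as the Flink Iceberg sink ties commits to checkpoints.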
Describe alternatives you've considered
Additional context