Kafka integration #392
Comments
/bounty $500
Thank you @varshith257!
I had a look at the producer side of Kafka (curiosity, not committing to picking up this work). There are some earlier building blocks in Kyo that I have some questions about first.

**Stream chunking**

Producing to / consuming from Kafka and streaming often go together. Currently the streaming implementation hides chunking. For example, the
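To illustrate the chunking point in isolation, here is a minimal sketch in plain Scala, with no Kyo or Kafka dependencies and purely illustrative names: operating per chunk amortizes the fixed per-effect overhead that a per-record model pays on every element.

```scala
// Plain-Scala sketch (no Kyo/Kafka): count how many "produce effects"
// each strategy would perform for the same records. A per-record model
// pays the effect overhead once per element; a per-chunk model pays it
// once per chunk.
object ChunkingSketch:
  def perRecordCalls(records: Seq[String]): Int =
    records.map(_ => 1).sum // one produce effect per record

  def perChunkCalls(records: Seq[String], chunkSize: Int): Int =
    records.grouped(chunkSize).map(_ => 1).sum // one produce effect per chunk
```

For 100 records and a chunk size of 16, the per-record model performs 100 effects while the per-chunk model performs 7.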
Note that the chunk is used to avoid the overhead of passing individual elements through the Stream, but it is hidden as an implementation detail. Producing to Kafka can be expensive for an effect system if it is modelled per record. We can achieve better performance producing to Kafka if the produce operation is expressed on a

How do folks feel about changing the Streams implementation so that chunking is exposed externally? Or not even a concern of the

**Modelling completion of publishing to Kafka**

Typically publication to Kafka is modelled as an
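The two stages of a Kafka send can be sketched in plain Scala, with `java.util.concurrent.CompletableFuture` standing in for the broker acknowledgement; `FakeProducer` and `Metadata` are hypothetical names, not kafka-clients or Kyo APIs:

```scala
import java.util.concurrent.CompletableFuture

// Illustrative stand-in for RecordMetadata.
final case class Metadata(offset: Long)

final class FakeProducer:
  private var nextOffset = 0L

  // Effect 1: the call itself enqueues the record into a local buffer and
  // returns quickly. Effect 2: the returned future completes later, when
  // the broker acknowledges the write (completed eagerly here so the
  // sketch is self-contained).
  def send(record: String): CompletableFuture[Metadata] =
    val ack  = new CompletableFuture[Metadata]()
    val meta = Metadata(nextOffset)
    nextOffset += 1
    ack.complete(meta)
    ack
```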
Now in Kyo, we can't represent the two side effects with

**Making blocking calls**

Are there any concerns in Kyo with a call to
I think I've answered my own question about blocking calls in Kyo. The blocking benchmark seems to indicate that it is acceptable in Kyo to just use
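For reference, the two options being weighed can be sketched in plain Scala: perform the blocking call directly on the calling thread, or shift it to a dedicated pool for blocking work. Here `flush()` is a stand-in for `producer.flush()`; no Kafka client is involved.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}

// A pool sized for blocking work, standing in for whatever executor the
// effect system would use for blocking regions.
given ExecutionContext = ExecutionContext.fromExecutorService(Executors.newCachedThreadPool())

// Pretend this blocks waiting for broker acks, and report which thread
// paid the blocking cost.
def flush(): String =
  Thread.sleep(10)
  Thread.currentThread().getName

def flushDirect(): String = flush()                // blocks the caller
def flushShifted(): Future[String] = Future(flush()) // blocks a pool thread instead
```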
With regard to modelling producing to Kafka as
Or at least, that is what I think is happening when I gave this a go. When I try to start using it, I get an error: I can work around this by using
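The general shape of the adaptation discussed here, turning a callback-style send into a promise-like handle, can be sketched in plain Scala, with `CompletableFuture` standing in for an effect-system promise; `callbackSend` is a hypothetical callback-style producer API, not the real kafka-clients signature:

```scala
import java.util.concurrent.CompletableFuture

// Hypothetical callback-style send: invokes the callback with either a
// failure or an offset (here, a fake immediate "broker ack").
def callbackSend(record: String)(onDone: Either[Throwable, Long] => Unit): Unit =
  onDone(Right(record.length.toLong))

// Adapt the callback into a promise-like handle: create the promise,
// register a callback that completes it, and return it to the caller.
def sendViaPromise(record: String): CompletableFuture[Long] =
  val done = new CompletableFuture[Long]()
  callbackSend(record) {
    case Right(offset) => done.complete(offset)
    case Left(error)   => done.completeExceptionally(error)
  }
  done
```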
@varshith257 I'm redistributing Kyo's bounties. Have you been able to work on this? If not, can you cancel the attempt? Thank you!
An additional feature that would be great to have is a "consume chunk" API, where users are not exposed to a stream of consumer records. Instead, the API should allow users to submit an effectful function that works on a chunk of consumer records belonging to the same Kafka partition. The effectful function would be evaluated concurrently per partition. This is inspired by FS2 Kafka's Consume Chunk API.
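The per-partition idea above can be sketched in plain Scala: group a chunk of records by partition, then run the user's effectful function once per partition, concurrently. `Record` and `consumeChunk` are illustrative names, not a real Kyo or fs2-kafka API.

```scala
import scala.concurrent.{ExecutionContext, Future}

// Simplified consumer record: just a partition and a value.
final case class Record(partition: Int, value: String)

// Run `f` once per partition present in the chunk, concurrently; the
// returned future completes when every partition has been processed.
def consumeChunk(chunk: Seq[Record])(f: Seq[Record] => Unit)(using ExecutionContext): Future[Unit] =
  val byPartition = chunk.groupBy(_.partition).values.toSeq
  Future.traverse(byPartition)(records => Future(f(records))).map(_ => ())
```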
Removing bounty to redistribute the budget |
Many microservices rely on Apache Kafka to send and receive data asynchronously in a more fault-tolerant manner. It is also notoriously difficult to build high-level Kafka consumers and producers that account for concerns like backpressure, rebalancing, and streaming.
It would be a very nice proposition to tick this box and give users one more reason to select Kyo for their next project.