This repository has been archived by the owner on Apr 9, 2022. It is now read-only.

Version 1.4.0 #47

Merged
merged 15 commits into criteo:master from criteo/syncFebruary2020 on Feb 26, 2020

Conversation

verdie-g
Contributor

No description provided.

@verdie-g verdie-g requested a review from ychuzevi February 24, 2020 13:08
@verdie-g verdie-g mentioned this pull request Feb 24, 2020
@ParthShirolawala

ParthShirolawala commented Feb 25, 2020

Can we have the release as soon as possible, please? I am facing a lot of problems because of issue #46. Is there any workaround I can use if the release is not coming anytime soon? Any suggestions would be appreciated.

@ychuzevi @verdie-g

Jules Bovet and others added 15 commits February 26, 2020 10:20
Currently, there is only one partition selection mode, which is
round robin.

There is a need to introduce a new selection strategy that uses
the keys of messages to determine the partition the messages should
go to. It would allow compacting messages with the same key and
heavily reducing the number of messages.

This new partition selection strategy guarantees that messages with
the same key will be sent to the same partition, given the same
number of available partitions. It does not use consistent hashing,
though, which means that if new nodes are introduced or old ones are
removed from the cluster, new messages with the same key as old
messages will not be sent to the same partition.

JIRA: WBSC-3909

Change-Id: Ie75b40123fd90550aecafd9c33d2b82952470469
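
A minimal sketch of the key-based selection described above, assuming a plain stable hash taken modulo the number of available partitions; the class and method names are illustrative, not the driver's actual API:

using System;

// Illustrative key-based partition selection: the same key always maps to the
// same partition as long as the number of available partitions is unchanged.
// This is NOT consistent hashing: adding or removing partitions changes the mapping.
public static class KeyPartitionSelection
{
    // FNV-1a is used only to keep the example self-contained; any stable hash works.
    private static uint Fnv1a(byte[] data)
    {
        uint hash = 2166136261;
        foreach (byte b in data)
        {
            hash ^= b;
            hash *= 16777619;
        }
        return hash;
    }

    public static int SelectPartition(byte[] key, int availablePartitions)
    {
        if (key == null) throw new ArgumentNullException(nameof(key));
        if (availablePartitions <= 0) throw new ArgumentOutOfRangeException(nameof(availablePartitions));
        return (int)(Fnv1a(key) % (uint)availablePartitions);
    }
}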
For clients of the driver (i.e. the Kafka service), it is useful
to be able to create messages for testing purposes, and to
manually set their partition, topic, etc.

Change-Id: Ife4b6e29e0ea4e732f58e9500f085ca06a390312
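
For illustration, creating such a test message might look like the sketch below; the message type and its settable properties are assumptions for the example, not the driver's confirmed public surface:

using System.Text;

// Hypothetical message type for the sketch; the driver's real type may differ.
public class TestMessage
{
    public string Topic { get; set; }
    public int Partition { get; set; }
    public byte[] Key { get; set; }
    public byte[] Value { get; set; }
}

// Creating a message for a test, with topic and partition set manually.
var msg = new TestMessage
{
    Topic = "test-topic",
    Partition = 3,
    Key = Encoding.UTF8.GetBytes("key"),
    Value = Encoding.UTF8.GetBytes("value"),
};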
We currently have an issue with Kibana where, for some messages,
the labels are not correctly parsed and are considered part
of the message. This gets in the way of reading logs easily.

I am trying to change this message to see if it fixes the issue.

Change-Id: I45a24df49a1d564a4c8a8dd0453d1fd72a5cc237
Change-Id: I7a054467983c93ee307c8b6410e15c4fd03bd964
- when SerializeOnProduce is on, SerializeKeyValue is now done
  correctly for records
- when SerializeOnProduce is off but the object is not sized,
  serialization is still done on produce (see the sketch below)

Change-Id: I9aa42448f9914c0b3eab95c806772981d965b1e4
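
A hedged sketch of the produce-time decision described above; SerializeOnProduce comes from the commit text, while the sized-object interface name is an assumption standing in for "the object is not sized":

// Hypothetical marker for values that can report their serialized size upfront.
public interface ISizedSerializable
{
    long SerializedSize { get; }
}

public static class ProduceTimeSerialization
{
    // Serialize at produce time either because the configuration asks for it,
    // or because the value cannot report its size without being serialized.
    public static bool ShouldSerializeNow(bool serializeOnProduce, object value)
    {
        return serializeOnProduce || !(value is ISizedSerializable);
    }
}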
Change-Id: Ia5e0975778586c7c0b46583a0fb24b81295e9fa8

ReadInt32 implies that there is an Int32 in the stream, but it actually
reads a varint and stores it in an Int32.

Change-Id: I375566a8173b1a74ea4b9906a14a9ccf29a53d21
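
As an illustration of the distinction, a varint reader that returns an Int32 might look like this minimal sketch (assuming zig-zag encoding for signed values, as in Kafka's record format); it is not the driver's actual implementation:

using System.IO;

public static class VarIntReader
{
    // Reads a zig-zag encoded varint from the stream and returns it as an Int32.
    public static int ReadVarInt32(Stream stream)
    {
        uint raw = 0;
        int shift = 0;
        while (true)
        {
            int b = stream.ReadByte();
            if (b == -1) throw new EndOfStreamException();
            raw |= (uint)(b & 0x7F) << shift;
            if ((b & 0x80) == 0) break;
            shift += 7;
            if (shift > 28) throw new InvalidDataException("Varint too long for an Int32");
        }
        // Zig-zag decode back to a signed value.
        return (int)(raw >> 1) ^ -(int)(raw & 1);
    }
}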
This change is to replace this pattern

var recordBatch = new RecordBatch();
recordBatch.Deserialize(stream, deserializers, endOfAllBatches);

with

var recordBatch = RecordBatch.Deserialize(stream, deserializers, endOfAllBatches);

The big diff is because I moved the static method to the top of the class.

Change-Id: I381a0145bc147495fa4145fd17e99d8bf14968b5
Change-Id: I84fd77041218e4dd9fb1acadc6dd406103f982b5
Change-Id: I43d6650ee70bbff8661b6406ab5f4086ca0381f2
From the official documentation: "As an optimization the server is allowed
to return a partial message at the end of the message set. Clients should
handle this case."

We then discard the partial message; since the read offset is not updated
for it, we will get it in full in the next FetchRequest.

Change-Id: I0dae2e93641c7a48544d5a3b02287bd10319ab58
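
A minimal sketch of that handling, under assumed names and layout (endOfAllBatches marking the end of the fetched data, a 4-byte big-endian size prefix per message); the driver's real deserialization is more involved:

using System;
using System.Collections.Generic;
using System.IO;

public static class MessageSetReader
{
    // Reads size-prefixed messages until endOfAllBatches; a trailing partial
    // message is discarded so the next FetchRequest returns it in full.
    public static List<byte[]> ReadMessages(Stream stream, long endOfAllBatches)
    {
        var messages = new List<byte[]>();
        while (stream.Position + 4 <= endOfAllBatches)
        {
            var sizeBytes = new byte[4];
            stream.Read(sizeBytes, 0, 4);
            if (BitConverter.IsLittleEndian) Array.Reverse(sizeBytes);
            int size = BitConverter.ToInt32(sizeBytes, 0);

            if (size > endOfAllBatches - stream.Position)
            {
                // Partial message at the end of the set: skip the leftover bytes
                // and do not count this message as consumed.
                stream.Position = endOfAllBatches;
                break;
            }

            var payload = new byte[size];
            stream.Read(payload, 0, size);
            messages.Add(payload);
        }
        return messages;
    }
}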
Currently, when trying to start consuming a topic, if the first fetch of
the available partitions fails (for example because the topic does not exist),
the driver will immediately retry. This can translate to hundreds of requests
per second, which is useless.

As this loop in HandleStart already blocks the ProcessMessage() loop of the
driver indefinitely, introducing a retry delay does not change the behavior.

Change-Id: Ice89ab8e4cb1a41ba25fb2bec49bd1be66001a8b
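
A hedged sketch of such a retry loop with a fixed delay; the method name, delegate shape, and delay handling are assumptions, not HandleStart's actual code:

using System;
using System.Threading.Tasks;

public static class PartitionFetchRetry
{
    // Retries the partition fetch with a pause between attempts instead of
    // spinning and hammering the brokers with hundreds of requests per second.
    public static async Task<int[]> FetchPartitionsWithRetry(
        Func<Task<int[]>> fetchPartitions,
        TimeSpan retryDelay)
    {
        while (true)
        {
            try
            {
                return await fetchPartitions();
            }
            catch (Exception)
            {
                // The topic may not exist yet: wait before the next attempt.
                await Task.Delay(retryDelay);
            }
        }
    }
}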
- Add a log when a broker is removed from the nodes list because of a
  topology change
- Remove an unknown node (when dead) from the routing table so it is not
  reused, which was not the case before (see the sketch below)

Change-Id: I32cb7fe19f9935b0abe088950a07f1590b7a2951
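
A hedged sketch of the routing-table cleanup described above, with generic assumed types in place of the driver's own node and routing-table classes:

using System.Collections.Generic;

public static class RoutingTableCleanup
{
    // Removes every partition entry that still points to a dead node, so the
    // node cannot be picked again for routing.
    public static void RemoveDeadNode<TNode>(IDictionary<int, TNode> partitionToNode, TNode deadNode)
    {
        var stale = new List<int>();
        foreach (var entry in partitionToNode)
        {
            if (EqualityComparer<TNode>.Default.Equals(entry.Value, deadNode))
            {
                stale.Add(entry.Key);
            }
        }
        foreach (int partition in stale)
        {
            partitionToNode.Remove(partition);
        }
    }
}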
So that logs like "...reaching TTL for [topic: toto / partition: -3]"
are replaced with "...reaching TTL for [topic: toto / partition: any]"

Change-Id: I95f2a8400abd66ebc82aaf0336eeb4f29f39888e
Change-Id: Ibf22b2e88b79ffae73f2cfcd08914438e96ebe7d
@ychuzevi ychuzevi merged commit db9416b into criteo:master Feb 26, 2020
@ParthShirolawala

@ychuzevi Is the NuGet package for version 1.4 available to use? Or will the latest changes only be reflected in the present version 1.3?

@ychuzevi
Contributor

Good question. It should have been published automatically thanks to our AppVeyor plugin. Not sure why the 1.4.0 NuGet package is not yet available.

@ParthShirolawala

@ychuzevi
Is there a way to manually publish the new version to NuGet?

@ychuzevi
Contributor

Hello @ParthShirolawala
Yes, the NuGet package is now available.

@verdie-g verdie-g deleted the criteo/syncFebruary2020 branch February 27, 2020 11:51