
Performance regression for persisting events #585

Open
johanandren opened this issue Aug 26, 2021 · 5 comments

@johanandren
Member

Noticed with the somewhat synthetic JournalPerfSpec from the persistence TCK: compared to 4.0.0, 5.0.0 is an order of magnitude slower for persistAsync.

I have only verified the results against a MySQL instance so far, so it could be specific to MySQL or general to all DB flavours.

@johanandren
Member Author

I have verified that the slowdown of persistAsync is the same for Postgres.

@johanandren
Member Author

I have pinpointed the regression to the schema change commit (0d2f1ee) but I have yet to figure out why.

Looking at individual batches (in the mapAsync stage of the writeQueue stream) written during the perf spec (with Postgres), I see that writes pile up into roughly the same sized batches of at most 400 rows on my machine. However, before 0d2f1ee a batch never takes more than 40 ms to write, while with the change batches pile up and take up to 500 ms.
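The pile-up described above follows from simple queueing arithmetic: while one batch write is in flight, every newly arriving event joins the next batch, so slower writes produce larger batches until the journal's batch cap is hit. A minimal sketch of that relationship (the arrival rate here is a hypothetical number, not measured from the spec):

```python
import math

def steady_batch_size(arrival_per_ms: float, write_ms: float, max_batch: int) -> int:
    """Events that accumulate while one batch write is in flight.

    Models a write queue drained one batch at a time (as with
    mapAsync(parallelism = 1)): all events arriving during the current
    write join the next batch, capped at the journal's maximum batch size.
    """
    return min(max_batch, math.ceil(arrival_per_ms * write_ms))

# Hypothetical arrival rate of 2 events/ms:
print(steady_batch_size(2.0, 40.0, 400))   # fast writes: batches of ~80 rows
print(steady_batch_size(2.0, 500.0, 400))  # slow writes: pinned at the 400-row cap
```

This is why the batch sizes look similar before and after the regression even though per-batch latency differs by more than 10x: once writes are slow enough, the observed batch size saturates at the cap rather than reflecting the true slowdown.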

@patriknw
Member

That test isn't using tags, right? So it's not even writing to the new event_tag table?

@johanandren
Member Author

I think it would also make sense to check whether writing with tags has regressed compared to 4.0, and to investigate whether we can do something about it if it turns out to be much slower.

@johanandren johanandren changed the title Performance regression for persistAsync Performance regression for persisting events Aug 30, 2021
@Roiocam
Contributor

Roiocam commented Nov 22, 2022

@johanandren Hi, I just found a performance issue, #710, and I want to extend that topic here.

I am not sure whether these performance tests run on a single machine, but benchmarks can sometimes lie to us.

When a benchmark runs on a single machine, the network latency is effectively zero, and that is where the benchmark lies: a bottleneck like #710 stays hidden behind the zero latency.

In my opinion, the performance tests should add some artificial latency; in the real world the network is not reliable.

I hope this helps with the issue.
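The point about zero latency can be made concrete with a back-of-the-envelope model: if operations are issued sequentially, each one pays the server-side work plus one network round-trip, so a localhost benchmark (RTT near 0) can completely hide a sequential bottleneck. A sketch, with purely illustrative numbers:

```python
def ops_per_second(server_ms: float, rtt_ms: float) -> float:
    """Throughput of strictly sequential database round-trips.

    Each operation costs the server-side work plus one network round-trip.
    On localhost rtt_ms is ~0, so a per-operation round-trip bottleneck
    (as suggested for #710) does not show up in the benchmark numbers.
    """
    return 1000.0 / (server_ms + rtt_ms)

print(ops_per_second(1.0, 0.0))  # localhost: 1000 ops/s
print(ops_per_second(1.0, 1.0))  # 1 ms RTT halves throughput: 500 ops/s
```

Batching amortizes the round-trip over many events, which is exactly why a benchmark with realistic latency would reward the batched path much more visibly than a localhost run does.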
