[🐛 BUG]: panic in centrifuge plugin used by buggregator #2046

Closed
devnev opened this issue Nov 13, 2024 · 3 comments
Labels: B-bug, F-need-verification

devnev commented Nov 13, 2024

No duplicates 🥲.

  • I have searched for a similar issue in our bug tracker and didn't find any solutions.

What happened?

Although this is a failure of a buggregator instance, the failure appears to be entirely within roadrunner, so I'm filing it here.

I'm running a number of services with docker compose, including a Spiral application, buggregator, jaeger, and nginx.
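The layout is roughly along these lines (an illustrative sketch only; service names, images, and ports are placeholders rather than my actual compose file):

# Sketch: not the real compose file, just the shape of the setup.
services:
  app:
    image: my-spiral-app:latest          # hypothetical Spiral application image
    depends_on: [buggregator, jaeger]
  buggregator:
    image: ghcr.io/buggregator/server:1.11.3
  jaeger:
    image: jaegertracing/all-in-one:latest
  nginx:
    image: nginx:latest
    ports: ["80:80"]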

I've been encountering startup issues with buggregator, and I've now even gotten a panic.

The panic is:

buggregator-1   | panic: runtime error: invalid memory address or nil pointer dereference
buggregator-1   | [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0xc61bb2]
buggregator-1   |
buggregator-1   | goroutine 228 [running]:
buggregator-1   | github.com/roadrunner-server/centrifuge/v4.(*rpc).Broadcast(0xc00097db30, 0xc0008d0018?, 0xc000db0f80)
buggregator-1   | 	github.com/roadrunner-server/centrifuge/[email protected]/rpc.go:81 +0x52
buggregator-1   | reflect.Value.call({0xc0000c3440?, 0xc0001adf20?, 0x13?}, {0x1f315f7, 0x4}, {0xc000014ef8, 0x3, 0x3?})
buggregator-1   | 	reflect/value.go:596 +0xce7
buggregator-1   | reflect.Value.Call({0xc0000c3440?, 0xc0001adf20?, 0xc0009c02b0?}, {0xc000a4c6f8?, 0x411845?, 0x10?})
buggregator-1   | 	reflect/value.go:380 +0xb9
buggregator-1   | net/rpc.(*service).call(0xc0000509c0, 0xc0009c0270?, 0xc000db0280?, 0xc000da81a0, 0xc000793a00, 0xc000a4c7d0?, {0x1e015e0?, 0xc0007e7180?, 0x24b7cc0?}, {0x1d96be0, ...}, ...)
buggregator-1   | 	net/rpc/server.go:382 +0x214
buggregator-1   | created by net/rpc.(*Server).ServeCodec in goroutine 227
buggregator-1   | 	net/rpc/server.go:479 +0x410
buggregator-1 exited with code 2

Before this, there were repeated logs of:

buggregator-1   | 2024-11-13T14:20:02+0000	ERROR	service.centrifuge	wait	{"error": "exit status 1"}
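To illustrate what I think is going on (a purely hypothetical Go sketch; the type and method names below are made up and this is not the actual centrifuge plugin source), a panic like this is what you get when an RPC receiver still holds a nil client because the backing process never came up:

// Hypothetical illustration; rpcService, client, and Publish are invented names,
// not the plugin's real types.
package main

type client struct {
	node string
}

func (c *client) Publish(channel string, data []byte) error {
	// Reading c.node dereferences c, so this panics when c is nil.
	_ = c.node
	return nil
}

type rpcService struct {
	c *client // left nil if the plugin never finished starting up
}

func (r *rpcService) Broadcast(channel string, data []byte) error {
	// Calling a method on a nil *client is legal in Go; the crash happens
	// inside Publish when c.node is read, mirroring the SIGSEGV above.
	return r.c.Publish(channel, data)
}

func main() {
	svc := &rpcService{} // c never assigned, as after a failed startup
	_ = svc.Broadcast("events", []byte("hello")) // panic: invalid memory address or nil pointer dereference
}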

Version (rr --version)

The buggregator docker image is

ghcr.io/buggregator/server:1.11.3

The Dockerfile at that commit has ARG ROAD_RUNNER_IMAGE=2023.3.7, so I presume the version of roadrunner is

2023.3.7
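For reference, the relevant part of that Dockerfile looks roughly like this (only the ARG line is taken from the actual file; the FROM line is my assumption about how the ARG is typically consumed):

# Sketch; only the ARG default is quoted from the report.
ARG ROAD_RUNNER_IMAGE=2023.3.7
FROM ghcr.io/roadrunner-server/roadrunner:${ROAD_RUNNER_IMAGE} AS roadrunner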

How to reproduce the issue?

I can reproduce the ERROR service.centrifuge wait {"error": "exit status 1"} messages when I simulate heavy CPU contention (as happens while docker compose starts all the services) using the --cpus flag, e.g. docker run --cpus=0.1 ghcr.io/buggregator/server:1.11.3.
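In compose terms, the same CPU cap can be expressed roughly like this (a sketch; the exact key depends on the compose version in use):

# Sketch: throttle the buggregator service to simulate startup CPU contention.
services:
  buggregator:
    image: ghcr.io/buggregator/server:1.11.3
    cpus: "0.1"   # equivalent to `docker run --cpus=0.1`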

So far I have not been able to reproduce the panic.

Relevant log output

No response

rustatian (Member) commented:

Hey @devnev 👋
Could you please update to v2024.2? v2023.3.7 is hugely outdated.

devnev (Author) commented Nov 13, 2024

@rustatian OK, I've raised an upstream PR: buggregator/server#261

rustatian (Member) commented:

Have you tried the latest version? Are there any problems with it?
