Scaling down is failing with WildFly 18 S2I #111
@ochaloup fyi, this can be reproduced and I do not have the same issue with WildFly 17 S2I.
@jmesnil I'm trying to reproduce what you observe. I didn't follow your exact setup as I use the codeready … From the log I can see that the failure happens on a socket dial on the … I will continue the investigation the next day, where I'll try to run the exact branch and minikube.
@jmesnil after some struggle I reproduced the issue, and the trouble is that the operator runs on a different network from where the pods run. The operator needs to connect directly to the pod and call the socket. This is not possible as minikube runs in a virtual machine while the operator runs on the localhost. This issue should be closed. Unfortunately, it's currently not possible to run the operator locally and process the scaledown.
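For reference, a minimal sketch of how the network split can be observed, assuming the operator dials the transaction recovery listener on its default port 4712 (the pod IP is a placeholder, and `nc` must be available in the minikube VM):

```sh
# Placeholder pod IP; look it up with:
#   kubectl get pod quickstart-1 -o jsonpath='{.status.podIP}'
POD_IP=172.17.0.5

# From inside the minikube VM the pod network is routable, so the dial succeeds:
minikube ssh -- nc -z -w 3 "$POD_IP" 4712 && echo reachable

# From the host where `make run-local-operator` runs, the same dial times out,
# because the pod network only exists inside the VM:
nc -z -w 3 "$POD_IP" 4712 || echo "unreachable from localhost"
```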
Why is this issue not happening with WildFly 17 S2I?
@jmesnil because the WFLY17 S2I does not define the …
@ochaloup This is another good reason to provide a proper management operation for the recovery scan...
I do agree and I plan to work on the issue JBEAP-17611 soon ;-) |
OK, so that means I'll comment out the scale down test for WildFly 18 S2I until it is possible for the operator to issue a recovery scan in WildFly using a management operation (targeting WildFly 19 then).
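For context, such a call could go through WildFly's HTTP management endpoint instead of the raw recovery socket. A minimal sketch, assuming the default management port 9990 and a hypothetical `process-recovery-scan` operation on the transactions subsystem (the actual operation is exactly what JBEAP-17611 should define):

```sh
# The endpoint, digest auth, and the JSON operation format are standard;
# the "process-recovery-scan" operation name is hypothetical and not yet defined.
curl --digest -u admin:password \
     -H "Content-Type: application/json" \
     -d '{"operation":"process-recovery-scan","address":[{"subsystem":"transactions"}]}' \
     http://<pod-ip>:9990/management
```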
@jmesnil I don't think it's a good idea. The e2e test should still work. What does not work is running the operator locally on localhost while the rest runs as part of minikube. If the operator and the pods are on the same network (as in a usual OpenShift/Kubernetes deployment), then everything works fine. I would really be happy if we could keep the scale down test enabled.
Steps to reproduce

1. Run the operator locally with `make run-local-operator`.
2. Scale down `replicas` to `1` (a possible command is sketched after this list).
3. The Operator will start recovery, but an error appears and the pod `quickstart-1` is not terminated:
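The scale-down step can be done, for example, by patching the custom resource (a sketch; the resource kind `wildflyserver` and the name `quickstart` are inferred from the pod name `quickstart-1` and may differ in your setup):

```sh
# Sets spec.replicas to 1 on the (assumed) WildFlyServer custom resource:
kubectl patch wildflyserver quickstart --type=merge -p '{"spec":{"replicas":1}}'
```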