kubeconfig generation CSR failure #754
Hi @danieljkemp, thanks for trying out BYOH. This seems like an RBAC issue. Did you follow the steps in the getting started guide to create the bootstrap kubeconfig [here](https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/blob/main/docs/getting_started.md#generating-the-bootstrap-kubeconfig-file) for the initial one-time use on the host? It provides a bootstrap-token kubeconfig with the required permissions to create a CSR.

I did, and I got the bootstrap config from the status field as described.
Same error on a k8s 1.25.4 bootstrap cluster. Could it be related to service accounts no longer having auto-generated token Secrets, so the kubeconfig is no longer valid? I think this changed in 1.24+.
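For context on the 1.24 change mentioned above: token Secrets are no longer auto-created for ServiceAccounts, so one has to be declared explicitly with the `kubernetes.io/service-account.name` annotation. A minimal sketch of such a manifest, written to a local file so no cluster is needed (the name `bootstrapuser` is just an example, matching the script later in this thread):

```sh
# On k8s >= 1.24, ServiceAccounts no longer get an auto-generated token
# Secret; a long-lived one must be created explicitly like this.
cat << 'EOF' > /tmp/sa-token-secret.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: bootstrapuser
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "bootstrapuser"
EOF
grep -q 'service-account.name' /tmp/sa-token-secret.yaml && echo OK
```

Applying this manifest (with `kubectl apply -f`) makes the control plane populate the Secret with a token for the named ServiceAccount, restoring the pre-1.24 behavior.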
Same error on a k8s 1.23.5 bootstrap cluster, unfortunately.
@danieljkemp Finally registered:

```sh
kubectl get byoh -A
NAMESPACE   NAME             OSNAME   OSIMAGE              ARCH
default     tanzu-master-0   linux    Ubuntu 20.04.5 LTS   amd64
```
I had to install iptables on the master and worker nodes too, and now my cluster is up and running!
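A quick preflight check before starting the agent can catch this kind of missing host dependency early. A sketch (the dependency list here is illustrative, not the agent's official requirements):

```sh
# Check that common host dependencies are on PATH before starting the
# BYOH agent. This list is illustrative; consult the BYOH docs for the
# authoritative requirements.
for bin in iptables conntrack ethtool; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: present"
  else
    echo "$bin: MISSING"
  fi
done
```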
Well, this would defeat the purpose of having a …
@anusha94

```sh
export LOGIN_USER=bootstrapuser

# Create the service account and (on k8s >= 1.24) an explicit token Secret for it
kubectl -n kube-system create serviceaccount $LOGIN_USER
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: $LOGIN_USER
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "$LOGIN_USER"
EOF

# Bind the service account to cluster-admin
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $LOGIN_USER
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: $LOGIN_USER
  namespace: kube-system
EOF

# Extract the token and the current cluster's CA/server details
kubectl -n kube-system get secret -o yaml $LOGIN_USER
export USER_TOKEN_NAME=$(kubectl -n kube-system get secret $LOGIN_USER -o=jsonpath='{.metadata.name}')
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')

# Assemble a standalone kubeconfig for the service account
cat << EOF > $LOGIN_USER-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: $LOGIN_USER
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: $LOGIN_USER
  user:
    token: ${USER_TOKEN_VALUE}
EOF

# Smoke-test the new kubeconfig
kubectl --kubeconfig $(pwd)/$LOGIN_USER-config get all --all-namespaces
```
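The heredoc-assembled Config above can be sanity-checked offline. A minimal sketch with dummy values (no cluster or kubectl needed) that mirrors the same layout:

```sh
# Assemble a kubeconfig from dummy values, mirroring the heredoc above,
# then verify the expected fields landed in the file. All values here
# are placeholders, not real cluster data.
LOGIN_USER=bootstrapuser
CLUSTER_SERVER=https://example.invalid:6443
CLUSTER_CA=$(printf 'dummy-ca' | base64)
USER_TOKEN_VALUE=dummy-token
cat << EOF > /tmp/${LOGIN_USER}-config
apiVersion: v1
kind: Config
current-context: test
contexts:
- name: test
  context:
    cluster: test
    user: ${LOGIN_USER}
clusters:
- name: test
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: ${LOGIN_USER}
  user:
    token: ${USER_TOKEN_VALUE}
EOF
grep -q "server: ${CLUSTER_SERVER}" /tmp/${LOGIN_USER}-config && echo "kubeconfig fields OK"
```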
Same issue here.

Hit the same issue with k8s 1.27.2.
What steps did you take and what happened:
When running the BYOH agent on the new node, I am getting the following error.

What did you expect to happen:
No errors, and the node visible in kubectl get byohosts.
Anything else you would like to add:

Environment:
- Kubernetes version (kubectl version --short): 1.24
- OS (/etc/os-release): Ubuntu 20.04.5 LTS (Focal Fossa)