Dashboards end up as empty files on disk #200
Note that it worked for me; config excerpt:

```yaml
values:
  - dashboards:
      default:
        aws-billing:
          # Ref: https://grafana.com/dashboards/139
          gnetId: 139
          revision: 15
          datasource: CloudWatch
        redis:
          # Ref: https://grafana.com/dashboards/969
          gnetId: 969
          revision: 3
          datasource: CloudWatch
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
          - name: 'default'
            folder: ''
            options:
              path: /var/lib/grafana/dashboards/default
```
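For context: with `gnetId`/`revision`, the dashboard JSON is fetched from grafana.com by the chart's curl init container at pod startup, so provisioning only succeeds if that download does.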
Grafana chart 6.3.0 seemed to resolve this issue for me. I deleted Grafana + the PVC and started from scratch, and everything just started working again. Dashboards got provisioned properly. 👍
I'm having this problem even with chart 6.3.0. As far as I can see, chart 6.3.0 doesn't change anything regarding dashboards or persistent volumes; it only bumps the Grafana image version. I'm using Helm 3.5.0 and Kubernetes 1.19.
Same here, the destination file is empty.
I'm having the same issue: zero-byte files and `error=EOF`. I'm using Grafana version 6.16.5.
Actually, I resolved the issue. The dashboard JSON files could not be found during the helm install, so it created zero-byte files. As soon as I rectified this, the correct files appeared in the right place.
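For context, the same silent empty-file failure can happen when a dashboard is loaded from a JSON file bundled with the chart rather than downloaded: if the referenced file is missing at install time, the rendered content is simply empty. A minimal sketch of such a file-based reference (the dashboard name and file path here are hypothetical):

```yaml
dashboards:
  default:
    custom-dashboard:
      # Resolved via helm's .Files, relative to the chart root.
      # If this file does not exist at install time, the rendered
      # dashboard content is empty, i.e. a zero-byte file on disk.
      file: dashboards/custom-dashboard.json
```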
Related (possibly duplicate) issues include #764 and #27. The dashboard curl init container silently drops download errors, so a failed fetch still leaves an empty file behind. For me, the fix was to use an explicit `url` instead of `gnetId`. Instead of:

```yaml
dashboards:
  ceph:
    ceph-cluster:
      gnetId: 2842
      revision: 14
      datasource: Prometheus
```

try using a `url`, and notice the extra `.` after `grafana.com` (the trailing dot makes the hostname fully qualified, so cluster DNS search domains are skipped when it is resolved from inside the pod):

```yaml
dashboards:
  ceph:
    ceph-cluster:
      url: https://grafana.com./api/dashboards/2842/revisions/14/download
      datasource: Prometheus
```

The url format is `https://grafana.com./api/dashboards/<gnetId>/revisions/<revision>/download`. For reference, check the source: helm-charts/charts/grafana/templates/configmap.yaml, lines 52 to 82 at 589022e.
Don't forget, you'll also need the corresponding entries under `dashboardProviders`:

```yaml
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'ceph'
        folder: 'Ceph'
        orgId: 1
        type: file
        disableDeletion: true
        allowUiUpdates: false
        options:
          path: /var/lib/grafana/dashboards/ceph
      - name: 'nginx'
        folder: 'NginX'
        orgId: 1
        type: file
        disableDeletion: true
        allowUiUpdates: false
        options:
          path: /var/lib/grafana/dashboards/nginx

# dashboards per provider, use `dashboardProviders.*.providers[].name` as key.
dashboards:
  ceph:
    ceph-cluster:
      url: https://grafana.com./api/dashboards/2842/revisions/14/download
      datasource: Prometheus
  nginx:
    nginx-ingress:
      url: https://grafana.com./api/dashboards/9614/revisions/1/download
      datasource: Prometheus
```
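As a usage note, three names have to line up for each provider: the provider `name` (`ceph`), the directory in `options.path` (`/var/lib/grafana/dashboards/ceph`), and the key under `dashboards:` (`ceph`). If they disagree, Grafana watches a directory the chart never populates, and the dashboards silently never show up.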
Hi, please can you elaborate on what you did exactly to fix this? How did you rectify the dashboard files so they could be found?
Hey, did anyone solve this?
Chart: 6.2.1
Basically the issue is the same as here: helm/charts#22464
Files do appear in the volume but are 0 bytes.
I can write into them myself and read the data back, so the volume itself is OK.
I can import these dashboards via Grafana; it is only the provisioning that fails.
None of the pods show any errors regarding failed downloads.
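If the download keeps producing empty files, one way to take the curl init container out of the equation is to provision dashboards from a ConfigMap you manage yourself, via the chart's `dashboardsConfigMaps` value. A minimal sketch, assuming a provider named `default` (the ConfigMap name and dashboard JSON are hypothetical):

```yaml
# A self-managed ConfigMap holding the dashboard JSON
# (name and contents are hypothetical).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-grafana-dashboards
data:
  my-dashboard.json: |
    { "title": "My Dashboard", "schemaVersion": 16, "panels": [] }
```

Then point the chart at it in values.yaml, keyed by provider name:

```yaml
dashboardsConfigMaps:
  default: my-grafana-dashboards
```

Since nothing is downloaded at startup, this should sidestep the failure mode discussed in this issue.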