
Unable to mount volumes for pod logging/elasticsearch #5

Open

gsaslis opened this issue Jul 31, 2017 · 6 comments

gsaslis commented Jul 31, 2017

Hey there,

Thanks for putting all this together!! It was exactly what I was looking for!

Originally, I used kubespray's efk_enabled flag (by the way, you may want to do a "replace all" here, after the project's recent rename), just as you suggest in section 2 of the README. That worked fine, but:
a. I had an issue with KIBANA_BASE_URL, which I probably need to raise over there, and
b. they're still using the old 2.4.x versions of ES/Kibana.

So I wanted to give your kubectl apply -f logging approach a go, but I ran into an issue with the PVC you have there.

Here's the error message:

Unable to mount volumes for pod "elasticsearch-1832401789-f41vb_logging(5cddff81-75fa-11e7-ba5a-0019994e86b3)": timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]

Was I supposed to have set up some dynamic volume provisioning for this to work?
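For anyone hitting the same wall: a quick way to tell whether the claim is the problem is to check if the PVC ever binds. These are standard kubectl commands; the logging namespace comes from the error above, and the claim name from the repo's manifest:

# A claim stuck in "Pending" means no PersistentVolume or default
# StorageClass was available to satisfy it.
kubectl get pvc -n logging
kubectl describe pvc es-pv-claim -n logging

# See what volumes and storage classes the cluster actually offers.
kubectl get pv
kubectl get storageclass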

gregbkr (Owner) commented Jul 31, 2017

Hi @gsaslis, you're welcome!
It looks like your volume claim never bound. Here is the claim for reference:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pv-claim
  labels:
    app: elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Good luck!
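For completeness: this claim only binds if the cluster can supply a matching volume, either via a dynamic provisioner or a manually created PV. A minimal sketch of a static PV that would satisfy it (the hostPath and its location are illustrative assumptions, not part of the repo):

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv
  labels:
    app: elasticsearch
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  # hostPath only suits single-node or test clusters; the path is
  # an illustrative assumption.
  hostPath:
    path: /mnt/data/es-pv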

Techs-Y commented Aug 22, 2017

Same issue here. There is no PV created for the PVC anywhere in the playbooks.


selvik commented Dec 6, 2017

@gsaslis @Techs-Y Did the tip above help fix the PV/PVC issue for you?

gsaslis (Author) commented Jan 30, 2018

@selvik I think my problem back then was that I didn't have dynamic provisioning set up, so I ended up having to manually add the StorageClass myself.

@gregbkr do you think it would make sense to add an example like this to your repo?
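For reference, something along these lines: a minimal sketch of a default StorageClass. The provisioner is cloud-specific (AWS EBS is shown purely as an example), so it would need adjusting per cluster:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marking this class as default lets PVCs without an explicit
    # storageClassName (like es-pv-claim above) be provisioned.
    storageclass.kubernetes.io/is-default-class: "true"
# Cloud-specific; kubernetes.io/aws-ebs is just an example here.
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2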

gregbkr (Owner) commented Jan 30, 2018

@gsaslis: sure, please make a pull request with the documentation addition and I will merge it. I don't have an environment to test with at the moment, so sorry I can't help much more.
Thank you for your help!


sahil-sharma commented Feb 16, 2018

Hello,
I ran into the same issue (the volume failed to mount), but as suggested I commented out the volume part of the elasticsearch-deployment.yaml file. After that,
# kubectl apply -f logging
worked fine, and I got access to the Kibana dashboard and ES on :30200.
From the Kibana dashboard, however, I am unable to do this step (as you suggested): check that logs are coming into Kibana, refresh, select Time-field name: @timestamp, and click create.
Also, if my cluster is on a cloud, how would I load this file (Management > Saved Objects > Import > logging/dashboards/elk-v1.json)? Any hints on this?
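One approach that should work on a cloud cluster is reaching the Kibana UI over a kubectl tunnel and importing the JSON there. A sketch, assuming standard kubectl; the service name and port are assumptions, so check kubectl get svc -n logging for the real ones:

# Forward a local port to the Kibana service inside the cluster.
# (Service name/port are assumptions; verify with: kubectl get svc -n logging)
kubectl port-forward svc/kibana -n logging 5601:5601

# Then browse to http://localhost:5601 and import the dashboard via
# Management > Saved Objects > Import > logging/dashboards/elk-v1.json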
I moved on to the next step: Monitoring.
First, there are two folders in your repo named monitoring and monitoring2. What's the difference?
When I ran:
# kubectl apply -f monitoring
I got an error related to the node-exporter image: the image in the manifest is no longer available. I updated it to image: node-exporter:v0.15.2 and it worked.
But when I try to access the Grafana page there are no logs, and more surprising to me, the fluentd pods are not running and are stuck in CrashLoopBackOff:
[screenshot: fluentd pods in CrashLoopBackOff]
Running kubectl describe on a fluentd pod gives me:
[screenshot: kubectl describe output]
ERROR: Back-off restarting failed container
I don't know what is happening. Can anyone suggest something?
Thanks in advance!
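In case it helps anyone debugging the same crash loop, the usual first steps are below. The pod name is a placeholder, and the namespace is an assumption; adjust to wherever the fluentd pods actually run:

# Logs from the currently crashing container.
kubectl logs <fluentd-pod-name> -n logging

# Logs from the previous attempt, often more informative after a crash.
kubectl logs <fluentd-pod-name> -n logging --previous

# The Events section at the bottom usually names the failing step.
kubectl describe pod <fluentd-pod-name> -n logging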
