3scale provisioning fails due to missing PVs #22

Open
ikke-t opened this issue Nov 10, 2017 · 2 comments

ikke-t commented Nov 10, 2017

Hi,

I ran the 3scale PV creation in part A, then moved on to part B, and I am left with a lot of faulty pods.

$ oc create -f support/amptemplates/pv.yml
persistentvolume "pv01" created
persistentvolume "pv02" created
persistentvolume "pv03" created
persistentvolume "pv04" created

$ oc status -v

Errors:
  * pod/backend-cron-1-jt4vh is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-cron-1-jt4vh -c backend-cron
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/backend-redis-1-5xwzx is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-redis-1-5xwzx -c backend-redis
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/backend-worker-1-n3f22 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-worker-1-n3f22 -c backend-worker
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-mysql-1-g88h5 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-mysql-1-g88h5 -c system-mysql
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-redis-1-20gzb is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-redis-1-20gzb -c system-redis
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-sidekiq-1-x3fq9 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-sidekiq-1-x3fq9 -c system-sidekiq
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-sphinx-1-wskr8 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-sphinx-1-wskr8 -c system-sphinx
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    

Warnings:
  * pod/apicast-production-1-dfs9g has restarted within the last 10 minutes
  * pod/system-app-1-hook-pre has restarted within the last 10 minutes
  * container "system-resque" in pod/system-resque-1-g7tjn has restarted within the last 10 minutes
  * container "system-scheduler" in pod/system-resque-1-g7tjn has restarted within the last 10 minutes

Info:
  * pod/apicast-production-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/apicast-production-1-deploy --liveness ...
  * pod/backend-redis-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/backend-redis-1-deploy --liveness ...
  * pod/system-app-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-app-1-deploy --liveness ...
  * pod/system-app-1-hook-pre has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-app-1-hook-pre --liveness ...
  * pod/system-mysql-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-mysql-1-deploy --liveness ...
  * pod/system-redis-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-redis-1-deploy --liveness ...
  * dc/backend-cron has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-cron --readiness ...
  * dc/backend-cron has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-cron --liveness ...
  * dc/backend-worker has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-worker --readiness ...
  * dc/backend-worker has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-worker --liveness ...
  * dc/system-resque has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-resque --readiness ...
  * dc/system-resque has no liveness probe to verify pods are still running.
    try: oc set probe dc/system-resque --liveness ...
  * dc/system-sidekiq has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sidekiq --readiness ...
  * dc/system-sidekiq has no liveness probe to verify pods are still running.
    try: oc set probe dc/system-sidekiq --liveness ...
  * dc/system-sphinx has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sphinx --readiness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.


ikke-t commented Nov 10, 2017

Sorry, I just noticed that the cluster I work with doesn't have working persistent volumes.
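
A quick way to confirm that (a sketch, assuming the project is threescaleonprem as in the status output above) is to check that the PVs exist and that the 3scale claims actually bind:

$ oc get pv
$ oc get pvc -n threescaleonprem
$ oc describe pod backend-redis-1-5xwzx -n threescaleonprem

If any claim stays Pending instead of Bound, the pods that mount it will keep failing as shown above; the events section of the describe output shows the binding or mount errors.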
