I am trying to set up Rundeck to leverage Kubernetes, specifically setting up nodes that live on K8s for executions. I have a very basic Linux pod running that the Rundeck server sees as a node, and I am able to select it as a node in a given job. The basic configuration for K8s works just fine. All of this is done using the "incluster" auth method.
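As a sanity check that the in-cluster auth itself works, I use a rough sketch like this from inside the Rundeck container (it uses the official kubernetes Python client, which I believe the plugin also relies on; the namespace is a placeholder):

```python
# Minimal in-cluster sanity check: list pods using the pod's service account.
# Assumption: the "kubernetes" Python client is available in the container.
from kubernetes import client, config

# Use the in-cluster service account token instead of a kubeconfig file.
config.load_incluster_config()

v1 = client.CoreV1Api()
# "mynamespace" is a placeholder for the namespace the runner pod lives in.
for pod in v1.list_namespaced_pod(namespace="mynamespace").items:
    print(pod.metadata.name, pod.status.phase)
```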
Where things get weird is when I execute on the nodes.
At first I tried the plain "CMD" and "script" executors, but I ran into problems where these do not respect the "incluster" env var and instead try to look for a kubectl config file.
I was able to get around this by using the K8s plugin steps. Specifically, I tested with Kubernetes / Pods / Execute Command, letting the node detail populate the pod and namespace info. That works just fine for running commands on the nodes.
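My understanding (an assumption on my part, not verified against the plugin source) is that this step boils down to a single exec call over the Kubernetes API, roughly like:

```python
# Rough sketch of what I believe the Execute Command step does under the hood:
# run one command in the target pod/container over the Kubernetes exec API.
# Pod, namespace, and container names below are the ones from my node.
from kubernetes import client, config
from kubernetes.stream import stream

config.load_incluster_config()
api = client.CoreV1Api()

output = stream(
    api.connect_get_namespaced_pod_exec,
    name="rundeck-runner-linux-randomid",
    namespace="mynamespace",
    container="rundeck-runner-linux",
    command=["/bin/sh", "-c", "hostname"],
    stderr=True, stdin=False, stdout=True, tty=False,
)
print(output)
```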
So from there I started trying Kubernetes / Pods / Execute Script, as I need to execute more than just single-line commands.
That's when things went wrong...
When trying to run the script, I get the following error:
[Kubernetes-InlineScript-Step] executing: [python, -u, /home/rundeck/libext/cache/kubernetes-plugin-2.0.8/pods-run-script.py, ${config.name}]
DEBUG: kubernetes-model-source: --------------------------
DEBUG: kubernetes-model-source: Pod Name: rundeck-runner-linux-randomid
DEBUG: kubernetes-model-source: Namespace: mynamespace
DEBUG: kubernetes-model-source: Container: rundeck-runner-linux
DEBUG: kubernetes-model-source: --------------------------
DEBUG: kubernetes-model-source: --------------------------
DEBUG: kubernetes-model-source: Pod Name: rundeck-runner-linux-randomid
DEBUG: kubernetes-model-source: Namespace: mynamespace
DEBUG: kubernetes-model-source: Container: rundeck-runner-linux
DEBUG: kubernetes-model-source: --------------------------
DEBUG: kubernetes-model-source: coping script from /tmp/tmp8p15amnx to /tmp/tmp8p15amnx
DEBUG: kubernetes-model-source: setting permissions ['chmod', '+x', '/tmp/tmp8p15amnx']
chmod: cannot access '/tmp/tmp8p15amnx': No such file or directory
DEBUG: kubernetes-model-source: running script ['/bin/bash', '/tmp/tmp8p15amnx']
/bin/bash: /tmp/tmp8p15amnx: No such file or directory
ERROR: kubernetes-plugin: Failed to run command
ERROR: kubernetes-plugin: Reason: NonZeroExitCode
ERROR: kubernetes-plugin: Message: command terminated with non-zero exit code: Error executing in Docker Container: 127
ERROR: kubernetes-plugin: Details: {"reason": "ExitCode", "message": "127"}
ERROR: kubernetes-model-source: error running script
[Kubernetes-InlineScript-Step]: result code: 1
Failed: NonZeroResultCode: Script result code was: 1
What I am struggling with is that there are no details about why it's failing. From what I can tell, something goes wrong during the copy step, which makes the chmod and execute actions both fail because the file doesn't exist. It's also strange that even though errors happen, the plugin continues to execute, which makes it even harder to determine where the problem is occurring.
At this point I am a bit stuck, as the plugin hides most of what is happening, and since these are temp files there is nothing for me to look at on the instance.
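In case it helps to reproduce this outside the plugin: I assume the copy step streams the temp script into the pod over the exec API, and a sketch like the following (placeholder names and paths, not the plugin's actual code) should show whether a file pushed that way actually lands in the container:

```python
# Sketch: push a script body into the pod over the exec API, then check that
# it exists. This is my assumption of an equivalent mechanism to the plugin's
# copy step, not its actual implementation.
from kubernetes import client, config
from kubernetes.stream import stream

config.load_incluster_config()
api = client.CoreV1Api()

script_text = "#!/bin/bash\necho hello from the pod\n"

# Stream the script to `cat` inside the container, writing it to /tmp.
ws = stream(
    api.connect_get_namespaced_pod_exec,
    name="rundeck-runner-linux-randomid",
    namespace="mynamespace",
    container="rundeck-runner-linux",
    command=["sh", "-c", "cat > /tmp/test-script.sh"],
    stderr=True, stdin=True, stdout=True, tty=False,
    _preload_content=False,
)
ws.write_stdin(script_text)
ws.close()

# Then verify whether the file actually exists in the container.
print(stream(
    api.connect_get_namespaced_pod_exec,
    name="rundeck-runner-linux-randomid",
    namespace="mynamespace",
    container="rundeck-runner-linux",
    command=["ls", "-l", "/tmp/test-script.sh"],
    stderr=True, stdin=False, stdout=True, tty=False,
))
```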
Some final notes:
The Rundeck server is running as a Docker container and is also deployed to K8s.
I have a service account configured (the "incluster" method) that has full permissions, just to rule out permission errors (the access-review sketch after these notes double-checks this).
The K8s Execute Command step works just fine, which tells me permissions and connectivity aren't the problem.
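For completeness, this is the kind of access review I mean (a sketch assuming the in-cluster service account is the one the plugin uses; it asks the API server whether the account may create pods/exec, which both the command and script steps need):

```python
# Ask the API server whether the current service account may create pods/exec.
from kubernetes import client, config

config.load_incluster_config()
auth = client.AuthorizationV1Api()

review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            namespace="mynamespace",  # placeholder namespace
            verb="create",
            resource="pods",
            subresource="exec",
        )
    )
)
print(auth.create_self_subject_access_review(review).status.allowed)
```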
Any help on this issue would be much appreciated.