
Terraform apply logs #163

Closed
dormullor opened this issue May 25, 2023 · 11 comments · Fixed by #258 · May be fixed by #248
Labels
enhancement New feature or request needs:triage

Comments

@dormullor

What problem are you facing?

It would be helpful to see the terraform apply logs to get the progress for the Workspace

How could Official Terraform Provider help solve your problem?

Sometimes, terraform apply can take a while (10–20 minutes), and during this time it is hard to tell what the status of the terraform apply command is.

It would be great to output the terraform apply logs to the user to get more visibility.

@dormullor dormullor added enhancement New feature or request needs:triage labels May 25, 2023
@bobh66
Collaborator

bobh66 commented Nov 8, 2023

I think this would require something like a sidecar container that could attach to the process output and stream the logs "somewhere".
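A rough sketch of that idea, assuming the provider's CLI output could be redirected to a file on a shared volume (the image names, paths, and redirection mechanism here are all hypothetical, not something the provider supports today):

```yaml
# Hypothetical sidecar that streams Terraform CLI logs from a shared
# emptyDir volume to its own stdout, where a cluster log agent (e.g.
# Fluent Bit) could pick them up like any other container log.
apiVersion: v1
kind: Pod
metadata:
  name: provider-terraform
spec:
  containers:
    - name: provider-terraform
      image: example.com/provider-terraform:latest  # placeholder image
      volumeMounts:
        - name: tf-logs
          mountPath: /var/log/terraform
    - name: log-streamer
      image: busybox:1.36
      # Create the file up front so tail has something to follow.
      command:
        - sh
        - -c
        - "touch /var/log/terraform/terraform.log && tail -f /var/log/terraform/terraform.log"
      volumeMounts:
        - name: tf-logs
          mountPath: /var/log/terraform
          readOnly: true
  volumes:
    - name: tf-logs
      emptyDir: {}
```

The catch, as noted later in this thread, is that the provider does not currently write the CLI output anywhere a sidecar could read it.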

@suramasamy
Contributor

suramasamy commented Feb 8, 2024

@bobh66 @ytsarev I hope this message finds you well. We have observed that this particular issue has garnered significant interest and upvotes.

Presently, debugging Terraform issues is challenging as the provider-terraform exposes only brief errors. It would be beneficial to have the option to write CLI logs to a file if necessary.

We tried the sidecar container approach, but since the CLI logs are not stored anywhere, it is difficult to read them from the sidecar container. There is a possibility that we could use strace to intercept the terraform process, but that requires escalated privileges and may have performance implications.

We propose exposing a field called "logPath" in the provider configuration or workspace. If a value is provided for this field, all Terraform plan (if changes exist), apply, and destroy CLI logs can be directed to that file. We can probably use a DeploymentRuntimeConfig to attach a volume to the pod, enabling the writing of the file to persistent storage like EBS.
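A hedged sketch of what the proposed API could look like. The `logPath` field does not exist yet; it is exactly the addition being proposed here, shown on a Workspace following provider-terraform's existing CRD shape:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: example
spec:
  forProvider:
    source: Inline
    module: |
      output "hello" {
        value = "world"
      }
    # Proposed (not yet existing) field: direct terraform plan/apply/
    # destroy CLI output to this file, e.g. on a mounted volume.
    logPath: /var/log/terraform/example.log
  providerConfigRef:
    name: default
```

Whether the field belongs on the Workspace (per-resource log files) or on the ProviderConfig (one setting for all Workspaces) is part of what is being discussed.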

We are keen to contribute to the implementation of this feature. Please inform us if we may proceed with the mentioned approach or if you prefer alternative solutions.

Thank you.

@ytsarev
Member

ytsarev commented Feb 14, 2024

@suramasamy I think a local log file is a good start to tackle this long-term issue 👍 Thank you so much for your willingness to contribute the solution, I am keen to review the PR!

@bobh66
Collaborator

bobh66 commented Feb 14, 2024

One concern when generating files is cleanup - would you expect that the file(s) would be deleted automatically when the Workspace is deleted?

@suramasamy
Contributor

Thank you @ytsarev, we will create the PR and let you know.
@bobh66 Yes, we can add logic to delete the files when the Workspace is deleted.

@ccrockatt

We are starting to consider how these logs can be viewed by Kubernetes users as well as from CI/CD systems once they are stored to files. If there are preferred approaches, please share them so that we can consider them in our brainstorming.

@balu-ce

balu-ce commented Mar 18, 2024

If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?

@PavelPikat

PavelPikat commented Mar 18, 2024

Could these logs not be sent to OTEL collector? Then the operator could configure any backend for logs - Grafana Loki, ELK, Datadog etc.

With OTEL we would also get a standard way of passing metadata: attributes and labels such as Workspace name, namespace, ID, etc.
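If the file-based approach lands first, one way to bridge to OTEL could be tailing those files with a collector. A sketch using the contrib filelog receiver (the log path, attribute values, and backend endpoint are assumptions for illustration):

```yaml
# Hypothetical OpenTelemetry Collector config: tail the Terraform log
# files written by the provider pod and export them over OTLP, so any
# backend (Loki, ELK, Datadog, ...) can be wired in behind the collector.
receivers:
  filelog:
    include:
      - /var/log/terraform/*.log
processors:
  resource:
    attributes:
      - key: service.name
        value: provider-terraform
        action: upsert
exporters:
  otlp:
    endpoint: otel-backend:4317  # assumed backend address
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resource]
      exporters: [otlp]
```

Per-Workspace attributes (name, namespace, ID) would still have to come from somewhere, e.g. encoded in the log file path and extracted by a filelog operator.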

@bobh66
Collaborator

bobh66 commented Mar 18, 2024

If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?

EBS and EFS are storage CSIs so they would be invisible to the pod because all it sees is the associated PVC. S3 would require a sidecar or additional AWS-specific code so yes that would be cloud-specific.
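For the CSI route, a sketch of mounting a PVC into the provider pod via a DeploymentRuntimeConfig (the claim name, mount path, and that the PVC is pre-created are assumptions):

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-terraform-logs
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        spec:
          containers:
            # "package-runtime" is the conventional container name
            # targeted by DeploymentRuntimeConfig patches.
            - name: package-runtime
              volumeMounts:
                - name: tf-logs
                  mountPath: /var/log/terraform
          volumes:
            - name: tf-logs
              persistentVolumeClaim:
                claimName: tf-logs  # assumed pre-created PVC
```

The provider only sees the PVC; whether it is backed by EBS, EFS, or any other CSI driver stays transparent to the pod, which is what keeps this part cloud-agnostic.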

@rvnyk

rvnyk commented Mar 18, 2024

EBS and EFS are storage CSIs so they would be invisible to the pod because all it sees is the associated PVC. S3 would require a sidecar or additional AWS-specific code so yes that would be cloud-specific.

The implementation in this PR, which sends CLI logs to a log file within the workspace folder, is not cloud-specific. How the logs are then exported out of the workDir, whether to the platform's logging provider (via agents) or streamed to external file storage, is up to the user. Does this address the concern about the implementation being cloud-specific? Please let me know if there are any other concerns. Thanks.

@suramasamy
Contributor

@bobh66 @ytsarev Could you please look at this PR for this issue when you get a chance?
