Add image sync workflow #5

Status: Open. Wants to merge 55 commits into base: main.

Commits (55):
c332c8f  Add skopeo sync workflow (May 9, 2023)
bd8d12f  Trigger workflow on push (May 9, 2023)
2e4afcd  Fix filenames in test matrix (May 9, 2023)
d736820  Fix filenames in test matrix (May 9, 2023)
73be504  Fix manifest format (May 9, 2023)
a77d8fb  Fix missing '/' separators (May 9, 2023)
47d3179  Check skopeo version (May 9, 2023)
6c4e798  Test docker option (May 9, 2023)
5f7d688  Use containerised skopeo (May 9, 2023)
c9cfd65  Fix container version (May 9, 2023)
c156128  Run sync in container (May 9, 2023)
3ed77fb  Fix path (May 9, 2023)
50ca070  Checkout correct branch in workflow (May 9, 2023)
80473c7  Debug test (May 10, 2023)
f6c940a  Debug test (May 10, 2023)
3021f82  Debug test (May 10, 2023)
17d4d89  Fix container mount path (May 10, 2023)
5d18b20  Add check for changes (May 10, 2023)
e4bf910  Fix check for changes (May 10, 2023)
0b93f3f  Address requested changes (May 11, 2023)
88af872  Test output capture (May 11, 2023)
d9a0a0f  Fix typo (May 11, 2023)
126b4df  Write to file instead (May 11, 2023)
8f06178  Write stderr to file too (May 11, 2023)
d33dadb  Add error checking for sync output (May 11, 2023)
63982ca  Duplicate sync output with tee (May 11, 2023)
d0d9f05  Also pipe stderr to tee (May 11, 2023)
96705ca  Fix sha256 tag handling (May 11, 2023)
91ad805  Re-enable manifest diff checking (May 11, 2023)
32f5dd2  Re-enable manifest diff checking properly (May 11, 2023)
520515b  Add note about notebook images (May 15, 2023)
438b2e0  Add notebook images (May 15, 2023)
ad6a682  Formatting (May 15, 2023)
1de9be4  Shorten manifest names (May 25, 2023)
0b639fd  Update file list (May 25, 2023)
354de67  Comments and formatting (May 25, 2023)
a2a7b1d  Be explicit about defaulting to docker.io (May 30, 2023)
6222b28  Merge branch 'main' into skopeo (May 30, 2023)
231f93a  Update ignores (May 30, 2023)
d442ec6  Use stackhpc ghcr container images (May 30, 2023)
807ac24  Formatting (May 30, 2023)
d5da99e  Fix traefik image config (May 30, 2023)
29b285a  Fix container image paths (May 30, 2023)
c1a5f32  Fix ghcr image paths (May 30, 2023)
e8c502e  Formatting (May 30, 2023)
295f1ae  Comments and formatting (May 30, 2023)
65f97ce  Force ghcr images as kustomization instead (May 31, 2023)
0b7566a  Don't hard-code image tag (May 31, 2023)
7cc3fcd  Ignore vscode dir (May 31, 2023)
4900d5d  Remove explicit image tags (May 31, 2023)
34aac8b  Fix python & mysql image names (May 31, 2023)
724ef7e  Handle images defined in config maps (Jun 15, 2023)
3d86b3f  Update comments (Jun 15, 2023)
60b39ea  Fix linux sed usage (Jun 15, 2023)
6aa158d  Fix linux sed usage properly (Jun 15, 2023)
60 changes: 60 additions & 0 deletions .github/workflows/sync-images.yaml
@@ -0,0 +1,60 @@
name: sync images
on: push

jobs:
  sync_images:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        component:
          - daskhub
          - jupyterhub
          - kubeflow
    steps:
      - name: Check out the repository
        uses: actions/checkout@v3
        # TODO: Revert to main branch before merging
        with:
          ref: skopeo
Comment on lines +16 to +18 (Collaborator):
@sd109 We just need to do what this says :)

      # Uncomment for an SSH-able terminal session within the runner to aid debugging
      # - uses: actions/checkout@v2
      # - name: Setup upterm session
      #   uses: lhotari/action-upterm@v1

      - name: Check component manifest for changes
        uses: dorny/paths-filter@v2
        id: changes
        with:
          filters: |
            manifest:
              - skopeo-manifests/${{ matrix.component }}.yml

      # NOTE: Need skopeo > v1.09 to avoid this issue:
      # https://github.com/containers/skopeo/issues/1874
      # so use the containerized version

      - name: Sync component images
        id: image-sync
        if: ${{ steps.changes.outputs.manifest == 'true' }}
        run: |-
          docker run --entrypoint /usr/bin/env \
            -v $GITHUB_WORKSPACE:/home/skopeo/azimuth-charts \
            quay.io/skopeo/stable:v1.11 \
            skopeo sync \
              --src yaml \
              --dest docker \
              --dest-creds ${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} \
              --scoped \
              --all \
              /home/skopeo/azimuth-charts/skopeo-manifests/${{ matrix.component }}.yml \
              ghcr.io/stackhpc/azimuth-charts \
            |& tee sync-output.txt

      - name: Check output for any sync errors
        run: |
          ERR_COUNT=$(grep "level=error" sync-output.txt | wc -l)
          if [[ $ERR_COUNT -gt 0 ]]; then
            echo "Found $ERR_COUNT logged error messages in output of image sync step"
            exit 1
          fi
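The workflow feeds `skopeo sync --src yaml` a per-component file from `skopeo-manifests/`. Those manifests are not shown in this diff; the sketch below illustrates the general shape of skopeo's sync YAML (source registry as the top-level key, then repositories mapped to tag lists) using made-up repositories and tags, not the actual contents of this PR's manifests.

```yaml
# Hypothetical skopeo-manifests/daskhub.yml -- repositories and tags are examples only
docker.io:
  images:
    library/traefik:
      - "v2.10.1"
    jupyterhub/configurable-http-proxy:
      - "4.5.4"
quay.io:
  images:
    jetstack/cert-manager-controller:
      - "v1.11.0"
```

Because the workflow passes `--scoped`, each synced image keeps its source registry in the destination path, e.g. `ghcr.io/stackhpc/azimuth-charts/docker.io/library/traefik`, which is the naming convention the chart changes below rely on.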
5 changes: 3 additions & 2 deletions .gitignore
@@ -1,6 +1,7 @@
*/charts/*
*/Chart.lock
.vscode
# Ignore `helm dependency build` output
kubeflow-azimuth/kubeflow-azimuth-chart/**
kubeflow-azimuth/kubeflow-crds/**
kubeflow-azimuth/kubeflow-azimuth-chart
kubeflow-azimuth/kubeflow-crds
kubeflow-azimuth/kustomize-build-output.yml
13 changes: 13 additions & 0 deletions daskhub-azimuth/values.yaml
@@ -19,6 +19,10 @@ daskhub:
        String("image", default="pangeo/base-notebook:2022.05.10", label="Image"),
        handler=option_handler,
      )
  # Use stackhpc's ghcr image copy
  traefik:
    image:
      name: ghcr.io/stackhpc/azimuth-charts/docker.io/library/traefik

  jupyterhub:
    prePuller:
@@ -32,10 +36,16 @@ daskhub:
      service:
        type: ClusterIP
      chp:
        # Use stackhpc's ghcr image copy
        image:
          name: ghcr.io/stackhpc/azimuth-charts/docker.io/jupyterhub/configurable-http-proxy
        networkPolicy:
          enabled: false

    hub:
      # Use stackhpc's ghcr image copy
      image:
        name: ghcr.io/stackhpc/azimuth-charts/docker.io/jupyterhub/k8s-hub
      networkPolicy:
        enabled: false
      # Configure the authentication to respect the X-Remote-User header sent by Zenith from Azimuth
@@ -81,6 +91,9 @@ daskhub:
      c.JupyterHub.authenticator_class = RemoteUserAuthenticator

    singleuser:
      # Use stackhpc's ghcr image copy
      image:
        name: ghcr.io/stackhpc/azimuth-charts/docker.io/pangeo/base-notebook
      networkPolicy:
        enabled: false
      defaultUrl: /lab
9 changes: 9 additions & 0 deletions jupyterhub-azimuth/values.yaml
@@ -15,10 +15,16 @@ jupyterhub:
    service:
      type: ClusterIP
    chp:
      # Use stackhpc's ghcr image copy
      image:
        name: ghcr.io/stackhpc/azimuth-charts/docker.io/jupyterhub/configurable-http-proxy
      networkPolicy:
        enabled: false

  hub:
    # Use stackhpc's ghcr image copy
    image:
      name: ghcr.io/stackhpc/azimuth-charts/docker.io/jupyterhub/k8s-hub
    networkPolicy:
      enabled: false
    extraConfig:
@@ -63,6 +69,9 @@ jupyterhub:
    c.JupyterHub.authenticator_class = RemoteUserAuthenticator

  singleuser:
    # Use stackhpc's ghcr image copy
    image:
      name: ghcr.io/stackhpc/azimuth-charts/docker.io/jupyterhub/k8s-singleuser-sample
    networkPolicy:
      enabled: false
    defaultUrl: /lab
26 changes: 24 additions & 2 deletions kubeflow-azimuth/build-chart.sh
@@ -5,6 +5,28 @@ if [[ ! $(kustomize version) == *v5.* ]]; then
  echo "Please install a valid version then try again."
  exit 1
fi

OUTPUT_FILE=kustomize-build-output.yml
kustomize build overlay/ --output $OUTPUT_FILE

# NOTE(scott): kustomize image source patches don't capture the default
# notebook images used by the Kubeflow Jupyter web app, since these are
# defined within the data.'spawner_ui_config.yaml' field of the
# 'jupyter-web-app-config-xxxxxxx' ConfigMap.
# Use sed here to replace these images with their ghcr versions.
IMAGES=(
  "jupyter-scipy"
  "jupyter-pytorch-full"
  "jupyter-pytorch-cuda-full"
  "jupyter-tensorflow-full"
  "jupyter-tensorflow-cuda-full"
)
for image in "${IMAGES[@]}"; do
  # A suffix to the -i option is mandatory on MacOS sed, so write a
  # backup file and remove it afterwards
  sed -i.bak "s|kubeflownotebookswg/${image}|ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/${image}|g" $OUTPUT_FILE
  rm $OUTPUT_FILE.bak
done

# Convert kustomize output to helm chart directory structure
python3 to-helm-chart.py
# git add ../kubeflow-azimuth-chart
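The sed substitution in the loop above can be exercised on a single line. The input line and tag below are invented for illustration; only the substitution pattern matches the script:

```shell
# Hypothetical line from kustomize-build-output.yml (tag is made up)
line='image: kubeflownotebookswg/jupyter-scipy:v1.7.0'
image=jupyter-scipy
# Same substitution as the loop applies, run on stdin rather than in-place
echo "$line" | sed "s|kubeflownotebookswg/${image}|ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/${image}|g"
# → image: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-scipy:v1.7.0
```

Note the tag suffix is untouched, so the rewritten reference still pins whatever version the upstream manifest specified.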
127 changes: 126 additions & 1 deletion kubeflow-azimuth/overlay/kustomization.yaml
@@ -37,4 +37,129 @@ patches:
      value: autoscaling/v2
    target:
      kind: HorizontalPodAutoscaler
      version: v2beta2

# Use StackHPC's ghcr for relevant container images
images:
  - name: docker.io/istio/pilot
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/istio/pilot
  - name: docker.io/istio/proxyv2
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/istio/proxyv2
  - name: docker.io/kubeflowkatib/katib-controller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflowkatib/katib-controller
  - name: docker.io/kubeflowkatib/katib-db-manager
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflowkatib/katib-db-manager
  - name: docker.io/kubeflowkatib/katib-ui
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflowkatib/katib-ui
  - name: docker.io/kubeflownotebookswg/centraldashboard
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/centraldashboard
  - name: docker.io/kubeflownotebookswg/jupyter-web-app
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-web-app
  - name: docker.io/kubeflownotebookswg/kfam
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/kfam
  - name: docker.io/kubeflownotebookswg/notebook-controller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/notebook-controller
  - name: docker.io/kubeflownotebookswg/poddefaults-webhook
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/poddefaults-webhook
  - name: docker.io/kubeflownotebookswg/profile-controller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/profile-controller
  - name: docker.io/kubeflownotebookswg/tensorboard-controller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/tensorboard-controller
  - name: docker.io/kubeflownotebookswg/tensorboards-web-app
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/tensorboards-web-app
  - name: docker.io/kubeflownotebookswg/volumes-web-app
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/volumes-web-app
  - name: docker.io/metacontrollerio/metacontroller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/metacontrollerio/metacontroller
  - name: gcr.io/arrikto/kubeflow/oidc-authservice
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/arrikto/kubeflow/oidc-authservice
  - name: gcr.io/knative-releases/knative.dev/eventing/cmd/controller
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/eventing/cmd/controller
  - name: gcr.io/knative-releases/knative.dev/eventing/cmd/webhook
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/eventing/cmd/webhook
  - name: gcr.io/knative-releases/knative.dev/net-istio/cmd/controller
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/net-istio/cmd/controller
  - name: gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/activator
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/activator
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/controller
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/controller
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping-webhook
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping-webhook
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping
  - name: gcr.io/knative-releases/knative.dev/serving/cmd/webhook
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/knative-releases/knative.dev/serving/cmd/webhook
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/kubebuilder/kube-rbac-proxy
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/kubebuilder/kube-rbac-proxy
  - name: gcr.io/ml-pipeline/api-server
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/api-server
  - name: gcr.io/ml-pipeline/cache-server
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/cache-server
  - name: gcr.io/ml-pipeline/frontend
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/frontend
  - name: gcr.io/ml-pipeline/metadata-envoy
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/metadata-envoy
  - name: gcr.io/ml-pipeline/metadata-writer
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/metadata-writer
  - name: gcr.io/ml-pipeline/minio
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/minio
  - name: gcr.io/ml-pipeline/mysql
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/mysql
  - name: gcr.io/ml-pipeline/persistenceagent
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/persistenceagent
  - name: gcr.io/ml-pipeline/scheduledworkflow
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/scheduledworkflow
  - name: gcr.io/ml-pipeline/viewer-crd-controller
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/viewer-crd-controller
  - name: gcr.io/ml-pipeline/visualization-server
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/visualization-server
  - name: gcr.io/ml-pipeline/workflow-controller
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/ml-pipeline/workflow-controller
  - name: gcr.io/tfx-oss-public/ml_metadata_store_server
    newName: ghcr.io/stackhpc/azimuth-charts/gcr.io/tfx-oss-public/ml_metadata_store_server
  - name: docker.io/kserve/kserve-controller
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kserve/kserve-controller
  - name: docker.io/kserve/models-web-app
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kserve/models-web-app
  - name: docker.io/kubeflow/training-operator
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflow/training-operator
  - name: docker.io/kubeflownotebookswg/jupyter-pytorch-cuda-full
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-pytorch-cuda-full
  - name: docker.io/kubeflownotebookswg/jupyter-pytorch-full
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-pytorch-full
  - name: docker.io/kubeflownotebookswg/jupyter-scipy
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-scipy
  - name: docker.io/kubeflownotebookswg/jupyter-tensorflow-cuda-full
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-tensorflow-cuda-full
  - name: docker.io/kubeflownotebookswg/jupyter-tensorflow-full
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/kubeflownotebookswg/jupyter-tensorflow-full
  - name: mysql
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/library/mysql
  - name: python
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/library/python
  - name: quay.io/jetstack/cert-manager-cainjector
    newName: ghcr.io/stackhpc/azimuth-charts/quay.io/jetstack/cert-manager-cainjector
  - name: quay.io/jetstack/cert-manager-controller
    newName: ghcr.io/stackhpc/azimuth-charts/quay.io/jetstack/cert-manager-controller
  - name: quay.io/jetstack/cert-manager-webhook
    newName: ghcr.io/stackhpc/azimuth-charts/quay.io/jetstack/cert-manager-webhook
  - name: registry.k8s.io/coredns/coredns
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/coredns/coredns
  - name: registry.k8s.io/etcd
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/etcd
  - name: registry.k8s.io/kube-apiserver
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/kube-apiserver
  - name: registry.k8s.io/kube-controller-manager
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/kube-controller-manager
  - name: registry.k8s.io/kube-proxy
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/kube-proxy
  - name: registry.k8s.io/kube-scheduler
    newName: ghcr.io/stackhpc/azimuth-charts/registry.k8s.io/kube-scheduler
  - name: docker.io/tensorflow/tensorflow
    newName: ghcr.io/stackhpc/azimuth-charts/docker.io/tensorflow/tensorflow
Comment on lines +43 to +165 (Collaborator):
Some of these images belong to the underlying Kubernetes rather than kubeflow.

We should just grep the generated kubeflow manifests for image: and see which actually belong to kubeflow. Alternatively, we could do a diff before and after installing kubeflow to determine exactly which, but things like kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler shouldn't be in there.
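The reviewer's first suggestion (grep the generated manifests for image references) can be sketched as a one-liner. The sample file below is a made-up stand-in for the real kustomize-build-output.yml produced by build-chart.sh:

```shell
# Stand-in for the real kustomize-build-output.yml (contents are examples only)
cat > sample-manifests.yml <<'EOF'
      containers:
        - image: registry.k8s.io/kube-proxy:v1.26.0
        - image: docker.io/kubeflownotebookswg/jupyter-scipy:v1.7.0
        - image: docker.io/kubeflownotebookswg/jupyter-scipy:v1.7.0
EOF
# List the unique image references actually present in the manifests
grep -h 'image:' sample-manifests.yml | sed 's/^ *- *//' | sort -u
# → image: docker.io/kubeflownotebookswg/jupyter-scipy:v1.7.0
# → image: registry.k8s.io/kube-proxy:v1.26.0
```

Any image that shows up in this list but is really a cluster component (kube-apiserver, kube-proxy, etc.) could then be dropped from the kustomization's images list.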

47 changes: 24 additions & 23 deletions kubeflow-azimuth/to-helm-chart.py
@@ -1,24 +1,25 @@
import yaml, re, shutil
from pathlib import Path


def make_helm_chart_template(chart_path, chart_yml, values_yml):
    """Creates a template directory structure for a helm chart"""
    print("Creating Helm chart at", chart_path.absolute())
    # Remove any existing content at chart path
    # TODO: Add user confirmation and/or --force cmd line arg for deletion?
    try:
        shutil.rmtree(chart_path)
    except FileNotFoundError:
        pass
    # Create Helm chart directory structure
    chart_path.mkdir()
    (chart_path / "templates").mkdir()
    (chart_path / "crds").mkdir()
    # Write Chart.yaml
    with open(chart_path / "Chart.yaml", "w") as file:
        file.write(chart_yml)
    # Write values.yaml
    with open(chart_path / "values.yaml", "w") as file:
        file.write(values_yml)


@@ -57,40 +58,40 @@ def make_helm_chart_template(chart_path, chart_yml, values_yml):
    "required": []
}
"""
with open(main_chart_path / "values.schema.json", "w") as schema_file:
    schema_file.write(json_schema)

# Write manifest files
with open("kustomize-build-output.yml", "r") as input_file:
    # NOTE: Read input file as str instead of yaml to preserve newlines
    # all_manifests = yaml.load_all(input_file)
    all_manifests = input_file.read().split("\n---\n")

for i, manifest_str in enumerate(all_manifests):
    # Convert to yaml for field queries
    manifest = yaml.safe_load(manifest_str)

    # NOTE: CRDs and namespaces are placed in a separate sub-chart since trying to
    # bundle all manifests into a single helm chart creates a helm release secret
    # > 1MB, which etcd then refuses to store, so installation fails
    manifest_name = manifest["metadata"]["name"].replace(".", "-") + f"-{i+1}.yml"
    if manifest["kind"] == "CustomResourceDefinition":
        manifest_path = crd_chart_path / "crds" / manifest_name
    elif manifest["kind"] == "Namespace":
        manifest_path = crd_chart_path / "templates" / manifest_name
    else:
        manifest_path = main_chart_path / "templates" / manifest_name
    print(f"{i+1}.\t Writing {manifest_path}")

    # NOTE: Some manifest files have '{{' and '}}' instances in comments.
    # These need to be escaped so that helm doesn't try to template them.
    # Regex should match everything within a curly bracket that isn't a curly bracket itself.
    manifest_str = re.sub(
        r"{{([^\{\}]*)}}", r'{{ "{{" }}\1{{ "}}" }}', manifest_str
    )

    # Write manifest to file
    # NOTE: Avoid using yaml.dump here as it doesn't properly preserve multi-line
    # yaml blocks (e.g. key: | \n ...) and instead replaces all newlines with '\n'
    # inside blocks, making final manifests less readable.
    with open(manifest_path, "w") as output_file:
        output_file.write(manifest_str)
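The brace-escaping re.sub in to-helm-chart.py can be checked in isolation. This standalone sketch reproduces that exact pattern and replacement; the sample manifest comment is invented:

```python
import re

def escape_helm_braces(manifest_str: str) -> str:
    # Same pattern and replacement as to-helm-chart.py: wrap each literal
    # '{{ ... }}' occurrence so Helm emits it verbatim instead of templating it
    return re.sub(r"{{([^\{\}]*)}}", r'{{ "{{" }}\1{{ "}}" }}', manifest_str)

# Hypothetical manifest comment containing Go-template syntax
print(escape_helm_braces("# pod name is {{ .PodName }}"))
# → # pod name is {{ "{{" }} .PodName {{ "}}" }}
```

When Helm renders the escaped string, `{{ "{{" }}` and `{{ "}}" }}` evaluate back to literal `{{` and `}}`, so the original comment survives templating unchanged.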
