diff --git a/examples/multitenancy/application-hosting/odoo/OdooService.yaml b/examples/multitenancy/application-hosting/odoo/OdooService.yaml
deleted file mode 100644
index 63e0c25f..00000000
--- a/examples/multitenancy/application-hosting/odoo/OdooService.yaml
+++ /dev/null
@@ -1,48 +0,0 @@
-apiVersion: workflows.kubeplus/v1alpha1
-kind: ResourceComposition
-metadata:
-  name: odooservice-res-composition
-spec:
-  # newResource defines the new CRD to be installed define a workflow.
-  newResource:
-    resource:
-      kind: OdooService
-      group: platformapi.kubeplus
-      version: v1alpha1
-      plural: odooservices
-    # URL of the Helm chart that contains Kubernetes resources that represent a workflow.
-    chartURL: file:///odoo-23.0.4.tgz
-    chartName: odoo-23.0.4.tgz
-  # respolicy defines the resource policy to be applied to instances of the specified custom resource.
-  respolicy:
-    apiVersion: workflows.kubeplus/v1alpha1
-    kind: ResourcePolicy
-    metadata:
-      name: odooservice-res-policy
-    spec:
-      resource:
-        kind: OdooService
-        group: platformapi.kubeplus
-        version: v1alpha1
-      policy:
-        # Add following requests and limits for the first container of all the Pods that are related via
-        # owner reference relationship to instances of resources specified above.
-        podconfig:
-          nodeSelector: values.nodeName
-  # resmonitor identifies the resource instances that should be monitored for CPU/Memory/Storage.
-  # All the Pods that are related to the resource instance through either ownerReference relationship, or all the relationships
-  # (ownerReference, label, annotation, spec properties) are considered in calculating the statistics.
-  # The generated output is in Prometheus format.
-  resmonitor:
-    apiVersion: workflows.kubeplus/v1alpha1
-    kind: ResourceMonitor
-    metadata:
-      name: odooservice-res-monitor
-    spec:
-      resource:
-        kind: OdooService
-        group: platformapi.kubeplus
-        version: v1alpha1
-      # This attribute indicates that Pods that are reachable through all the relationships should be used
-      # as part of calculating the monitoring statistics.
-      monitorRelationships: all
\ No newline at end of file
diff --git a/examples/multitenancy/application-hosting/odoo/steps.txt b/examples/multitenancy/application-hosting/odoo/steps.txt
index 90db8ac4..5c160c65 100644
--- a/examples/multitenancy/application-hosting/odoo/steps.txt
+++ b/examples/multitenancy/application-hosting/odoo/steps.txt
@@ -5,7 +5,7 @@ This example shows delivering Bitnami Odoo Helm chart as-a-service using KubePlu
 
 1. Download Odoo helm chart from Bitnami:
    $ helm repo add bitnami https://charts.bitnami.com/bitnami
-   $ helm pull bitnami/odoo
+   $ helm pull bitnami/odoo --version 23.0.4
 
 2. Install KubePlus and setup KubePlus kubectl plugins:
    - Create provider kubeconfig:
@@ -24,17 +24,11 @@ This example shows delivering Bitnami Odoo Helm chart as-a-service using KubePlu
    - Wait till KubePlus Pod is Running
      $ kubectl get pods -A
 
-   - Setup KubePlus kubectl plugins
-     $ wget https://github.com/cloud-ark/kubeplus/blob/master/kubeplus-kubectl-plugins.tar.gz
-     $ gunzip kubeplus-kubectl-plugins.tar.gz
-     $ tar -xvf kubeplus-kubectl-plugins
-     $ export KUBEPLUS_HOME=`pwd`
-     $ export PATH=$KUBEPLUS_HOME/plugins:$PATH
 
 3. Create OdooService API wrapping the Helm chart:
    - Check odoo-service-composition-localchart.yaml. Notice that we are specifying the odoo chart
      from a file system based path. So first we have to upload this chart to KubePlus Pod.
-   $ kubectl upload chart odoo-23.0.4.tgz
+   $ kubectl upload chart odoo-23.0.4.tgz kubeplus-saas-provider.json
    $ kubectl create -f odoo-service-composition-localchart.yaml --kubeconfig=kubeplus-saas-provider.json
    $ kubectl get crds --kubeconfig=kubeplus-saas-provider.json
    - verify that odooservice crd has been created
@@ -44,8 +38,8 @@ This example shows delivering Bitnami Odoo Helm chart as-a-service using KubePlu
 4. Download the consumer kubeconfig file:
    - Direct
      $ kubectl get configmaps kubeplus-saas-consumer-kubeconfig -n $KUBEPLUS_NS -o jsonpath="{.data.kubeplus-saas-consumer\.json}" > consumer.conf
-   - Using kubeplus plugin (use when working on GKE)
-     $ kubectl retrieve kubeconfig consumer default -s https://$server:443 -k kubeplus-saas-provider.json > consumer.conf
+   - Using kubeplus plugin
+     $ kubectl retrieve kubeconfig consumer default -s $server -k kubeplus-saas-provider.json > consumer.conf
 
 5. Check permissions for provider and consumer service accounts, which are created by KubePlus:
    $ kubectl auth can-i --list --as=system:serviceaccount:default:kubeplus-saas-provider
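Note: once the odooservice CRD is registered, consumers create OdooService instances against it. A rough
sketch of such an instance is below. The apiVersion and kind follow the ResourceComposition above, but the
spec fields are illustrative placeholders for the odoo chart's values.yaml attributes; generate the
authoritative skeleton with: $ kubectl man OdooService -k kubeplus-saas-provider.json

  # Sketch only -- spec fields are hypothetical stand-ins for chart values
  apiVersion: platformapi.kubeplus/v1alpha1
  kind: OdooService
  metadata:
    name: sample-odoo
  spec:
    # attribute names here are assumptions; confirm against kubectl man output
    odooEmail: user@example.com
    odooPassword: changeme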
diff --git a/examples/multitenancy/application-hosting/supabase/steps.txt b/examples/multitenancy/application-hosting/supabase/steps.txt
index c3f2018d..db0ecbb3 100644
--- a/examples/multitenancy/application-hosting/supabase/steps.txt
+++ b/examples/multitenancy/application-hosting/supabase/steps.txt
@@ -4,21 +4,30 @@ Pre-requisites:
 - KubePlus kubectl plugins are available on PATH
 
-Test Supase creation:
+Test Supabase creation:
 - helm repo add bitnami https://charts.bitnami.com/bitnami
-- helm pull bitnami/supabase
-- Open Supabase.yaml and change chartURL to the version of supabase tgz that you received from helm pull.
+- helm pull bitnami/supabase --version 0.1.4
+- kubectl upload chart supabase-0.1.4.tgz kubeplus-saas-provider.json
+- python3 ../../../../provider-kubeconfig.py update default -p supabase-perms.json
 - kubectl create -f Supabase.yaml --kubeconfig=kubeplus-saas-provider.json
 - kubectl get resourcecompositions
 - kubectl describe resourcecomposition supabase-res-composition
-- kubectl get crds
+- Verify that supabases crd is registered in the cluster
+  - kubectl get crds
 - kubectl man Supabase -k kubeplus-saas-provider.json > sample-supabase.yaml
 - kubectl create -f sample-supabase.yaml
 - kubectl get supabases
 - kubectl describe supabase sample-supabase
 - kubectl get pods -A
+
 Cleanup:
+- Wait till the "Status" field is populated in the output of:
+  - kubectl describe supabase sample-supabase
 - kubectl delete supabase sample-supabase
 - kubectl get pods -A
 - kubectl delete -f Supabase.yaml --kubeconfig=kubeplus-saas-provider.json
+- Verify resourcecomposition is deleted
+  - kubectl get resourcecompositions
+- Verify all the Supabase CRDs are deleted
+  - kubectl get crds
diff --git a/examples/multitenancy/application-hosting/supabase/supabase-perms.json b/examples/multitenancy/application-hosting/supabase/supabase-perms.json
new file mode 100644
index 00000000..614f8aca
--- /dev/null
+++ b/examples/multitenancy/application-hosting/supabase/supabase-perms.json
@@ -0,0 +1 @@
+{"perms": {"": [{"secrets/resourceName::kptc-jwt": ["create", "delete", "get", "list", "patch", "update", "watch"]}]}}
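Note on supabase-perms.json: the provider-kubeconfig.py update call in the steps above merges these extra
permissions into the provider service account before the Supabase chart is deployed. On one plausible
reading of the entry -- the outer "" key as the core API group, and "secrets/resourceName::kptc-jwt" as
the secrets resource restricted to the name kptc-jwt -- it corresponds roughly to the RBAC rule sketched
below. This is a sketch of the intent, not the exact object the script creates; all names are illustrative.

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: kubeplus-saas-provider-extra-perms   # hypothetical name
  rules:
  - apiGroups: [""]             # the "" key, read as the core API group
    resources: ["secrets"]
    resourceNames: ["kptc-jwt"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
    # note: Kubernetes ignores resourceNames for the create verb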
-{"apiVersion": "v1", "kind": "Config", "users": [{"name": "kubeplus-saas-provider", "user": {"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IkRlczJaQ0l6Mk9xWW9MeWR6eUhHN2gwT3FFeWtHeGtGRU1zOUs3aTJyeEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVwbHVzLXNhYXMtcHJvdmlkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZXBsdXMtc2Fhcy1wcm92aWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZjNGNlMWFlLTNlYWYtNGRkNy04MmVjLTQ3NTY3ZWU0NWMzMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVwbHVzLXNhYXMtcHJvdmlkZXIifQ.h1DbP9eLK5gsS3kvcmb5prNcP3rMM96RhNvTg_98_EVNM0iRgjyOS7Nj4Cs5iq2CPqpbOTdU9ObXmRw4uGViTFO6WpYuRccgpUl9rdtAxd9McGvLzYfJY6DOiWhBNfGg8aNe2hD6OlMORc8hPhDqT21j97ZgQVDro3dSQERNd_pr7G3OLkGOm0T97j9CbZjfoOEv47eDw1s61AmT_SJJ6JRvMnP8xV2JVoElIRFe38QJys4UMEhrNvv0Z0IJGvIO2z3WGU2B_KS9bX9fDkio4fsJAUMwjX84kLIUBNa_vM58L1GpgZeqmPCKuJE9B3cRfLUr5gX16AOPk9qUoSMplQ"}}], "clusters": [{"cluster": {"server": "https://192.168.49.2:8443", "insecure-skip-tls-verify": true}, "name": "kubeplus-saas-provider"}], "contexts": [{"context": {"cluster": "kubeplus-saas-provider", "user": "kubeplus-saas-provider", "namespace": "default"}, "name": "kubeplus-saas-provider"}], "current-context": "kubeplus-saas-provider"} \ No newline at end of file diff --git a/examples/multitenancy/platform-engineering/steps.txt b/examples/multitenancy/platform-engineering/steps.txt index 8a92fc45..423f5e81 100644 --- a/examples/multitenancy/platform-engineering/steps.txt +++ b/examples/multitenancy/platform-engineering/steps.txt @@ -33,13 +33,6 @@ Platform Engineering team - Wait till KubePlus Pod is Running $ kubectl get pods -A - - Setup KubePlus kubectl plugins - $ wget https://github.com/cloud-ark/kubeplus/blob/master/kubeplus-kubectl-plugins.tar.gz - $ gunzip kubeplus-kubectl-plugins.tar.gz - $ tar -xvf kubeplus-kubectl-plugins - $ export KUBEPLUS_HOME=`pwd` - $ export PATH=$KUBEPLUS_HOME/plugins:$PATH - 4. Create CustomMysqlService API wrapping the Helm chart: - Check custom-mysql-service-composition-localchart.yaml. Notice that we are specifying our custom mysql chart from a file system based path. @@ -60,45 +53,54 @@ Platform Engineering team Product team ------------- -Setup kubectl kubectl plugins by following the corresponding steps mentioned in the above section. +1. Setup kubeplus kubectl plugins: + + $ wget https://github.com/cloud-ark/kubeplus/blob/master/kubeplus-kubectl-plugins.tar.gz + $ gunzip kubeplus-kubectl-plugins.tar.gz + $ tar -xvf kubeplus-kubectl-plugins + $ export KUBEPLUS_HOME=`pwd` + $ export PATH=$KUBEPLUS_HOME/plugins:$PATH -1. Check details of CustomMysqlService API: +2. Check details of CustomMysqlService API: $ kubectl explain CustomMysqlService --kubeconfig=consumer.conf $ kubectl explain CustomMysqlService.spec.mysql.auth --kubeconfig=consumer.conf -2. Retrieve sample CustomMysqlService resource: +3. Retrieve sample CustomMysqlService resource: $ kubectl man CustomMysqlService -k consumer.conf - this will show a sample custommysqlservice object in which the spec properties are attributes in the CustomMysql Helm chart's values.yaml file $ kubectl man CustomMysqlService -k consumer.conf > sample-custom-mysql.yaml -3. Create Custom MySQL instance: +4. Create Custom MySQL instance: - Open sample-custom-mysql.yaml and change the name to "prod-mysql" - Open sample-custom-mysql.yaml and update the username and password. 
diff --git a/examples/multitenancy/platform-engineering/steps.txt b/examples/multitenancy/platform-engineering/steps.txt
index 8a92fc45..423f5e81 100644
--- a/examples/multitenancy/platform-engineering/steps.txt
+++ b/examples/multitenancy/platform-engineering/steps.txt
@@ -33,13 +33,6 @@ Platform Engineering team
    - Wait till KubePlus Pod is Running
      $ kubectl get pods -A
 
-   - Setup KubePlus kubectl plugins
-     $ wget https://github.com/cloud-ark/kubeplus/blob/master/kubeplus-kubectl-plugins.tar.gz
-     $ gunzip kubeplus-kubectl-plugins.tar.gz
-     $ tar -xvf kubeplus-kubectl-plugins
-     $ export KUBEPLUS_HOME=`pwd`
-     $ export PATH=$KUBEPLUS_HOME/plugins:$PATH
-
 4. Create CustomMysqlService API wrapping the Helm chart:
    - Check custom-mysql-service-composition-localchart.yaml. Notice that we are specifying our
      custom mysql chart from a file system based path.
@@ -60,45 +53,54 @@ Platform Engineering team
 
 Product team
 -------------
-Setup kubectl kubectl plugins by following the corresponding steps mentioned in the above section.
+1. Setup kubeplus kubectl plugins:
+
+   $ wget https://github.com/cloud-ark/kubeplus/blob/master/kubeplus-kubectl-plugins.tar.gz
+   $ gunzip kubeplus-kubectl-plugins.tar.gz
+   $ tar -xvf kubeplus-kubectl-plugins
+   $ export KUBEPLUS_HOME=`pwd`
+   $ export PATH=$KUBEPLUS_HOME/plugins:$PATH
 
-1. Check details of CustomMysqlService API:
+2. Check details of CustomMysqlService API:
    $ kubectl explain CustomMysqlService --kubeconfig=consumer.conf
    $ kubectl explain CustomMysqlService.spec.mysql.auth --kubeconfig=consumer.conf
 
-2. Retrieve sample CustomMysqlService resource:
+3. Retrieve sample CustomMysqlService resource:
    $ kubectl man CustomMysqlService -k consumer.conf
    - this will show a sample custommysqlservice object in which the spec properties
     are attributes in the CustomMysql Helm chart's values.yaml file
    $ kubectl man CustomMysqlService -k consumer.conf > sample-custom-mysql.yaml
 
-3. Create Custom MySQL instance:
+4. Create Custom MySQL instance:
    - Open sample-custom-mysql.yaml and change the name to "prod-mysql"
    - Open sample-custom-mysql.yaml and update the username and password.
    $ kubectl create -f sample-custom-mysql.yaml --kubeconfig=consumer.conf
-   - verify that the MySQL Pod iscreated in a new namespace (kubectl get pods -A)
+   - verify that the MySQL Pod is created in a new namespace (kubectl get pods -A)
 
-4. Check the created resources:
+5. Check the created resources:
    $ kubectl appresources CustomMysqlService prod-mysql -k consumer.conf
    - this will show all the resources that KubePlus has created for the custom mysql instance
 
-5. Check logs:
+6. Check logs:
    $ kubectl applogs CustomMysqlService prod-mysql default -k consumer.conf
 
-6. Check metrics:
+7. Check metrics:
    $ kubectl metrics CustomMysqlService prod-mysql default -k consumer.conf
    $ kubectl metrics CustomMysqlService prod-mysql default -k consumer.conf -o prometheus
 
 Clean up:
 ---------
+Product team:
 $ kubectl delete -f sample-custom-mysql.yaml --kubeconfig=consumer.conf
+
+Platform Engineering team:
 $ kubectl delete -f custom-mysql-service-composition-localchart.yaml --kubeconfig=kubeplus-saas-provider.json
 
 Key Takeaways:
 --------------
-As Platform Engineering team, you don't want to share Helm charts with your product teams and then you won't
+As Platform Engineering team, you don't want to share Helm charts with your product teams as then you won't
 have any control on how the Helm chart values will be modified. Wrapping an API around your Helm chart,
 and sharing this API with the product teams solves this problem.
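Note: the sample-custom-mysql.yaml that the Product team edits in step 4 would look roughly like the
sketch below. The API group (platformapi.kubeplus) is assumed to follow the pattern of the other examples
in this repo, and the spec.mysql.auth block matches what step 2 inspects with kubectl explain; the exact
field names come from the chart's values.yaml, so treat these as placeholders and rely on the kubectl man
output for the real skeleton.

  apiVersion: platformapi.kubeplus/v1alpha1   # group is an assumption
  kind: CustomMysqlService
  metadata:
    name: prod-mysql
  spec:
    mysql:
      auth:
        username: prod-user         # placeholder; update before creating
        password: prod-password     # placeholder; update before creating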