Lithops with Knative as serverless compute backend. Lithops also supports vanilla Knative for running applications. The easiest way to get it working is to create an IBM Kubernetes Service (IKS) cluster through the IBM dashboard. Alternatively, you can use your own Kubernetes cluster or a minikube installation.
Note that Lithops automatically builds the default runtime the first time you run a script. For this task it uses the docker command installed locally on your machine. If for some reason you can't install the Docker CE package locally, you must provide the docker_token parameter in the configuration. In this case, Lithops will use the Tekton service of your k8s cluster to build the default runtime and push it to your Docker Hub account, and you can omit steps 1 and 2.
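For reference, the cluster-side build option described above might look like this in the Lithops config file (a sketch; the exact placement of docker_token under the knative section is an assumption based on the description above):

```yaml
knative:
    # Access token used by Tekton to push the built runtime to Docker Hub
    docker_token: <Docker hub access TOKEN>
```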
- Login to your Docker Hub account:

  ```bash
  docker login
  ```
- Choose one of these two installation options:
- Access the IBM dashboard and create a new Kubernetes cluster. For testing purposes, the following setup is recommended:
  - Select Kubernetes version >= v1.16
- Select a single zone to place the worker nodes
- Master service endpoint: Public endpoint only
- Your cluster must have 3 or more worker nodes with at least 4 cores and 16GB RAM.
- No need to encrypt local disk
- Once the cluster is running, follow the instructions on the "Access" tab of the dashboard to configure the kubectl client on your local machine.
- In the dashboard of your cluster, go to the "Add-ons" tab and install Knative. It automatically installs Istio and Tekton.
- Install Kubernetes >= v1.16 and make sure the kubectl client is working.
- Install the helm Kubernetes package manager on your local machine. Instructions can be found here.
- Install the Knative environment into the k8s cluster:

  ```bash
  curl http://cloudlab.urv.cat/knative/install_env.sh | bash
  ```
- Make sure you have the ~/.kube/config file. Alternatively, you can set the KUBECONFIG environment variable:

  ```bash
  export KUBECONFIG=<path-to-kube-config-file>
  ```
- Edit your lithops config and add the following keys:

  ```yaml
  lithops:
      backend: knative
  ```
To configure Lithops to access a private repository in your Docker Hub account, you need to extend the Knative config and add the following keys:

```yaml
knative:
    ....
    docker_user: <Docker hub Username>
    docker_password: <Docker hub access TOKEN>
```
To configure Lithops to access a private repository in your IBM Container Registry, you need to extend the Knative config and add the following keys:

```yaml
knative:
    ....
    docker_server: us.icr.io
    docker_user: iamapikey
    docker_password: <IBM IAM API KEY>
```
Group | Key | Default | Mandatory | Additional info |
---|---|---|---|---|
knative | istio_endpoint | | no | Istio IngressGateway endpoint. Make sure to use the http:// prefix |
knative | kubecfg_path | | no | Path to the kubeconfig file. Mandatory if the config file is not in ~/.kube/config and the KUBECONFIG env var is not set |
knative | docker_server | https://index.docker.io/v1/ | no | Docker server URL |
knative | docker_user | | no | Docker hub username |
knative | docker_password | | no | Login to your Docker hub account and generate a new access token here |
knative | git_url | | no | Git repository to build the image |
knative | git_rev | | no | Git revision to build the image |
knative | runtime | | no | Docker image name |
knative | runtime_cpu | 0.5 | no | CPU limit. Default 0.5 vCPU |
knative | runtime_memory | 256 | no | Memory limit in MB. Default 256 MB |
knative | runtime_timeout | 600 | no | Runtime timeout in seconds. Default 600 seconds |
knative | runtime_min_instances | 0 | no | Minimum number of parallel runtime instances |
knative | runtime_max_instances | 250 | no | Maximum number of parallel runtime instances |
knative | runtime_concurrency | 1 | no | Number of workers inside a single runtime instance |
knative | invoke_pool_threads | {lithops.workers} | no | Number of concurrent threads used for invocation |
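Putting the options above together, a complete configuration might look like this (a sketch; the runtime image name and the memory/timeout values are illustrative placeholders, not recommendations):

```yaml
lithops:
    backend: knative

knative:
    docker_user: <Docker hub Username>
    docker_password: <Docker hub access TOKEN>
    runtime: <Docker hub Username>/lithops-knative-runtime
    runtime_memory: 512
    runtime_timeout: 600
```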
- Verify that all the pods from the following namespaces are in Running status:

  ```bash
  kubectl get pods --namespace istio-system
  kubectl get pods --namespace knative-serving
  kubectl get pods --namespace knative-eventing
  kubectl get pods --namespace tekton-pipelines
  ```
- Monitor how pods and other resources are created:

  ```bash
  watch kubectl get pod,service,revision,deployment -o wide
  ```
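Once the cluster checks out, the backend can be exercised from Python. The sketch below uses the public Lithops `FunctionExecutor` API; the `double` function and its inputs are illustrative:

```python
def double(x):
    # Pure function that will be executed inside the Knative runtime pods
    return x * 2

def run_on_knative():
    # Requires `pip install lithops` and the knative config described above
    import lithops
    fexec = lithops.FunctionExecutor(backend='knative')
    fexec.map(double, [1, 2, 3, 4])  # one invocation per input element
    return fexec.get_result()        # gathers the results from the workers
```

Calling `run_on_knative()` should return `[2, 4, 6, 8]` once the Knative services are reachable; while it runs, the `watch` command above shows the revision and its pods being created.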