We'd love to accept your patches and contributions to this project. We use this GitHub project as our primary source of truth and the main development repository for Config Connector. The source code in this project is also mirrored to an internal Google repository for release purposes.
Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to https://cla.developers.google.com/ to see your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one (even if it was for a different project), you probably don't need to do it again.
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
This project follows Google's Open Source Community Guidelines.
You need to set up your own dev environment before contributing to this project.
We follow a contribution flow typical of most OSS projects on GitHub.
Export the `GITHUB_USERNAME` environment variable, which will be used in subsequent steps.

```bash
export GITHUB_USERNAME=YOUR_USERNAME
```
We follow the fork and pull model in GitHub. You first fork this repository, and later open a pull request to propose changes from your own fork to the master branch of this source repository.
GitHub provides detailed instructions in Fork a repo. In summary, you perform the following steps to get your fork ready:
- Set up Git and authentication with GitHub.com: https://docs.github.com/en/get-started/quickstart/set-up-git
- Fork the k8s-config-connector repo: https://docs.github.com/en/get-started/quickstart/fork-a-repo#forking-a-repository

  The instructions below assume you also name your fork `k8s-config-connector`. If you use a different name for the fork, replace it in the commands accordingly.
- Clone your forked repo to your dev machine: https://docs.github.com/en/get-started/quickstart/fork-a-repo#cloning-your-forked-repository

  We recommend creating the local clone under the path `~/go/src/github.com/$GITHUB_USERNAME`. This helps avoid a few known build frictions related to generated code.

  ```bash
  mkdir -p ~/go/src/github.com/$GITHUB_USERNAME
  cd ~/go/src/github.com/$GITHUB_USERNAME
  git clone https://github.com/$GITHUB_USERNAME/k8s-config-connector
  # If you use SSH key auth, clone git@github.com:$GITHUB_USERNAME/k8s-config-connector.git instead
  ```
Note: Some of the environment setup scripts will try to write to the bash `~/.profile` and source it. You may need to modify those lines to suit your shell environment.
Once you have cloned your forked repo, you can use some helper scripts in the repo to quickly set up a local dev environment.
- Make sure you have gcloud installed and configured with a default GCP project. Confirm that you have the role of either an editor or owner in this project.
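  For example, you can sanity-check your active account and default project with the following (a quick check; nothing here is specific to Config Connector):

  ```bash
  gcloud auth list
  gcloud config get-value project
  ```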
- Make sure you have at least 30 GB of free disk space.
- Update apt and install build-essential.

  ```bash
  sudo apt-get update
  sudo apt install build-essential
  ```
- Change to the `environment-setup` directory.

  ```bash
  cd ~/go/src/github.com/$GITHUB_USERNAME/k8s-config-connector/scripts/environment-setup
  ```
- Set up sudoless Docker.

  ```bash
  ./docker-setup.sh
  ```
- Exit your current session and SSH back into the VM, then run the following to ensure sudoless Docker is set up correctly:

  ```bash
  docker run hello-world
  ```
- Install Golang.

  ```bash
  cd ~/go/src/github.com/$GITHUB_USERNAME/k8s-config-connector/scripts/environment-setup
  ./golang-setup.sh
  source ~/.profile
  ```
- Install other build dependencies.

  ```bash
  ./repo-setup.sh
  source ~/.profile
  ```
- Set up a GKE cluster for testing purposes. This script takes a long time to run; it assumes there is a GKE cluster named "cnrm-dev" in your default GCP project configured through gcloud, and creates one if it doesn't exist.

  If you prefer to use an existing GKE cluster, you can modify `CLUSTER_NAME` in the script to the existing cluster's name instead, which will reduce the time it takes. Make sure the existing GKE cluster has workload identity enabled.

  ```bash
  ./gcp-setup.sh
  ```
- (Optional) Verify that workload identity federation is set up correctly.
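  One way to check (a sketch; it assumes the cluster is named `cnrm-dev` and that your default zone/region is configured in gcloud):

  ```bash
  # The workload pool should look like PROJECT_ID.svc.id.goog
  gcloud container clusters describe cnrm-dev --format="value(workloadIdentityConfig.workloadPool)"
  ```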
- Now that you have everything set up, you can build your own images and then deploy the Config Connector CRDs and workloads (including the controller manager, webhooks, etc.) into your test GKE cluster.
- Note that deploying 300+ CRDs into your test cluster can take a long time to complete. If you are only testing or fixing issues for a few CRDs, you can instead apply just the CRDs you are going to work on. As an example, we want to deploy the CRD `ArtifactRegistryRepositories` because we want to validate creation of this resource in the next step. So we can do:

  ```bash
  cd ~/go/src/github.com/$GITHUB_USERNAME/k8s-config-connector
  make manifests
  kubectl apply -f config/crds/resources/apiextensions.k8s.io_v1_customresourcedefinition_artifactregistryrepositories.artifactregistry.cnrm.cloud.google.com.yaml
  ```
- We need to install the following two CRDs as they are hard dependencies to reconcile all the other supported CRDs:

  ```bash
  kubectl apply -f operator/config/crd/bases/core.cnrm.cloud.google.com_configconnectors.yaml
  kubectl apply -f operator/config/crd/bases/core.cnrm.cloud.google.com_configconnectorcontexts.yaml
  ```
- Then we build and push the images locally and deploy the workloads using the command below:

  ```bash
  make deploy-controller
  ```
- If you want to install Config Connector on a brand-new GKE cluster, the following commands will install all CRDs, then locally build, push, and deploy all workloads to a standard GKE cluster.

  ```bash
  make deploy-kcc-standard
  make install
  ```

  For Autopilot clusters, please use the following commands instead.

  ```bash
  make deploy-kcc-autopilot
  make install
  ```
- The script `gcp-setup.sh` annotates your `default` namespace in the GKE cluster with a `project-id` annotation equal to your default GCP project ID in gcloud. This enables Config Connector to create GCP resources in that default GCP project. We can validate this by creating an Artifact Registry resource through Config Connector.
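  To confirm the annotation is in place, you can inspect the namespace first (a quick check; the annotation key is the one Config Connector uses):

  ```bash
  kubectl get namespace default -o yaml | grep cnrm.cloud.google.com/project-id
  ```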
- Enable Artifact Registry for your project.

  ```bash
  gcloud services enable artifactregistry.googleapis.com
  ```
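  If you want to double-check that the API is enabled, the following should list the service (optional sanity check):

  ```bash
  gcloud services list --enabled --filter="name:artifactregistry.googleapis.com"
  ```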
- Create a GCP ArtifactRegistryRepository resource. You can check if the workloads are ready by:

  ```bash
  kubectl get pods -n cnrm-system
  ```

  Then you can create a new ArtifactRegistryRepository resource:

  ```bash
  kubectl apply -f config/samples/resources/artifactregistryrepository/artifactregistry_v1beta1_artifactregistryrepository.yaml
  ```
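  You can also wait for Config Connector to report the resource as ready (a sketch; it assumes the resource keeps the sample's default name):

  ```bash
  kubectl wait --for=condition=Ready artifactregistryrepository/artifactregistryrepository-sample --timeout=10m
  ```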
- Wait a few minutes and then make sure your repository exists in GCP.

  ```bash
  gcloud artifacts repositories list
  ```

  If you see a repository named `artifactregistryrepository-sample`, then your cluster is properly functioning and actuating K8s resources onto GCP.
If something goes wrong, you can look for errors by checking the controller logs, following the troubleshooting guidance below.
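For example (this is the same command used later in this guide to inspect the controller manager logs):

```bash
kubectl --namespace cnrm-system logs cnrm-controller-manager-0
```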
When the cluster is created without providing a service account, the Compute Engine default service account is used for the cluster. You must grant that service account permission to pull images from the project registry.
- Find the Compute Engine default service account.

  ```bash
  gcloud iam service-accounts list | grep "Compute Engine default service account"
  ```
- Grant the service account read permission.

  ```bash
  gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member="serviceAccount:[SERVICE_ACCOUNT]" \
    --role="roles/storage.objectViewer"
  ```
Make sure that the `cnrm.cloud.google.com/project-id` annotation in the sample `artifactregistry_v1beta1_artifactregistryrepository.yaml` is replaced with your PROJECT_ID. More detail can be found in the documentation.
After making fixes, you can re-apply the CRDs and redeploy the controller workloads so the pods pick up the changes:

```bash
kubectl apply -f operator/config/crd/bases/core.cnrm.cloud.google.com_configconnectors.yaml
kubectl apply -f operator/config/crd/bases/core.cnrm.cloud.google.com_configconnectorcontexts.yaml
make deploy-controller && kubectl delete pods --namespace cnrm-system --all
```
At this point, your cluster is running a CNRM Controller Manager image built on your system. Let's make a code change to verify that you are ready to start development.
Edit `cmd/manager/main.go` in your local repository. Insert the `log.Printf(...)` statement below on the first line of the `main()` function.
```go
package main

func main() {
    // Requires the standard library "log" package to be imported.
    log.Printf("I have finished the getting started guide.")
    ...
}
```
To apply the change, you can either deploy the container image into the GKE cluster or run the Controller Manager directly as a local executable.
Build and deploy your change, forcing a pull of the new container image:

```bash
make deploy-controller && kubectl delete pods --namespace cnrm-system --all
```
Verify your new log statement is on the first line of the logs for the CNRM Controller Manager pod.

```bash
kubectl --namespace cnrm-system logs cnrm-controller-manager-0
```
If you don't want to deploy the controller manager into your dev cluster, you can run it locally on your dev machine with the steps below.
- Get credentials for the cnrm-controller-manager service account.

  First, you need to create a long-lived API token.

  ```bash
  kubectl -n cnrm-system apply -f - <<EOF
  apiVersion: v1
  kind: Secret
  metadata:
    name: cnrm-controller-manager-secret
    annotations:
      kubernetes.io/service-account.name: cnrm-controller-manager
  type: kubernetes.io/service-account-token
  EOF
  ```

  Then, create a kubeconfig using the API token.

  ```bash
  set -o errexit

  kubectx=$(kubectl config current-context)
  server=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"${kubectx}\")].cluster.server}")
  clusterName='cnrm-dev'
  namespace='cnrm-system'
  serviceAccount='cnrm-controller-manager'
  secretName='cnrm-controller-manager-secret'
  ca=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.ca\.crt}')
  token=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.token}' | base64 --decode)

  cat << EOF >> ~/.kube/cnrm-dev-controller-manager
  ---
  apiVersion: v1
  kind: Config
  clusters:
  - name: ${clusterName}
    cluster:
      certificate-authority-data: ${ca}
      server: ${server}
  contexts:
  - name: ${serviceAccount}@${clusterName}
    context:
      cluster: ${clusterName}
      namespace: ${namespace}
      user: ${serviceAccount}
  users:
  - name: ${serviceAccount}
    user:
      token: ${token}
  current-context: ${serviceAccount}@${clusterName}
  EOF
  ```
- Run `kubectl edit statefulset cnrm-controller-manager -n cnrm-system` and scale the replicas down to 0.
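  If you prefer a non-interactive alternative, the same scale-down can be done with (a sketch; it targets the same StatefulSet):

  ```bash
  kubectl scale statefulset cnrm-controller-manager -n cnrm-system --replicas=0
  ```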
- Run `KUBECONFIG=~/.kube/cnrm-dev-controller-manager make run` and inspect the output logs.
If you are adding a new resource, you need to follow the steps in NewResourceFromTerraform.md to make code changes, add test data, and run the tests for your resource.
If you are working on an existing resource, test YAML should already exist under `./pkg/test/resourcefixture/testdata/basic`, and you can run the test command directly to make sure the test still passes. Example command:

```bash
# Export the environment variables needed in the dynamic tests if you haven't done so already.
TEST_FOLDER_ID=123456789 go test -v -tags=integration ./pkg/controller/dynamic/ -test.run TestCreateNoChangeUpdateDelete -run-tests cloudschedulerjob -timeout 900s
```

Replace `cloudschedulerjob` with your test target.
At this point you already know how to make changes and verify them in your local dev environment. When you have tested your change and are ready to submit a PR, you can first validate the change locally:

```bash
make ready-pr
```
You can then commit your change and make a pull request. See GitHub's contributing to projects: making and pushing changes.
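A typical flow looks roughly like this (a sketch; the branch name and commit message are placeholders):

```bash
git checkout -b my-feature-branch
git add .
git commit -m "Describe your change"
git push origin my-feature-branch
# Then open a pull request from your fork on GitHub.
```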