The following is meant to guide you through running Hyperledger Besu or GoQuorum clients in Azure AKS (Kubernetes) in both development and production scenarios. As always, you are free to customize the charts to suit your requirements. It is highly recommended that you familiarize yourself with AKS (or equivalent Kubernetes infrastructure) before running things in production on Kubernetes.
It essentially comprises base infrastructure that is used to build the cluster & other resources in Azure via an ARM template. We also make use of some Azure native services and features (that are provisioned via a script) after the cluster is created. These include:
- AAD pod identities
- Secrets Store CSI drivers
- Data is stored using dynamic StorageClasses backed by Azure Files. Note that the Volume Claims have fixed sizes, which can be increased via a helm update as you grow, without reprovisioning the underlying StorageClass.
- CNI networking mode for AKS. By default, AKS clusters use kubenet, and a virtual network and subnet are created for you. With kubenet, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use; however, it places constraints on what can connect to the nodes from outside the cluster (e.g. on-premise nodes).
With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning and can lead to IP address exhaustion as your application demands grow; however, it makes it easier for external nodes to connect to your cluster.
If you have existing VNets, you can easily connect them to the cluster's VNet by using VNet peering.
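As a rough sketch, assuming both VNets already exist and their CIDR ranges do not overlap, the peering can be created with the Azure CLI. The VNet and resource group names below are placeholders for your own; if the VNets live in different resource groups or subscriptions, pass the full resource ID to --remote-vnet.

# peer the existing VNet to the AKS VNet (names are placeholders)
az network vnet peering create \
  --name existing-to-aks \
  --resource-group ExampleGroup \
  --vnet-name existing-vnet \
  --remote-vnet aks-vnet \
  --allow-vnet-access

# and the reverse direction, so traffic flows both ways
az network vnet peering create \
  --name aks-to-existing \
  --resource-group ExampleGroup \
  --vnet-name aks-vnet \
  --remote-vnet existing-vnet \
  --allow-vnet-access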
- Read this file in its entirety before proceeding
- See the Prerequisites section to enable some features before doing the deployment
- Follow the Usage section for the deployment steps
The dev charts are aimed at getting you up and running so you can experiment with the client and the functionality of the tools, contracts etc. They embed node keys etc. as secrets so that these are visible to you during development and you can learn about discovery. The prod charts utilize all the built-in Azure functionality and recommended best practices, such as identities and secrets stored in Key Vault with limited access. When using the prod charts, please ensure you add the necessary values to the azure section of the values.yml file.
- Please do not create more than one AKS cluster in the same subnet.
- AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range.
You will need to run these in your Azure subscription before any deployments.
For this deployment we will provision AKS with CNI and a managed identity that the cluster uses to authenticate to and operate with other services. We also enable AAD pod identities, which use the managed identity. This feature is in preview, so you need to enable it by registering the EnablePodIdentityPreview feature:
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
This takes a little while; you can check on progress with:
az feature list --namespace Microsoft.ContainerService -o table
Then install (or update) the aks-preview Azure CLI extension:
az extension add --name aks-preview
az extension update --name aks-preview
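Once the feature shows as Registered in that list, refresh the resource provider registration so the change propagates (this is the standard workflow for AKS preview features):

az provider register --namespace Microsoft.ContainerService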
Create a resource group if you haven't got one ready for use.
az group create --name ExampleGroup --location "East US"
- Deploy the template
  - Navigate to the Azure portal and click + Create a resource in the upper left corner.
  - Search for Template deployment (deploy using custom templates) and click Create.
  - Click on Build your own template in the editor
  - Remove the contents (json) in the editor and paste in the contents of azuredeploy.json
  - Click Save
  - The template will be parsed and a UI will be shown to allow you to input parameters to provision
Alternatively, use the CLI:
az deployment create \
--name blockchain-aks \
--location eastus \
--template-file ./arm/azuredeploy.json \
--parameters env=dev location=eastus
- Provision Drivers
Once the deployment has completed, please run the bootstrap script to provision the AAD pod identity and the CSI drivers.
Use besu or quorum for AKS_NAMESPACE, depending on which blockchain client you are using.
./scripts/bootstrap.sh "AKS_RESOURCE_GROUP" "AKS_CLUSTER_NAME" "AKS_MANAGED_IDENTITY" "AKS_NAMESPACE"
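If the bootstrap script has not already configured your kubeconfig, fetch the cluster credentials before running any helm or kubectl commands (the placeholders are the same as above):

az aks get-credentials --resource-group "AKS_RESOURCE_GROUP" --name "AKS_CLUSTER_NAME"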
- Deploy the charts
For Besu:
cd helm/dev/
# If using this monitoring chart in prod, please ensure you put an authentication mechanism in place; refer to https://grafana.com/docs/grafana/latest/auth/grafana/
helm install monitoring ./charts/besu-monitoring --namespace besu
helm install genesis ./charts/besu-genesis --namespace besu --values ./values/genesis-besu.yml
helm install bootnode-1 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
helm install bootnode-2 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
helm install validator-1 ./charts/besu-node --namespace besu --values ./values/validator.yml
helm install validator-2 ./charts/besu-node --namespace besu --values ./values/validator.yml
helm install validator-3 ./charts/besu-node --namespace besu --values ./values/validator.yml
helm install validator-4 ./charts/besu-node --namespace besu --values ./values/validator.yml
# spin up a besu and orion node pair
helm install tx-1 ./charts/besu-node --namespace besu --values ./values/txnode.yml
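After the installs complete, it is worth checking that the releases and pods are healthy before moving on, for example:

helm list --namespace besu
# all pods should eventually reach the Running state
kubectl get pods --namespace besu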
Optionally deploy the ingress controller like so:
NOTE: Deploying the ingress rules assumes you are connecting to the tx-1 node from section 3 above. Please update this as required to suit your requirements.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install besu-ingress ingress-nginx/ingress-nginx \
--namespace besu \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
kubectl apply -f ./ingress/ingress-rules-besu.yml
For GoQuorum:
Change directory to the charts folder, i.e. /charts/dev or /charts/prod
cd helm/dev/
# Please do not use this monitoring chart in prod; it needs authentication, pending close of https://github.com/ConsenSys/cakeshop/issues/86
helm install monitoring ./charts/quorum-monitoring --namespace quorum
helm install genesis ./charts/quorum-genesis --namespace quorum --values ./values/genesis-quorum.yml
# Bootnodes are only used in the **dev** charts setup
helm install bootnode-1 ./charts/quorum-node --namespace quorum --values ./values/bootnode.yml
helm install validator-1 ./charts/quorum-node --namespace quorum --values ./values/validator.yml
helm install validator-2 ./charts/quorum-node --namespace quorum --values ./values/validator.yml
helm install validator-3 ./charts/quorum-node --namespace quorum --values ./values/validator.yml
helm install validator-4 ./charts/quorum-node --namespace quorum --values ./values/validator.yml
# spin up a quorum and tessera node pair
helm install tx-1 ./charts/quorum-node --namespace quorum --values ./values/txnode.yml
Optionally deploy the ingress controller like so:
NOTE: Deploying the ingress rules assumes you are connecting to the tx-1 node from section 3 above. Please update this as required to suit your requirements.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install quorum-ingress ingress-nginx/ingress-nginx \
--namespace quorum \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
kubectl apply -f ./ingress/ingress-rules-quorum.yml
- Once deployed, services are available as follows on the IP of the ingress controllers:
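You can find the external IP of the ingress controller with something like the following; the service name assumes the besu-ingress release above (for GoQuorum it would be quorum-ingress-ingress-nginx-controller), and if it differs, kubectl get svc --namespace besu will list the actual name:

kubectl get svc besu-ingress-ingress-nginx-controller --namespace besu \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'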
Monitoring (if deployed)
# For Besu's grafana address:
http://<INGRESS_IP>/d/XE4V0WGZz/besu-overview?orgId=1&refresh=10s
# For GoQuorum's cakeshop address:
http://<INGRESS_IP>
API Calls to either client
# HTTP RPC API:
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://<INGRESS_IP>/rpc/
# which should return (confirming that the node running the JSON-RPC service has peers):
{
"jsonrpc" : "2.0",
"id" : 1,
"result" : "0x4"
}
# HTTP GRAPHQL API:
curl -X POST -H "Content-Type: application/json" --data '{ "query": "{syncing{startingBlock currentBlock highestBlock}}"}' http://<INGRESS_IP>/graphql/
# which should return
{
"data" : {
"syncing" : null
}
}
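Other standard JSON-RPC methods work the same way through the ingress; for example, to check the head block number on either client:

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://<INGRESS_IP>/rpc/
# which should return something like {"jsonrpc":"2.0","id":1,"result":"0x1a4"}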
Once you are familiar with the base setup using the dev charts, please adjust the configuration, i.e. number of nodes, topology etc., to suit your requirements.
Some things are already set up and merely need your config, e.g.:
- Alerting has been set up via an Action group but requires either an email address or Slack webhook to send the alerts to. There are also basic alerts created for you which will utilise the action group. The list is not exhaustive and you should add alerts based on log queries in Azure Monitor to suit your requirements. Please refer to the Azure Docs for more information (a CLI sketch follows this list).
- Monitoring via Prometheus and Grafana with the Besu dashboards is enabled, but for production use please configure Grafana with your choice of auth mechanism, e.g. OAuth.
- Persistent volume claims: in the prod template, the size of the claims has been set to 100Gi. If you have a storage account that you wish to use, you can set that up in the storageClass and additionally lower the size, which lowers cost (see the helm upgrade sketch after this list).
- In the production setup, we do not overwrite or delete node keys or the like from Key Vault, and the charts are designed to be fail-safe, i.e. if you accidentally delete the deployment and rerun it, you will still have your existing keys to match any permissions setup that you have. You will need to manually delete anything in Key Vault.
- To extend your network and allow other nodes (in a different cluster or outside Azure) to connect, you will need to peer your VNet with the other one (see the VNet peering example earlier in this guide) and ensure that the CIDR blocks don't conflict. Once done, the external nodes should be able to communicate with your nodes in AKS.
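For the persistent volume claims item above, a minimal sketch of adjusting a claim size with a helm upgrade; the storage.pvcSizeLimit key below is purely illustrative, so use whichever key the chart's values.yml actually defines for the claim size:

# storage.pvcSizeLimit is a hypothetical key name; check the chart's values.yml for the real one
helm upgrade validator-1 ./charts/besu-node --namespace besu \
  --values ./values/validator.yml \
  --set storage.pvcSizeLimit=50Gi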
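For the alerting item, a sketch of attaching an email receiver to an Action group with the Azure CLI; the group name, short name and resource group below are placeholders, and if the ARM template already created an Action group you should target that one instead:

# names are placeholders; adds an email receiver to the action group
az monitor action-group create \
  --resource-group ExampleGroup \
  --name blockchain-alerts \
  --short-name bcalerts \
  --action email ops ops@example.com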