Welcome to the 5G-all-in-one-helm documentation main page!
Before using our Helm charts, you have to:
There are many solutions for creating a Kubernetes cluster. Feel free to visit this page to discover some of them. We recommend using Kubespray with the Calico network plugin. To enable UPF IP forwarding, set calico_allow_ip_forwarding to true in this config, and enable Helm by setting helm_enabled to true in this config. If you don't have a Kubernetes cluster yet, we recommend Kubeadm for its simplicity.
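For reference, a minimal sketch of those two Kubespray settings, assuming the default sample inventory layout (adapt the paths to your own inventory):
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml
calico_allow_ip_forwarding: true
# inventory/mycluster/group_vars/k8s_cluster/addons.yml
helm_enabled: true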
You have to install a Helm client on a host that can communicate with your Kubernetes API server.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Refer to this link to view all possible installation methods.
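Once installed, you can check the client with a standard Helm command:
helm version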
- Clone the project:
git clone https://github.com/zanattabruno/5G-all-in-one-helm.git
- Go to the charts folder.
- A Kubernetes cluster supporting SCTP.
- Kubernetes worker nodes with kernel 5.0.0-23-generic and the gtp5g kernel module installed (required for the Free5GC UPF element).
- Helm 3.
- A Persistent Volume (size 8Gi). If you are using Kubespray with local-path provisioning enabled, this step is optional.
- Kubectl (optional).
First, check that the Linux kernel version on the Kubernetes worker nodes is 5.0.0-23-generic or 5.4.x.
uname -r
Then, on each worker node, install the gtp5g kernel module.
git clone -b v0.6.6 https://github.com/free5gc/gtp5g.git
cd gtp5g
make
sudo make install
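Optionally, you can load the module right away and confirm it is available (a quick sanity check, assuming make install placed the module under /lib/modules for your running kernel):
# Load the gtp5g module
sudo modprobe gtp5g
# Confirm it is loaded
lsmod | grep gtp5g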
If you don't have a Persistent Volume provisioner, you can use the following commands to create a namespace for the project and a Persistent Volume within this namespace that will be consumed by MongoDB. Adapt them to your setup: replace worker1 by the name of the node and /home/vagrant/kubedata by the directory on that node in which you want to persist the MongoDB data. If you are using local-path provisioning or something similar, this volume is created automatically.
kubectl create ns <namespace>
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv9
  labels:
    project: free5gc
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /home/vagrant/kubedata
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1
EOF
NOTE: you must create the folder on the right node before creating the Persistent Volume.
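For example, assuming the node name and path used in the manifest above, create the folder and then confirm the volume is registered:
ssh <user>@worker1 'mkdir -p /home/vagrant/kubedata'
kubectl get pv example-local-pv9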
In the charts directory, run:
helm -n <namespace> install <free5GC-helm-release-name> ./free5gc/
If you are going to use a release name other than core, you have to change the following parameters in SMF (see the sketch after this list):
- nodeID and endpoints: replace core with your release name. Example: if your release name is free5gc, change core-free5gc-upf-upf-0.upf-service to free5gc-free5gc-upf-upf-0.upf-service.
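A minimal sketch of such an override, using hypothetical values keys (check the free5gc chart's values.yaml for the real key names and adapt accordingly):
# my-values.yaml (illustrative keys, not the chart's authoritative schema)
smf:
  nodeID: free5gc-free5gc-upf-upf-0.upf-service
  endpoints:
    - free5gc-free5gc-upf-upf-0.upf-service
You would then install with helm -n <namespace> install free5gc ./free5gc/ -f my-values.yaml.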
Check that all the Free5GC pods are up and running:
kubectl -n <namespace> get pods -l "project=free5gc"
The WebUI is exposed through a Kubernetes service with nodePort=30500, so you can access it at {replace-by-the-IP-of-one-of-your-cluster-nodes}:30500.
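If you don't know your node IPs, you can list them with a standard kubectl command:
kubectl get nodes -o wide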
To add a new subscriber, please refer to the Free5GC documentation. Initially, the UE is configured with the Free5GC default values.
The default user is admin and the password is free5gc:
Go to Menu Subscribers > New Subscriber > Submit (with default values)
You can choose between two open-source RAN projects to deploy: UERANSIM or my5G-RANTester.
In the charts directory, run:
helm -n <namespace> install <UERANSIM-release-name> ./ueransim/
kubectl -n <namespace> get pods -l "app=ueransim"
Once the UERANSIM components are created, you can access the UE pod by running:
kubectl -n <namespace> exec -it <ue-pod-name> -- bash
Then, you can use the created TUN interface for more advanced testing. Please refer to the UERANSIM Helm chart's README and check this link for more details.
# Run this inside the container
ip address
...
5: uesimtun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.1.0.1/32 scope global uesimtun0
valid_lft forever preferred_lft forever
ping -I uesimtun0 www.google.com
traceroute -i uesimtun0 www.google.com
curl --interface uesimtun0 www.google.com
First uninstall UERANSIM or change the subscriber ID.
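A minimal sketch of the uninstall, assuming the release name used when you installed UERANSIM:
helm -n <namespace> uninstall <UERANSIM-release-name>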
In the charts directory, run:
helm -n <namespace> install <RANTester-release-name> ./rantester/
kubectl -n <namespace> get pods -l "app=rantester"
Once the RANTester components are created, you can access the UE pod by running:
kubectl -n <namespace> exec -it <ue-pod-name> -- bash
Then, you can use the created TUN interface for more advanced testing. Check this link for more details.
# Run this inside the container
ip address
...
5: uetun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.1.0.1/32 scope global uetun1
valid_lft forever preferred_lft forever
ping -I uetun1 www.google.com
traceroute -i uetun1 www.google.com
curl --interface uetun1 www.google.com
Add the Prometheus Helm repo:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Create a namespace for the monitoring stack:
kubectl create namespace monitoring
Install the Prometheus monitoring stack:
helm -n monitoring install prometheus-stack prometheus-community/kube-prometheus-stack
Describe the UPF pod to verify which node it is running on:
kubectl describe pod core-free5gc-upf-upf-0 | grep Node:
Connect via SSH to the node where the UPF is running and start iperf in server mode:
ssh <user>@<UPF-node>
iperf -s
In a new terminal instance (the iperf server must be running), go back to the cluster administration node and connect to the RAN pod, in this case the RANTester pod. Start iperf in client mode using the interface created by RANTester:
kubectl -n <namespace> exec -it ran-rantester-0 -- bash
iperf -B <RANTester-interface> -c <UPF-node> -i 1 -t 600
Finally, in a new terminal, forward the Grafana pod port so that you can access the dashboard from your desktop:
kubectl port-forward <grafana-pod-name> -n monitoring 3000:3000
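To find the Grafana pod name, you can filter by the label the Grafana chart usually applies (label values may differ between chart versions):
kubectl -n monitoring get pods -l "app.kubernetes.io/name=grafana"
Then open http://localhost:3000 in your browser. The default kube-prometheus-stack Grafana credentials are usually admin / prom-operator; check the chart's values if they don't work.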
According to the Free5GC documentation, you may sometimes need to drop the data stored in MongoDB. To do so with our implementation, you simply need to empty the folder that was used by the Persistent Volume on the corresponding node.
sudo rm -rf {path-to-folder}/*
Or, if you are using local-path:
kubectl get pvc
kubectl delete pvc datadir-mongodb-0
Then reinstall the core and RAN Helm releases.
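For example, assuming the release names used earlier on this page:
helm -n <namespace> uninstall <free5GC-helm-release-name> <UERANSIM-release-name>
helm -n <namespace> install <free5GC-helm-release-name> ./free5gc/
helm -n <namespace> install <UERANSIM-release-name> ./ueransim/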
This may occur because ipv4.ip_forward is disabled in the UPF pod. This functionality is needed by the UPF, as it allows it to act as a router.
To check whether it is enabled, run this command in the UPF pod. The result must be 1.
cat /proc/sys/net/ipv4/ip_forward
We remind you that some CNI plugins (e.g. Flannel) enable this functionality by default, while others (e.g. Calico) require a special configuration.
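For Calico specifically, a minimal sketch of that configuration: either set calico_allow_ip_forwarding: true in your Kubespray group_vars as noted at the top of this page, or, per the Calico documentation, add the following fragment to Calico's CNI configuration on each node (the file path varies by installation, so adapt it):
{
  "container_settings": {
    "allow_ip_forwarding": true
  }
}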