From 07e7f6f094672c6467cf256e7397878067d565a0 Mon Sep 17 00:00:00 2001
From: Ali Tariq
Date: Mon, 15 Apr 2024 11:12:10 +0500
Subject: [PATCH] Removed admin docs

---
 ...Adding_Custom-build_packages_in_Jenkins.md |  21 --
 .../docs/Booting_ubuntu22.04_riscv64.md       |  78 ------
 .../docs/Building_Jenkins_github_repo.md      |  35 ---
 mkdocs_src/docs/Building_Linux_Kernel.md      | 259 ------------------
 mkdocs_src/docs/Building_qemu.md              | 153 -----------
 .../docs/Creating_Jenkins_Node_on_LXC.md      | 188 -------------
 .../Cross_compiling python3.8.15.md           |  28 --
 .../Cross_compiling_coremark.md               |  51 ----
 .../Cross_compiling_dhrystone.md              |  27 --
 .../Cross_Compiling/Cross_compiling_go.md     |  40 ---
 .../Cross_Compiling/Cross_compiling_jdk.md    |  24 --
 .../Cross_compiling_ninja-build.md            |  38 ---
 .../Cross_compiling_openssl.md                |  90 ------
 .../Cross_Compiling/Cross_compiling_ruby.md   | 104 -------
 .../Cross_Compiling/Cross_compiling_rust.md   |  74 -----
 mkdocs_src/docs/Cross_Compiling/Overview.md   |  17 --
 .../docs/Installing_ssl_certificates_new.md   |  40 ---
 .../Integrating_prometheus_grafana.md         |  41 ---
 .../Usage_Monitoring/Prometheus_Grafana.md    |  81 ------
 .../Github_PR_webhook_integration.md          |  97 -------
 .../Github_push_webhook.md                    | 117 --------
 21 files changed, 1603 deletions(-)
 delete mode 100755 mkdocs_src/docs/Adding_Custom-build_packages_in_Jenkins.md
 delete mode 100755 mkdocs_src/docs/Booting_ubuntu22.04_riscv64.md
 delete mode 100755 mkdocs_src/docs/Building_Jenkins_github_repo.md
 delete mode 100755 mkdocs_src/docs/Building_Linux_Kernel.md
 delete mode 100755 mkdocs_src/docs/Building_qemu.md
 delete mode 100755 mkdocs_src/docs/Creating_Jenkins_Node_on_LXC.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling python3.8.15.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_coremark.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_dhrystone.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_go.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_jdk.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_ninja-build.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_openssl.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_ruby.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md
 delete mode 100755 mkdocs_src/docs/Cross_Compiling/Overview.md
 delete mode 100755 mkdocs_src/docs/Installing_ssl_certificates_new.md
 delete mode 100755 mkdocs_src/docs/Usage_Monitoring/Integrating_prometheus_grafana.md
 delete mode 100755 mkdocs_src/docs/Usage_Monitoring/Prometheus_Grafana.md
 delete mode 100755 mkdocs_src/docs/jenkins_github_integration/Github_PR_webhook_integration.md
 delete mode 100755 mkdocs_src/docs/jenkins_github_integration/Github_push_webhook.md

diff --git a/mkdocs_src/docs/Adding_Custom-build_packages_in_Jenkins.md b/mkdocs_src/docs/Adding_Custom-build_packages_in_Jenkins.md
deleted file mode 100755
index c6aebc8..0000000
--- a/mkdocs_src/docs/Adding_Custom-build_packages_in_Jenkins.md
+++ /dev/null
@@ -1,21 +0,0 @@
# Adding Custom-build packages in Jenkins

_**NOTE:** For this documentation, Ubuntu 22.04 and Jenkins version 2.371 are used._

While working with open-source software, one often needs to build a package from source code. On a local machine, the built package can be made accessible by adding it to the `$PATH` environment variable. With Jenkins, however, this may not work.
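For contrast, a minimal sketch of the local-machine approach (the `~/tools/bin` path here is hypothetical):

```shell
# Local machine only: expose a custom-built package by extending PATH in ~/.bashrc
echo 'export PATH="$HOME/tools/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

A Jenkins build typically does not source the user's `.bashrc`, which is why the node-level configuration described below is needed instead.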
## Where to install package for Jenkins job

In a Jenkins job, the package must be installed on the node (or agent) on which the job runs, and **NOT** on the node where the Jenkins controller is installed. For example, if the Jenkins controller is on `computer1` and the agent node on which the `job` is destined to run is on `computer2`, then `computer2` should have all the packages used by the `job`, not `computer1`.

## Adding the package to Jenkins

Considering there is a node named `Runner1` in Jenkins on which the `job` is destined to run, that the job needs the toolchain `riscv64-unknown-elf-gcc` to run properly, and that the toolchain is present in the directory `/home/runner1/path_to_install/bin`, the following procedure is used to add the toolchain in Jenkins.

- Go to `Dashboard > Manage Jenkins > Nodes > Runner1`.
- Scroll down and check `Environment variables`.
- Under the section `List of variables` add:
  - `Name` as `PATH`
  - `Value` as `$PATH:/home/runner1/path_to_install/bin`.
- Click on `Save`.

After this, the package should be available for use.

diff --git a/mkdocs_src/docs/Booting_ubuntu22.04_riscv64.md b/mkdocs_src/docs/Booting_ubuntu22.04_riscv64.md
deleted file mode 100755
index fbbc13e..0000000
--- a/mkdocs_src/docs/Booting_ubuntu22.04_riscv64.md
+++ /dev/null
@@ -1,78 +0,0 @@
# Booting RISC-V Ubuntu 22.04 on `qemu-system-riscv64`

## Pre-requisites

The following pre-requisites need to be installed before booting Ubuntu 22.04 for RISC-V on `qemu`.

_**Note:** Make sure the RISC-V GNU Toolchain is installed before proceeding._

1. U-boot
2. Qemu version 7.0 or greater, with networking
3. Ubuntu 22.04 pre-built image for RISC-V

### 1. Installing U-boot

_**Note:** If you plan on installing U-boot from apt or some system repository, install the one which comes with Ubuntu 22.04; older versions will not work with this process. Also, be sure to check out the latest stable version instead of a development version._

- Get the source code of u-boot and check out a stable version using the commands below.

```shell
git clone https://github.com/qemu/u-boot.git
cd u-boot
git checkout v2022.10
```

- Generate configurations for supervisor mode with the following command.

```shell
make qemu-riscv64_smode_defconfig CROSS_COMPILE=riscv64-unknown-linux-gnu-
```

- Execute the following command to start the build process.

```shell
make CROSS_COMPILE=riscv64-unknown-linux-gnu-
```

- This will produce `u-boot.bin` in the source directory. This file will be used later, so remember its path. Here it will be referred to as `$UBOOTPATH`.

### 2. Installing Qemu

Qemu version 7.0 or greater should be installed with networking for Ubuntu 22.04 to work. See [Installing Qemu for RISC-V](Building_qemu.md) for instructions on installing `qemu-system-riscv64`.

### 3. Getting the Ubuntu 22.04 pre-built image for RISC-V

The Ubuntu 22.04 pre-built image for RISC-V can be downloaded from Ubuntu's official RISC-V downloads page.

## Booting the Ubuntu 22.04 Image on qemu

In the directory where the Ubuntu 22.04 image is present, execute the command given further below to boot into Ubuntu 22.04 with `qemu-system-riscv64`.

If you need more space, you can first resize the image with the following command.

```shell
qemu-img resize -f raw ubuntu-22.04.1-preinstalled-server-riscv64+unmatched.img +5G
```

This will increase the storage size of the image by 5GB.
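The resize can be verified before booting with `qemu-img info` (assuming the same image filename as above):

```shell
# The reported virtual size should now be ~5GB larger
qemu-img info ubuntu-22.04.1-preinstalled-server-riscv64+unmatched.img
```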
```shell
qemu-system-riscv64 \
-machine virt -nographic -m 2048 -smp 4 \
-kernel $UBOOTPATH/u-boot.bin \
-device virtio-net-device,netdev=eth0 -netdev user,id=eth0,hostfwd=::<host_port>-:<vm_port> \
-drive file=ubuntu-22.04.1-preinstalled-server-riscv64+unmatched.img,format=raw,if=virtio
```

Here `-m` is the memory in megabytes and `-smp` is the number of cores. `-nographic` means qemu will use the same terminal instance instead of opening a new window of its own (which is beneficial while running servers without a gui). `hostfwd=::<host_port>-:<vm_port>` forwards traffic going to host port `<host_port>` to the VM's port `<vm_port>`; with `<vm_port>` set to 22, port `<host_port>` can be used to access ssh on the qemu machine.

This should boot Ubuntu 22.04, though it will take a while on first start.

On start, the credentials will be as follows.

```shell
Username: ubuntu
Password: ubuntu
```

After entering the credentials, the terminal will prompt for a password change, after which Ubuntu will be ready to use.

diff --git a/mkdocs_src/docs/Building_Jenkins_github_repo.md b/mkdocs_src/docs/Building_Jenkins_github_repo.md
deleted file mode 100755
index 5c00ad7..0000000
--- a/mkdocs_src/docs/Building_Jenkins_github_repo.md
+++ /dev/null
@@ -1,35 +0,0 @@
# Building the Github Repository of Jenkins

### Linux kernel version, distribution and release at the time of the build process

**Linux Kernel**: 5.15.0-46-generic (can be checked using `uname -r` in ubuntu)
**Distribution**: Ubuntu
**Release**: focal (20.04); also works without issue on ubuntu 22.04 (the release can be checked using `lsb_release -a` in ubuntu)

## Cloning the github repository

First clone the repository using the command below (here it is assumed to be cloned at the user's home directory: ~/).
`git clone https://github.com/jenkinsci/jenkins.git`

## Resolving the dependencies

After cloning the repository at ~/, the file `CONTRIBUTING.md` should be available in ~/jenkins/. This file contains all the information for resolving the dependencies and building the repository.
Some notable dependencies are the Java Development Kit (JDK), Apache Maven (the latest version is preferable) and git. Running the following command will resolve the mentioned dependencies.

```shell
sudo apt update && sudo apt install default-jdk default-jre maven git -y
```

## Building Jenkins using maven on linux

To build Jenkins as quickly as possible, the following command can be used in ~/jenkins/.
`mvn -am -pl war,bom -Pquick-build clean install`

## Executing Jenkins

After the above command completes successfully, `jenkins.war` should be present in `~/jenkins/war/target` and can be executed to run at port 8080 on localhost using the following command.
`java -jar ~/jenkins/war/target/jenkins.war --httpPort=8080 #Considering jenkins repo is cloned at ~/`

**After this, the Jenkins UI can be accessed by browsing to `http://localhost:8080`, and a password will be shown on the terminal for logging in to Jenkins the first time.**
After this, the Jenkins UI will go through a very simple post-installation process which one can configure according to one's needs.
\ No newline at end of file

diff --git a/mkdocs_src/docs/Building_Linux_Kernel.md b/mkdocs_src/docs/Building_Linux_Kernel.md
deleted file mode 100755
index a445df5..0000000
--- a/mkdocs_src/docs/Building_Linux_Kernel.md
+++ /dev/null
@@ -1,259 +0,0 @@
# Building a RISCV Linux kernel and booting it in QEMU inside an LXC container

This documentation covers how to build a linux kernel with the RISCV linux toolchain inside an un-privileged LXC container and then boot it on qemu.
Doing this on a privileged lxc container makes life easier, but privileged containers always have security loopholes; for instance, their root id is mapped to the root id of the host machine. Un-privileged containers, on the other hand, are the safest (see [link](https://linuxcontainers.org/lxc/security/)).

## Machine's and LXC Container's Operating System Specifications

At the time of creating this documentation, the following are the specifications of the host machine and operating system.

- **Host Machine:** Ubuntu focal (20.04) 64-bit.
- **LXC Container:** Ubuntu jammy (22.04) 64-bit.
- The LXC Container is unprivileged with a non-sudo user.

_**NOTE:** Throughout this documentation, the name of the lxc container will be `qemu_container` or `qemucontainer`, with the non-sudo user as `qemu-user`, running ubuntu 22.04 on an x86 host machine. Do not let the name suggest that the container itself runs on the qemu emulator; it is just a naming convention._

## Pre-requisites

The following programs may also have their own pre-requisites.

1. **Git:** For cloning the repositories of the following programs. Install it with `[sudo] apt install git`.
2. **TMUX:** For the convenience of multiple terminals. Install it using `[sudo] apt install tmux`.
3. **RISCV GNU toolchain built for linux:** For compiling the linux kernel.
4. **Busybox:** For generating the binaries for the linux kernel boot.
5. **QEMU Emulator:** For booting the linux kernel.
6. **Linux Kernel** (the latest version at the time of writing this documentation is `6.0.0`).

The working directory inside the lxc container for all of this documentation will be `~/riscv-linux` or `/home/qemu-user/riscv-linux`.

_**NOTE:** Busybox will not be built inside the lxc container; rather it will be built on (any) host linux machine with sudo privileges._

### 3. RISCV GNU Toolchain

- Log in to the non-sudo user of the lxc container (here SSH is used to log in).
- Install the prerequisites for building the RISCV GNU TOOLCHAIN inside the lxc container with the `root` user using the command below.

```shell
apt-get install autoconf automake autotools-dev curl python3 libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev libexpat-dev libncurses-dev
```

- Clone the GNU toolchain using the command below.

```shell
git clone https://github.com/riscv-collab/riscv-gnu-toolchain.git
```

- Create a directory in which the RISCV GNU toolchain is to be installed (here it will be `/home/qemu-user/riscv-linux/riscv-gnu-installed`).
- Execute the following command inside the cloned repository with `--prefix` as the absolute path of the directory where the RISCV toolchain is to be installed.

```shell
./configure --prefix=/home/qemu-user/riscv-linux/riscv-gnu-installed
```

- Execute the following inside the cloned repository (execution of this command will take a while to complete).

```shell
make linux -j$(nproc) # 'nproc' is the command used to determine the number of processors in the machine so that 'make' can use parallelism.
```

- After the execution of the command is complete, add the `bin` directory created inside the riscv installation directory to `$PATH` and add the expression to `.bashrc`. According to this documentation, the following expression will be added to `.bashrc`.

`PATH="/home/qemu-user/riscv-linux/riscv-gnu-installed/bin:$PATH"`

- Check if the toolchain is installed by executing the following commands.
```shell
exec $SHELL
riscv64-unknown-linux-gnu-gcc
```

Expected output:

```shell
fatal error: no input files
compilation terminated.
```

- Now the RISC-V linux toolchain is ready!

### 4. Busybox

Busybox is a package providing the basic linux utilities and the set of directories for linux to boot into. Busybox will be installed on the host machine instead of the lxc container. The reason is that the linux kernel requires a `block oriented device` and a `character oriented device` to boot, and those devices can only be created (using the `mknod` command) on a machine with sudo privileges. Busybox will be used in creating the initial ram disk file (in gz format) which is used to boot the kernel. It does not matter on which operating system or machine this file is created, but the machine on which it is compiled must also have the RISCV GNU toolchain installed as described above.

- Clone Busybox using the command below.

```shell
git clone https://git.busybox.net/busybox
```

- Navigate to the cloned directory.

```shell
cd busybox
```

- Before building busybox, we need to produce a configuration (.config file) for it. It is better to apply the default configuration and then change only the options which are desired.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig
```

- After the command is executed, a `.config` file will be present in the busybox cloned directory.
- Now an additional option is to be enabled, which makes busybox build the libraries into the executable instead of using separate shared libraries. For this purpose, execute the following command to access the configuration menu. Then go to the `Settings` menu by pressing enter and from there enable `[ ] Build static binary (no shared libs)` by pressing space. After the option is enabled, exit by double-pressing `esc` twice and answer yes to the prompt about saving the file.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- menuconfig
```

![Selection_005](../doc_images/Selection_005.png)
![Selection_007](../doc_images/Selection_007.png)

- Now that configuration is complete, build busybox by executing the following command.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- -j$(nproc)
```

- Execute the following command, which will produce all the basic linux utilities in the `_install` directory inside the `busybox` repo directory.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- -j$(nproc) install
```

- Navigate to the `_install` directory and create a `dev` directory.

```shell
cd _install
mkdir dev
```

- Now, linux `console` and `ram` devices are to be created inside the `dev` directory. A fact to understand here is that every device in linux is a file, but of a special kind. Details on these devices can be found [here](https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/dev.html). In this documentation the `mknod` command is used to create these devices.

```shell
sudo mknod dev/console c 5 1
sudo mknod dev/ram b 1 0
```

- After executing the above commands, files with the names `ram` and `console` will be created as shown in the image below.

![Screenshot from 2022-10-02 21-45-00](<../doc_images/Screenshot from 2022-10-02 21-45-00.png>)
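The created nodes can also be double-checked with `ls -l`, which should show a character device (`c`) and a block device (`b`) with the major/minor numbers used above:

```shell
ls -l dev/
# Expected output similar to:
# crw-r--r-- 1 root root 5, 1 ... console
# brw-r--r-- 1 root root 1, 0 ... ram
```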
- Now an `init` file is needed, because the linux kernel does not boot into a shell by itself; rather it searches the directories for an `init` file (read the linux kernel messages during the boot procedure). The `init` file contains commands to mount some directories during boot (more information can be found [here](https://tldp.org/LDP/abs/html/systemdirs.html)). In `busybox/_install`, create a file with the following contents (be sure to make it executable with `chmod +x init`).

```shell
vim init
```

```shell
#!/bin/sh
echo "### INIT SCRIPT ###"
mkdir /proc /sys /tmp
mount -t proc none /proc #For processes
mount -t sysfs none /sys #For all the devices on the machine
mount -t tmpfs none /tmp #For virtual memory
echo -e "\nThis boot took $(cut -d' ' -f1 /proc/uptime) seconds\n"
exec /bin/sh
```

- Now that all the files are ready for linux to boot into, pack them in `cpio` format and then compress them to `gz` format. The `cpio` format is used because it is part of "Early userspace support" in the linux kernel (see [link](https://docs.kernel.org/driver-api/early-userspace/early_userspace_support.html)), whereas the `gz` format is one of the formats accepted by the qemu emulator. The following command produces a qemu-compatible initramfs file for the linux kernel to boot in.

```shell
find -print0 | cpio -0oH newc | gzip -9 > ../initramfs.cpio.gz
```

- **Command Details:**
  - `find -print0` separates the file names with the null character
  - `cpio -0oH newc` produces an archive file in `newc` format
  - `gzip -9` creates a gz format zip file. `-9` represents the best compression level at the slowest speed
  - `../initramfs.cpio.gz` represents the output file, which is created in the parent of the present working directory.
- At this point, our work with busybox is done.
- Copy the produced file into the lxc container (use the `scp` command if it is accessed over ssh).

### 5. QEMU Emulator

- Install the pre-requisites of the qemu emulator on the lxc container with the root user using the following command (see [link](https://wiki.qemu.org/Hosts/Linux)).

```shell
apt-get install git libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev ninja-build
```

- Clone the QEMU Emulator repository using the command below with the `root` user:

```shell
git clone https://github.com/qemu/qemu.git
```

- Build QEMU for RISCV with the `root` user using the commands below (see [link](https://wiki.qemu.org/Documentation/Platforms/RISCV)).

```shell
./configure --target-list=riscv64-softmmu
make -j$(nproc)
```

- For a system-wide installation of the QEMU Emulator, run the following command with the `root` user.

```shell
make -j$(nproc) install
```

- After the execution of this command, the work with the QEMU Emulator is done and its commands can be accessed anywhere.

### 6. Linux Kernel

- Clone the linux kernel from Linus Torvalds' repository using the command below.

```shell
git clone https://github.com/torvalds/linux.git
```

- Navigate to the cloned repository.

```shell
cd linux
```

## Building the Linux kernel with `riscv64-unknown-linux-gnu-gcc`

- Before building the linux kernel with the RISCV toolchain, a configuration file (.config) must be produced in its directory. First produce a file with the default configuration, then change the configuration according to needs.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- menuconfig
```

- The above command will open the configuration menu. Enter `General setup`, scroll down and enable `Initial RAM filesystem and RAM disk (initramfs/initrd) support` using the `Space` key. Then enter `() Initramfs source file(s)` and put there the absolute path to the `initramfs.cpio.gz` file which was just created using busybox.
For this documentation, it is `/home/qemu-user/riscv-linux/initramfs.cpio.gz`. Double-press `esc` and save the file.

![Selection_008](../doc_images/Selection_008.png)

![Selection_009](../doc_images/Selection_009.png)

![Selection_011](../doc_images/Selection_011.png)

- Now the linux kernel is ready to be compiled with the `riscv64-unknown-linux-gnu-gcc` toolchain, so execute the command below.

```shell
make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- -j$(nproc)
```

- If the above command executed without any errors, `Kernel: arch/riscv/boot/Image.gz is ready` will be printed on the terminal. On newer linux kernels it might be scrolled up a little; use `Ctrl-Shift-F` to find it.

## Booting into the linux kernel using the QEMU emulator

Now that everything is ready, execute the following command in the `linux/arch/riscv/boot/` directory to boot linux on SiFive's Unleashed machine.

```shell
qemu-system-riscv64 -kernel Image -machine sifive_u -nographic
```

- **Command Details**
  - `qemu-system-riscv64` is the qemu built for riscv64
  - `-kernel` takes the image produced by the linux kernel compilation, present in the `linux/arch/riscv/boot/` directory.
  - `-machine` takes one of the machine names available in `qemu-system-riscv64` as its argument. The available machines can be listed on the terminal using the command `qemu-system-riscv64 -machine help`.
  - `-nographic` restricts the use of a GUI (which is the better option considering the lxc container does not support gtk initialization).
- If everything goes right, the kernel will boot successfully as shown in the picture below.
![Selection_012](../doc_images/Selection_012.png)

diff --git a/mkdocs_src/docs/Building_qemu.md b/mkdocs_src/docs/Building_qemu.md
deleted file mode 100755
index b5496cc..0000000
--- a/mkdocs_src/docs/Building_qemu.md
+++ /dev/null
@@ -1,153 +0,0 @@
# Installing `QEMU` for emulating riscv64

`QEMU` is an open-source emulator. It can be used to emulate different architectures on a single machine. In the RISC-V CI there are various programs which run on the RISC-V architecture; instead of porting them to a dedicated riscv board, they can be run readily on the qemu emulator. Here two types of `QEMU` emulators will be used for RISC-V applications:

1. **qemu-system-riscv64:** It can be used to load a complete linux operating system image.
2. **qemu-riscv64:** It can be used to execute a program's binary directly without the need for a complete operating system.

## Installing Pre-requisites

Execute the following command to install the pre-requisites for installing `qemu` on ubuntu 22.04 (jammy).

```shell
sudo apt-get install meson git libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev ninja-build
```

`qemu-slirp` is important for enabling user-level networking with `qemu-system-riscv64` while loading an image of a server installation of ubuntu, so it needs to be installed first.

Get the source code of `qemu-slirp` using the following command:

```shell
git clone https://github.com/openSUSE/qemu-slirp.git
```

Then execute the following commands to build and install `slirp` with meson, so that it can later be used by qemu during its build.

```shell
meson build
ninja -C build install
```

**Note:** Make sure you have the `riscv64-unknown-linux-gnu` toolchain installed for compiling programs and executing them on qemu later.

There are some optional dependencies which one can install, but they are not needed to build a working qemu.
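Whether the `slirp` build is visible to qemu's configure step can be checked with `pkg-config` (assuming the library installed its `slirp.pc` file to a standard location):

```shell
# Should print the installed libslirp version
pkg-config --modversion slirp
```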
## Installing `qemu-system-riscv64`

### What is `qemu-system-riscv64`

`qemu-system-riscv64` is a qemu executable which can load a complete linux distribution. It cannot take a program's executable binary as an argument and run it without a dedicated linux distribution.

### Installing `qemu-system-riscv64` on ubuntu

- Get the source code of `qemu` from github using the command below.

```shell
git clone https://github.com/qemu/qemu.git
```

- Configure qemu for `riscv64-softmmu` with the following command (replace $PREFIX with a valid installation location).

```shell
./configure --prefix=$PREFIX --target-list=riscv64-linux-user,riscv64-softmmu --enable-slirp
```

- Execute the following command to start the build.

```shell
make
```

- Execute the following command to install the binaries at the `$PREFIX` location.

```shell
make install
```

_**Note:** After the installation with `slirp`, the following error can be encountered on some systems._

```shell
qemu-system-riscv64: symbol lookup error: qemu-system-riscv64: undefined symbol: slirp_new, version SLIRP_4.0
```

_**Solution:** This can be solved by executing the following command in the source directory of qemu (which was cloned from github)._

```shell
[sudo] ldconfig
```

### Testing `qemu-system-riscv64`

`qemu-system-riscv64` can only be tested by booting a linux operating system. See [Booting RISC-V Ubuntu 22.04 on `qemu-system-riscv64`](Booting_ubuntu22.04_riscv64.md).

## Installing `qemu-riscv64`

### What is `qemu-riscv64`

`qemu-riscv64` is also a qemu executable, but instead of booting a complete operating system (like `qemu-system-riscv64`), it can readily execute binaries.
Throughout the cross-compiling section, `qemu-riscv64` will be used with `linux-user`, and the executable of every program (e.g. python, ruby etc.) can be tested on qemu.

### Installing `qemu-riscv64` on ubuntu 22.04

- Get the source code of `qemu` using the command below.

```shell
git clone https://github.com/qemu/qemu.git
```

- Use the following command in the root directory of the repository to configure `qemu` for `riscv64-linux-user`.

```shell
./configure --target-list=riscv64-linux-user --prefix=$PREFIX # Replace $PREFIX with a valid location to install at
```

**Note:** If this is not your architecture/platform, you can see a list of available platforms/architectures by executing the following command.

```shell
./configure --help
```

- Use the following command to start the build process.

```shell
make -j$(nproc)
```

- After the build is complete without any errors, use the following command to install the binaries at the `$PREFIX` location.

```shell
make install
```

- Add `$PREFIX/bin` to the `$PATH` variable so that `qemu-riscv64` may be recognized as a command.
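Once `$PREFIX/bin` is on `$PATH`, the installation can be sanity-checked (the version printed will vary with the checked-out source):

```shell
qemu-riscv64 --version
```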
- Built this way, `qemu-riscv64` has an issue with the sysroot: it starts searching for libraries in the root folder of the machine, which is based on `x86_64-linux-gnu`. A simple workaround is to give the path of the `sysroot/` folder located where the `riscv64-unknown-linux-gnu` toolchain is installed. Here that directory will be denoted as `$RISCV_SYSROOT`.

### Testing `qemu-riscv64`

- Create a C file in your favorite editor or by using the commands below:

```shell
echo '#include <stdio.h>' > helloworld.c
echo 'int main(){' >> helloworld.c
echo 'printf("Hello World !");' >> helloworld.c
echo '}' >> helloworld.c
```

- Execute the following command to compile the C program with the `riscv gnu toolchain`.

```shell
riscv64-unknown-linux-gnu-gcc helloworld.c -o helloworld
```

- Execute the following command to execute the compiled binary on `qemu-riscv64`.

```shell
qemu-riscv64 -L $RISCV_SYSROOT ./helloworld
```

- If everything went right, the following output will be shown.

```shell
Hello World !
```

diff --git a/mkdocs_src/docs/Creating_Jenkins_Node_on_LXC.md b/mkdocs_src/docs/Creating_Jenkins_Node_on_LXC.md
deleted file mode 100755
index c957697..0000000
--- a/mkdocs_src/docs/Creating_Jenkins_Node_on_LXC.md
+++ /dev/null
@@ -1,188 +0,0 @@
# Creating a Jenkins Node on an LXC Container

## What is a container

A container is a virtualization method for isolating applications (or even operating systems) from each other.

## Why do we need a container for a Jenkins node

In Jenkins, a node is a location where our jobs run. One user can use one node for all of their processes, and multiple users may also share one node for all of their processes. In a Jenkins freestyle project, we can use the bash shell or the windows command shell, which makes it possible to navigate anywhere on the server machine. This possibility can lead to various security and integrity issues for server administrators and also for other users of that webserver. So one must isolate the nodes and allocate a node to each user separately.

## What is an LXC Container

LXC stands for Linux Containers. LXC is a package for linux operating systems which provides linux users with containers that may contain a whole linux operating system while being more lightweight than a virtual machine. More information regarding LXC can be found at linuxcontainers.org.

On ubuntu 20.04 one can install LXC using the command
`[sudo] apt-get install lxc`

## Creating a container with LXC

**_NOTE: Throughout this document, the name of the container will be `my-container` and the name of the user will be `user1`. So wherever my-container is written, one may change the name to whatever one wants._**

## Pre-requisites

Before proceeding, it is important to mention that at the time of writing this document the following are the specifications for the linux kernel and distribution:

**Linux Kernel**: 5.15.0-46-generic (can be checked on ubuntu with the command `uname -r`)
**Distribution**: Ubuntu focal (20.04) (can be checked on ubuntu with the command `lsb_release -a`)

By default, linux users are not allowed to create any network device on the machine. To allow that, one must add an entry for the user in `/etc/lxc/lxc-usernet`. (The uid and gid of the user you want to use can be looked up in `/etc/passwd` and `/etc/group` respectively.)

After identifying the user you want to allow to create the network devices, one must go to `/etc/lxc/lxc-usernet` and add an entry in the following format:

`<username> veth lxcbr0 10`
e.g.
`jenkins veth lxcbr0 10`

In the above example `jenkins` is the username, `veth` is the network device type used for bridging the container's virtual network device to the physical network device, and `lxcbr0` is the bridge the device will be attached to (you will be able to see that the ethernet device in our container is bridged through `lxcbr0`). `10` represents the number of devices the specified user is allowed to create.

According to the lxc documentation, on ubuntu 20.04 an additional command is required before creating an lxc container:
`export DOWNLOAD_KEYSERVER="hkp://keyserver.ubuntu.com"`

## Creating image

After this, one can create a container using the following command.

`systemd-run --unit=my-unit --user --scope -p "Delegate=yes" -- lxc-create -t download -n my-container`

This runs the lxc container creation with the unit name `my-unit` and container name `my-container`, and delegates a control group (also known as a `cgroup`), which is needed for resource allocation to the processes in the container.

This will output the list of available linux distributions in which one may want to run the container, and will prompt for a Distribution as shown in the following image.

![unnamed (4)](<../doc_images/unnamed (4).png>)

After selecting a suitable distribution, release and architecture (also mentioned in the table), the container may be created as shown in the image below:

![unnamed (5)](<../doc_images/unnamed (5).png>)

The next thing is to start the container, which will change its state from STOPPED to RUNNING, using the following command.

`lxc-start -n my-container`

The above command will have no output if it succeeds.
The state of the container can be checked using the following command.

`lxc-info -n my-container`

After starting the container, its state will be set to running; this is just like turning a linux machine ON.

From this point onwards, if one wants to use the machine in the terminal, use the following command, which will switch the terminal to the root of the container.

`lxc-attach my-container`

![unnamed (6)](<../doc_images/unnamed (6).png>)

Now the container is ready to be used and is completely isolated from the host machine.

## Using SSH to access the container with username and password

The above-mentioned method can be used to attach the host machine's terminal to the container and thereby access it. But if one wants to access the machine remotely, one possible and well-known method is to configure and use SSH on the container.

As it is an out-of-the-box linux distro where only the root user is present, first create another user using the following commands and then manage its permissions for the `/home` folder.

```shell
# Considering you remain the root user for the execution of all the following commands

useradd user1
cd /home
mkdir user1              # Creating home directory for user1
chown user1:user1 user1  # Giving ownership of home directory to new user

# For giving the new user the same shell and bashrc configuration, use the following
# command; otherwise the shell will be very basic and inconvenient to use.

usermod -s /bin/bash user1
```

For the sake of simplicity of this document, the name used for the new user here is `user1`.
(You may want to set the password for `user1` by executing the `passwd user1` command as root.)

At this point `user1` is not in the sudoers. To add it to the sudoers, it must be added to the sudo group, which can be done by using the following command.
`usermod -a -G sudo user1`

Now switch to user1 using the following command.

`su - user1`

Now install openssh-server for configuring ssh for user1.

`sudo apt install openssh-server`

After that, one must find the ip of the container being used. For this, either run the following commands while in the container as user1,

```shell
sudo apt install net-tools

# Because ifconfig is part of net-tools, which is not installed by default on a new container

ifconfig -a
```

![unnamed (7)](<../doc_images/unnamed (7).png>)

OR open a new terminal on the host machine and execute the following command.

`lxc-info my-container -iH`

![unnamed (8)](<../doc_images/unnamed (8).png>)

So, the ip of the container is `10.0.3.127`.
The command for establishing an ssh connection to a remote machine is mentioned below; it will ask for the password of the remote machine, which is actually the container in our case.

`ssh user1@10.0.3.127`

After entering the password, the terminal will switch to the container's user1 as can be seen in the following image.

![unnamed (9)](<../doc_images/unnamed (9).png>)

## Using SSH to access Jenkins agents on the container

First install some initial dependencies (git, jdk, jre) on the container for running agents on it.

```shell
sudo apt update
sudo apt install default-jdk default-jre git maven
```

Now log in to jenkins with administrator privileges and create a node in it from `Dashboard > Manage Jenkins > Nodes` and press `+ New Node`. Enter a name for the node and select the desired node type.
For this documentation, the node will be of the permanent type and its name will be `temp_node`.

![unnamed (10)](<../doc_images/unnamed (10).png>)

After this, click on `Create`, which will display the configuration page of the node.

1. Write the description of the node as desired.
2. Number of executors is the number of builds the node may run concurrently (it is better to set it to the number of processors present on the machine which is running this node).
3. Remote root directory will be the directory where the jobs will run by default on the node. In our case this will be a specified directory inside the container.
4. Labels indicate that this node will run a job only when the job specifies one of these labels; otherwise this node will not be used (this also depends on the usage method in the next option). If the purpose is to use the node by default for every job, then leave it empty.
5. Select a desired usage option.
6. In launch methods, select "Launch agents via SSH".
    - In "Host" enter the ip address of the container, which is 10.0.3.127 in our case.
    - In "Credentials", click Add; this will open another sub-dialog for entering credential information.
    - Select the kind as "Username with password".
    - Leave the other options as-is and write the username and password of the container user; in our case the username will be user1 and the password will be the one which was set for user1.
    - "ID" and "Description" are optional.
    - Click on "Add".
    - Now that the credentials are added, click on the dropdown menu and select the username you just added, user1 in our case.
7. After this, the rest of the options need not be changed if this node is going to be a default node.
8. Click on "Save".

After the complete setup, the configuration for this node will look something like this.
![unnamed (11)](<../doc_images/unnamed (11).png>)

![unnamed (12)](<../doc_images/unnamed (12).png>)

![unnamed (13)](<../doc_images/unnamed (13).png>)

If no issue is encountered during this whole setup, jenkins will take us to the log, and after some time (when the ssh connection is established) we can see "Agent successfully connected and online" at the bottom of the log, as can be seen in the screenshot below.

![unnamed (14)](<../doc_images/unnamed (14).png>)

**After this point, the node will be able to run jobs from the container directory.**

## Reference Links

Documentation for LXC containers can be found at linuxcontainers.org.
Details regarding Jenkins ssh agents can be found in the documentation of the Jenkins SSH Build Agents plugin.

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling python3.8.15.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling python3.8.15.md
deleted file mode 100755
index b1a2c06..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling python3.8.15.md
+++ /dev/null
@@ -1,28 +0,0 @@
# Cross compiling python3.8.15 for `riscv64`

This document covers how to build python 3.8.15 for the riscv64 architecture while using x86_64 as the build machine.

## Building Python

Get the source code of python 3.8.15 in the form of a tarball from [this](https://www.python.org/downloads/source/) link.

Once the tarball is obtained, extract it and use the following command in its root folder to configure it.

```shell
# Here $PREFIX is the directory where the binaries are desired to be installed.
./configure --host=riscv64-unknown-linux-gnu --build=x86_64-linux-gnu --prefix="$PREFIX" --disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
```

After the above command is executed, use the following command to start the build.

```shell
make -j$(nproc)
```

Now, to install the binaries at the location given by `$PREFIX`, use the following command.

```shell
make install
```

This will install the binaries for python3.8.15 in the $PREFIX directory, and the result can be checked using `qemu-riscv64`.
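A quick smoke test of the cross-built interpreter (assuming `$PREFIX` from the build above and `$RISCV_SYSROOT` pointing at the toolchain's sysroot) might look like this:

```shell
# Run the riscv64 python binary under qemu user-mode emulation
qemu-riscv64 -L $RISCV_SYSROOT $PREFIX/bin/python3 -c 'print("hello from riscv64")'
```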
diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_coremark.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_coremark.md
deleted file mode 100755
index 38b150a..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_coremark.md
+++ /dev/null
@@ -1,51 +0,0 @@
# Cross-compiling Coremark

Coremark is another benchmarking tool. Here coremark will be cross-compiled for `riscv64-unknown-linux-gnu` and will be run on `qemu-riscv64`. The github source code commit at the time of the build is `eefc986ebd3452d6adde22eafaff3e5c859f29e4` and the branch is `main`.

## Getting the source code

Execute the following command to get the source code of coremark.

```shell
git clone https://github.com/eembc/coremark.git
```

## Tweaking source files for `riscv64-unknown-linux-gnu`

At the time of this documentation, linux is being used for this test. First of all, `core_portme.mak` will be changed.

1. Navigate to the `linux/` directory in the source repository.
2. Open `core_portme.mak`. Here a single line is used to include `core_portme.mak` from the `posix` directory. So, navigate to the `posix/` directory in the source folder.
3. Open `core_portme.mak` in the `posix/` directory and make the following changes to the variables here.
    1. Change `CC?=cc` to `CC=riscv64-unknown-linux-gnu-gcc`.
    2. Scroll down and change `EXE=.exe` to `EXE=` (it should be blank).
    3. Scroll down and change `LD=gcc` to `LD=riscv64-unknown-linux-gnu-ld`.
    4. As we are using `qemu-riscv64`, change `RUN=` to `RUN=qemu-riscv64 -L "$$RISCV_SYSROOT"`.
    5. Save the changes and exit this file.
4. Now open `core_portme.h`, change `#define USE_CLOCK 0` to `#define USE_CLOCK 1` and save.
5. Navigate to the source directory of the repository and execute the following command.

```shell
make PORT_DIR=linux/
```

If everything went right, the output results will be stored in `run1.log` and `run2.log` and will be of the form shown below.

```shell
2K validation run parameters for coremark.
CoreMark Size : 666
Total ticks : 12368459
Total time (secs): 12.368459
Iterations/Sec : 8893.589735
Iterations : 110000
Compiler version : GCC12.2.0
Compiler flags : -O2 -DPERFORMANCE_RUN=1 -lrt
Memory location : Please put data memory location here
 (e.g. code in flash, data on heap etc)
seedcrc : 0x18f2
[0]crclist : 0xe3c1
[0]crcmatrix : 0x0747
[0]crcstate : 0x8d84
[0]crcfinal : 0x0956
Correct operation validated. See README.md for run and reporting rules.
```
\ No newline at end of file

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_dhrystone.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_dhrystone.md
deleted file mode 100755
index 61648d3..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_dhrystone.md
+++ /dev/null
@@ -1,27 +0,0 @@
# Cross-compiling `dhrystone`

Dhrystone is a benchmarking tool. Here `dhrystone` will be compiled from source and run on `qemu-riscv64`.
Dhrystone results are compared to the VAX 11/780: the VAX 11/780 achieves 1757 dhrystones per second, which is referred to as 1 MIPS of the VAX 11/780. So the number of dhrystones per second is obtained and then divided by 1757 to get MIPS. See this [link](https://wiki.cdot.senecacollege.ca/wiki/Dhrystone_howto) for more details.

## Cross-compiling for `riscv64-unknown-linux-gnu`

- Get the source code of `dhrystone` using the command below.

```shell
git clone https://github.com/sifive/benchmark-dhrystone.git
```

- Navigate to the root directory of the repository and compile the program with `riscv64-unknown-linux-gnu-gcc` instead of the native `gcc`.

```shell
cd benchmark-dhrystone
make CC=riscv64-unknown-linux-gnu-gcc
```

- Execute the following command to execute the `dhrystone` binary.

```shell
qemu-riscv64 -L $RISCV_SYSROOT ./dhrystone
```

**Note:** You may want to tweak `Makefile` and `dhry_1.c` a little bit to get the correct results.

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_go.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_go.md
deleted file mode 100755
index fcaa8d7..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_go.md
+++ /dev/null
@@ -1,40 +0,0 @@
# Cross compiling `Go`

Go currently has support for the riscv64 architecture. This document will cover how to compile go on x86 for the riscv64 architecture.

_**Note:** Right now, go can be cross-compiled on x86 but the result cannot be executed on x86 with qemu-riscv64, because it needs to execute `goroot/pkg/tool/compile` and qemu-riscv64 can only execute one binary at a time._

## Pre Requisites

On ubuntu, the following pre-requisites should be installed.

- Snap
- qemu-riscv64 (linux-user)

Go's source code is written in go. That means a go toolchain is needed to compile the source code (see [link](https://go.dev/doc/install/source)). As this document cross-compiles the code, first of all install go on the build machine. Use the following command on ubuntu to install a go language compiler.

```shell
sudo snap install go --classic
```
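The bootstrap toolchain can be confirmed before building (the version printed will vary):

```shell
go version
```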
After this, get the source code of `go` using the following command.

```shell
git clone https://go.googlesource.com/go goroot
```

Now set the environment variables as follows (`GOROOT_BOOTSTRAP=/snap` works because `$GOROOT_BOOTSTRAP/bin/go` then resolves to snap's `/snap/bin/go`).

```shell
export GOROOT_BOOTSTRAP=/snap
export GOARCH=riscv64
export GOOS=linux
```

Navigate to `goroot/src` and execute the following command.

```shell
./all.bash
```

After this, an executable file will be located in the folder `goroot/bin/linux_riscv64`.

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_jdk.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_jdk.md
deleted file mode 100755
index bd46619..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_jdk.md
+++ /dev/null
@@ -1,24 +0,0 @@
# Cross-compiling JDK for `riscv64-unknown-linux-gnu`

JDK is an abbreviation of 'Java Development Kit'. It is used for compiling and executing java-based programs and applications.

All the information regarding building the jdk and the relevant dependencies is given in the `doc/building.md` file of the jdk repository.

Make sure to have the `riscv64-unknown-linux-gnu` toolchain installed on the machine.

## Getting source code

The source code of the jdk can be obtained using the command below.

```shell
git clone https://github.com/openjdk/jdk.git
```

## Building the source code

- First of all, configure the source code using the command below.

```shell
bash configure --host=riscv64-unknown-linux-gnu --build=x86_64-linux-gnu --target=riscv64-unknown-linux-gnu --prefix=/home/ali/custom_installed/RISCV/jdk/967a28c3d85fdde6d5eb48aa0edd8f7597772469 --with-cups=/home/ali/custom_installed/cups --with-fontconfig=/home/ali/custom_installed/fontconfig/e291fda7d42e5d64379555097a066d9c2c4efce3 --x-includes=/usr/include --x-lib=/usr/lib
```
\ No newline at end of file

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ninja-build.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ninja-build.md
deleted file mode 100755
index 722c5ee..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ninja-build.md
+++ /dev/null
@@ -1,38 +0,0 @@
# Cross-compiling ninja-build for RISC-V

Ninja is a small build system with a focus on speed. It differs from other build systems in two major respects: it is designed to have its input files generated by a higher-level build system, and it is designed to run builds as fast as possible.

This document will cover how to compile ninja to work on the 64-bit RISC-V architecture.

## Getting source code

Use the following commands to get the source code of ninja-build and navigate to the source directory.

```shell
git clone git://github.com/ninja-build/ninja.git
cd ninja
git checkout release
```

Create the following cmake toolchain file inside the root directory of the ninja-build repository.
```cmake
# the name of the target operating system
set(CMAKE_SYSTEM_NAME Linux)

# which compilers to use for C and C++
set(CMAKE_C_COMPILER riscv64-unknown-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER riscv64-unknown-linux-gnu-g++)

# where is the target environment located
set(CMAKE_FIND_ROOT_PATH /softwares/RISCV/riscv64-unknown-linux-gnu/50c1b734e889e5cbb88bcc7f14975ea9a1d0b936/sysroot
   )

# adjust the default behavior of the FIND_XXX() commands:
# search programs in the host environment
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)

# search headers and libraries in the target environment
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```
\ No newline at end of file

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_openssl.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_openssl.md
deleted file mode 100755
index 91f002e..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_openssl.md
+++ /dev/null
@@ -1,90 +0,0 @@
# Cross-compiling openssl

## What is `openssl`

Openssl is a software library which is used inside many high-level languages (e.g. Python, Ruby etc.) and also in linux itself. It is used for security and other cryptography applications.

## Building openssl v1.0.1 for the `riscv64` architecture

The following are the steps used to build openssl for the riscv64 architecture.

- Get the source code of `openssl` and navigate inside the cloned repository using the commands below.

```shell
git clone https://github.com/openssl/openssl.git
cd openssl
```

- Configure openssl for building. In openssl there are several `os/compiler` choices which one can use to build for one's architecture, but in our case there is no choice for building with riscv64. As openssl is written in the C language, it can be compiled whether or not that support is given. Use the following command to generate a `Makefile` for `linux-generic64`.

```shell
./Configure linux-generic64 --prefix=$PREFIX # Prefix is the directory where you want binaries to be installed at the end
```

- After the above command has completed successfully, run the following command to build openssl using the `riscv64-unknown-linux-gnu-gcc` compiler instead of the native gcc compiler.

```shell
make -j$(nproc) CC=riscv64-unknown-linux-gnu-gcc
```

- Install the binaries in the specified `--prefix` using the command below.

```shell
make -j$(nproc) install CC=riscv64-unknown-linux-gnu-gcc
```

The installed binary can be tested on `qemu-riscv64` using the command below:

```shell
qemu-riscv64 -L $RISCV_SYSROOT ./openssl
```

Here $RISCV_SYSROOT is the `sysroot/` folder located inside the riscv gnu toolchain installation directory.

The above-mentioned command will start the openssl console if everything went right.

**Note:** Do not move the openssl directory or rename it: some files inside it are referenced by absolute paths, and changing or renaming the directory will cause other packages to fail to configure openssl for themselves when cross-compiling.

## Building openssl v1.1.1r for the `riscv64` architecture

In openssl v1.1.1r, there is support for `linux64-riscv64`. The following is the procedure for cross-compilation.

- Check out `v1.1.1r` of openssl by executing the following command in the repository directory.

```shell
git checkout OpenSSL_1_1_1r
```

- Execute the following command to configure for the riscv64 architecture and generate a `Makefile`.
```shell
./Configure linux64-riscv64 --prefix=$PREFIX # Replace $PREFIX with where you want to install binaries
```

- Execute the following command to cross-compile for `riscv64-unknown-linux-gnu`.

```shell
make CROSS_COMPILE=riscv64-unknown-linux-gnu-
```

- Then install the binaries at `$PREFIX` with the following command.

```shell
make install
```

### Solving post-installation errors

On some operating systems, the installed binaries may not run properly and will give the following error.

```shell
./openssl: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```

This means that the shared libraries cannot be found in the path where the system is looking for them. This can be solved by setting the `LD_LIBRARY_PATH` variable as follows.

```shell
export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH
```

It is good practice to include the above in the `bashrc` for debian users.

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ruby.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ruby.md
deleted file mode 100755
index ae1fc8e..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_ruby.md
+++ /dev/null
@@ -1,104 +0,0 @@
# Cross Compilation of Ruby

## System Specifications

**Build Architecture:** x86_64-linux-gnu
**Host Architecture:** riscv64-unknown-linux-gnu
**Operating System for Installation Procedure:** Ubuntu 20.04

## Pre-requisites

The pre-requisites for installing ruby from source can be installed using the following command.

```shell
sudo apt-get -y install libc6-dev libssl-dev libmysql++-dev libsqlite3-dev make build-essential libreadline6-dev zlib1g-dev libyaml-dev
```

Other than this, ruby itself is needed for building ruby from source.

```shell
sudo apt install ruby
```

There is another thing which needs to be taken care of before building ruby from source. If ruby is installed on the system itself using `apt`, then cross-compiling ruby will end up in an error as shown in the image below. This error is seen with `ruby 2.7.0p0 (2019-12-25 revision 647ee6f091) [x86_64-linux-gnu]`.

![ruby_system_error](../doc_images/ruby_system_error.png)

To tackle this issue, one workaround is to build and install ruby for the native system, then delete the ruby which was installed through `apt`. This procedure is covered in the `Build` section.

## Getting source code

The source code of ruby can be obtained from the github repository using the command below:

```shell
git clone https://github.com/ruby/ruby.git
```

## Build

### Installing ruby for the native architecture

Before cross-compiling, one must install ruby from source on the native machine, which will solve the error described in the `Pre-requisites` section above.

- (*THIS STEP IS STRONGLY RECOMMENDED!*) In the source directory, create a folder with any name in which the `Makefile` will be generated; otherwise a lot of files will be created in the source directory (possibly create a copy of the repo directory).
- In the source directory of ruby, run the following command to generate the `configure` file.

```shell
./autogen.sh
```

- After this, run the following `configure` command to generate the `Makefile`.

```shell
../configure --prefix=$PREFIX #$PREFIX is where you want to install binary files at the end, so replace it.
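# For example (hypothetical path, run from the build folder created above):
#   ../configure --prefix=$HOME/ruby-native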
```

- After the above command has completed, run the following command to start the build.

```shell
make -j$(nproc) #-j$(nproc) uses parallelism for make
```

- After the above command is complete, run the following command to install the binaries at the path specified in `--prefix` above.

```shell
make install
```

- Now ruby should be available at the `$PREFIX` path. Add the $PREFIX path to the $PATH variable (also in `.bashrc`) and uninstall the ruby installed using `apt`; otherwise the source will keep using that one for building and the error will persist.

```shell
sudo apt purge ruby
```

### Cross-Compiling Ruby for `riscv64-unknown-linux-gnu`

- After the ruby installed using `apt` is uninstalled from the system, clean the working directory with the following command.

```shell
make clean
```

- After cleaning the working directory, generate the `Makefile` again for cross-compiling ruby with `riscv64-unknown-linux-gnu` as target and host, using the command below.

```shell
../../configure --prefix=$PREFIX --build=x86_64-linux-gnu --host=riscv64-unknown-linux-gnu --target=riscv64-unknown-linux-gnu
```

- After the above command is successful, start the build with the following command.

```shell
make -j$(nproc)
```

- Install the binaries at the path mentioned with `--prefix` above with the following command.

```shell
make install
```

- After this process, ruby will be installed inside the `$PREFIX/` directory.

**Note:** Currently, this process (as checked on version 3.1.2) installs ruby without the extensions shown in the following image.

![ruby_extensions](../doc_images/ruby_extensions.png)

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md
deleted file mode 100755
index 7aa1a3c..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md
+++ /dev/null
@@ -1,74 +0,0 @@
# Cross-compiling rust

Rust is a programming language just like C, but it focuses on the safety of programs. This document describes how one can build rust for x86_64 and then add support to compile code for the `riscv64` architecture. The executable for the RISC-V architecture will be able to run on `qemu-riscv64`.

_**Note:** Make sure the riscv64-unknown-linux-gnu toolchain is installed on the machine._

## Getting source code

Here the code will be taken from the `rustup` GitHub repository. This is because instead of installing `cargo`, `rustup` and `rustc` separately, just setting up rustup will install them along with itself.

Get the source code using the command below:

```shell
git clone https://github.com/rust-lang/rustup.git
```

Before starting the installation process, if you want to install rust in a specific location, set the `CARGO_HOME` and `RUSTUP_HOME` variables to the directory where you want to install rustup.

After that, considering you are in the repository directory, start the installation process using the command below:

```shell
sh rustup-init.sh -y
```

After executing the above command, follow the prompts as desired and complete the installation process.

Now add riscv64 library support in rust using the command below:

```shell
rustup target add riscv64gc-unknown-linux-gnu
```
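The added target can be confirmed as follows:

```shell
rustup target list --installed # should now include riscv64gc-unknown-linux-gnu
```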
**Note:** Currently, this process (as checked on version 3.1.2) installs Ruby without the extensions shown in the following image.

![ruby_extensions](../doc_images/ruby_extensions.png)

diff --git a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md b/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md
deleted file mode 100755
index 7aa1a3c..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Cross_compiling_rust.md
+++ /dev/null
@@ -1,74 +0,0 @@

# Cross-compiling rust

Rust is a programming language which, like C, compiles to native code, but with a strong focus on program safety. This document describes how to build Rust for x86_64 and then add support for compiling code for the `riscv64` architecture. The resulting RISC-V executables can be run on `qemu-riscv64`.

_**Note:** Make sure the riscv64-unknown-linux-gnu toolchain is installed on the machine._

## Getting source code

Here the code will be taken from the `rustup` GitHub repository. Compiling `rustup` installs `cargo` and `rustc` along with itself, so they do not need to be installed separately.

Get the source code using the command below:

```shell
git clone https://github.com/rust-lang/rustup.git
```

Before starting the installation, if you want to install Rust in a specific location, set the `CARGO_HOME` and `RUSTUP_HOME` variables to the directory where rustup should be installed.

After that, from inside the repository directory, start the installation using the command below:

```shell
sh rustup-init.sh -y
```

After executing the above command, follow the prompts as desired and complete the installation.

Now add riscv64 target support to Rust using the command below:

```shell
rustup target add riscv64gc-unknown-linux-gnu
```

Once this is complete, go to the location where a new project is to be created and use the following command to create the project's directory structure:

```shell
cargo new project_name # Use a meaningful project name
```

A directory named `project_name` will be created as soon as the above command executes successfully. The project contains a `main.rs` which initially holds a `Hello World` program.

Navigate to the `project_name` directory and create a folder named `.cargo` containing a `config.toml` file, which tells cargo during the build which target to compile for. The contents of `project_name/.cargo/config.toml` are as follows:

```toml
[build]
target = "riscv64gc-unknown-linux-gnu"

[target.riscv64gc-unknown-linux-gnu]
linker = "riscv64-unknown-linux-gnu-gcc"
```

Now the project is ready to build. Go back to the `project_name/` directory and use the following command to build it:

```shell
cargo build
```

The output of the above command should be as follows:

```shell
 Compiling project_name v0.1.0 (project_name)
 Finished dev [unoptimized + debuginfo] target(s) in 0.27s
```

After the above command, the executable named `project_name` will be available at the following location:

```shell
project_name/target/riscv64gc-unknown-linux-gnu/debug
```

The produced executable can now be run with qemu user mode. Use the following command to execute the binary:

```shell
qemu-riscv64 -L $RISCV_SYSROOT ./project_name
```

diff --git a/mkdocs_src/docs/Cross_Compiling/Overview.md b/mkdocs_src/docs/Cross_Compiling/Overview.md
deleted file mode 100755
index 616ac4d..0000000
--- a/mkdocs_src/docs/Cross_Compiling/Overview.md
+++ /dev/null
@@ -1,17 +0,0 @@

# Cross Compilation

## Need of Cross Compilation

Let's say you have a computer `A` with a processor of architecture `a`, and another computer `B` with a processor of architecture `b`. Assume computer `A` has all the necessary tools and software, whereas computer `B` has no software, tools, or compilers, and not even the dependencies needed to install them. In this scenario, you cannot download software sources (say, tarballs) directly on computer `B` and compile them there, because computer `B` has nothing to compile them with.

## Basic concept

The basic workflow in such a situation is as follows:

1. Install a compiler on computer `A` such that it runs on computer `A` and compiles code for architecture `a`.
2. Using that compiler, build a compiler that runs on computer `A` but compiles code for architecture `b`. Such a compiler is called a cross compiler. One example is the [RISC-V GNU TOOLCHAIN](https://github.com/riscv-collab/riscv-gnu-toolchain).
3. Using the cross compiler, compile all the required programs, and the compiler itself, for architecture `b`.
4. Port all the compiled binaries to computer `B`.

Such a process is also beneficial if you have created a custom architecture for which no support is available yet.
Throughout this documentation, unless otherwise specified, the cross compiler used will be the RISC-V GNU Toolchain (`riscv64-unknown-linux-gnu-gcc`) running on `x86_64` and compiling code for the `riscv64` architecture, whereas the native compiler will be `gcc` running on `x86_64` and compiling code for `x86_64`. A minimal end-to-end illustration follows.
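The sketch below contrasts the two compilers on a trivial program (file and output names are illustrative; it assumes the toolchain and `qemu-riscv64` are on `$PATH`, and that `$RISCV_SYSROOT` points to the toolchain sysroot, as used elsewhere in these docs):

```shell
# A trivial test program.
cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) { printf("Hello, RISC-V!\n"); return 0; }
EOF

# Native compile: built on x86_64, runs on x86_64.
gcc hello.c -o hello_x86
./hello_x86

# Cross compile: built on x86_64, produces a riscv64 binary.
riscv64-unknown-linux-gnu-gcc hello.c -o hello_riscv
# The cross-compiled binary cannot run natively; qemu user mode can run it.
qemu-riscv64 -L $RISCV_SYSROOT ./hello_riscv
```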
diff --git a/mkdocs_src/docs/Installing_ssl_certificates_new.md b/mkdocs_src/docs/Installing_ssl_certificates_new.md
deleted file mode 100755
index 676defd..0000000
--- a/mkdocs_src/docs/Installing_ssl_certificates_new.md
+++ /dev/null
@@ -1,40 +0,0 @@

# Configure `letsencrypt` with jenkins

SSL certificates allow a website to run over the HTTPS protocol, which ensures that the data transferred between user and server is encrypted and cannot be intercepted in transit.

_**NOTE:** Keep in mind that letsencrypt does not issue certificates for a bare IP address. It needs a domain name._

## Installing SSL certificates with `letsencrypt`

On Ubuntu, the letsencrypt client package is called `certbot`. It can be installed using the following command:

```shell
sudo apt install certbot
```

Here, standalone mode will be used for generating certificates; it verifies ownership of the machine by hosting a temporary server on port 80. Since ports below 1024 are privileged, this requires root/sudo access. Make sure that port 80 is open in the firewall and that no application is already running on it. If this process is being done in a container, also make sure port 80 is forwarded properly. Alternatively, webroot mode can be used to generate the certificates; see this [link](https://eff-certbot.readthedocs.io/en/stable/using.html) for more information.

Use the following command to generate the certificate files and keys in `/etc/letsencrypt/live/your.domain.name/`:

```shell
certbot certonly --standalone -d your.domain.name
```

This will produce `cert.pem`, `fullchain.pem` and `privkey.pem`.

Copy these files to the location where the jenkins `.war` file is located, and make sure to change their ownership from root to the jenkins user. After that, use the following commands to convert the certificate files to `pkcs12` format and import them into a keystore:

```shell
openssl pkcs12 -inkey privkey.pem -in fullchain.pem -export -out keys.pkcs12
keytool -importkeystore -srckeystore keys.pkcs12 -srcstoretype pkcs12 -destkeystore keystore
```

This will ask for a password, denoted here as `<password>`.
Use the following command to start the jenkins server with the generated SSL certificates:

```shell
java -jar jenkins.war --httpPort=-1 --httpsPort=<port> --httpsKeyStore=keystore --httpsKeyStorePassword=<password>
```

After this, go to the browser and type `your.domain.name:<port>` to reach jenkins over the HTTPS protocol. Keep in mind that Let's Encrypt certificates expire after 90 days, so renewal has to be handled; a minimal sketch follows.
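A minimal renewal sketch (note that after each renewal the `pkcs12`/keystore conversion above has to be repeated so that jenkins picks up the new certificate):

```shell
# Renews certificates that are close to expiry; up-to-date ones are skipped.
sudo certbot renew
```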
diff --git a/mkdocs_src/docs/Usage_Monitoring/Integrating_prometheus_grafana.md b/mkdocs_src/docs/Usage_Monitoring/Integrating_prometheus_grafana.md
deleted file mode 100755
index ae42d30..0000000
--- a/mkdocs_src/docs/Usage_Monitoring/Integrating_prometheus_grafana.md
+++ /dev/null
@@ -1,41 +0,0 @@

# Integrating Prometheus with Grafana

Before starting to integrate Prometheus with Grafana, make sure both are set up properly. See [this](/docs/Usage_Monitoring/Prometheus_Grafana.md) document for setting up Prometheus and Grafana as standalone tools.

## Configuration in Prometheus

Make sure all the `node_exporter` instances are running properly and the compute instances are discoverable by Prometheus. Use the following command to check whether an instance is reachable:

```shell
telnet COMPUTE_INSTANCE_IP COMPUTE_INSTANCE_PORT
```

If you get the following response, the compute instance is not reachable and there is probably something wrong with its firewall:

```shell
Name or service not known
```

Querying the exporter directly is another quick check, as sketched below.
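A direct query of the metrics endpoint with `curl` (assuming the exporter runs on its default port 9100) should return plain-text metrics if everything is healthy:

```shell
# A healthy node_exporter answers with a long list of metrics.
curl http://COMPUTE_INSTANCE_IP:9100/metrics | head
```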
## Configuration in Grafana

Enter the Grafana IP in the browser and log in with your credentials.

Use the following steps to integrate Grafana with Prometheus and create dashboards in Grafana:

- Go to the Grafana settings.
- Go to the `Data sources` tab.
- Click on `Add data source`.
- In the `HTTP` section, add the IP of the machine on which Prometheus is running, along with the port on which Prometheus is hosted (it is recommended to host Prometheus on the machine where Grafana is hosted; in that case Prometheus is accessed using the LAN IP).
- In the `Type and version` section, select Prometheus from the drop-down menu and choose the Prometheus version in the next field.
- Click on `Save and test`.

If everything goes right, it will say `Data source is working`.

## Creating a dashboard in Grafana

- For creating a dashboard, click on `Dashboards`.
- Click on `New` and then `New Dashboard`.
- Then add a panel inside the dashboard.
- If you know the mathematical query expressions (which are easier to work with and more expressive), select `Code` instead of `Builder` while creating the panels.

_**Note:** Be sure to save the dashboard, because it does not save changes on its own and an unsaved dashboard will be lost._

diff --git a/mkdocs_src/docs/Usage_Monitoring/Prometheus_Grafana.md b/mkdocs_src/docs/Usage_Monitoring/Prometheus_Grafana.md
deleted file mode 100755
index edf2b2b..0000000
--- a/mkdocs_src/docs/Usage_Monitoring/Prometheus_Grafana.md
+++ /dev/null
@@ -1,81 +0,0 @@

# Setting up Prometheus and Grafana

## What are Prometheus and Grafana

Prometheus is a tool for monitoring usage of memory, CPU, etc. It takes queries written as mathematical expressions (similar in spirit to SQL) and returns interactive graphical usage stats. It collects data through `node_exporter`, which is used to get raw data from the compute instances.

Grafana is used for creating dashboards containing panels with the graphical statistics from Prometheus. Grafana dashboards are more interactive and user-friendly than Prometheus itself.

## Setting up Node Exporters

Node exporters are needed for Prometheus to get data from the compute instances.
Download the pre-compiled `node_exporter` tarball from the Prometheus downloads page and place it on the compute instances.

Once the `node_exporter` tarball is downloaded, extract it using the following command:

```shell
tar -xvf node_exporter-x.x.x.linux-amd64.tar.gz
```

Run `node_exporter` on the compute instances using the command below (it listens on port 9100 by default):

```shell
./node_exporter
```

Follow this procedure on every compute instance which is to be monitored with Prometheus.

_**Note:** Make sure the compute instances can be reached from the machine on which Prometheus is installed._

## Configuring Prometheus

Download the pre-compiled Prometheus tarball from the official downloads page.

Once Prometheus is downloaded, extract it using the following command:

```shell
tar -xvf prometheus-x.xx.x.linux-amd64.tar.gz
```

Now either configure the `prometheus.yml` file or create another `yml` file which includes the addresses of the node exporters. Following is an example template:

```output
global:
  scrape_interval: 15s

scrape_configs:
- job_name: node
  static_configs:
  - targets: ['node_exporter_ip1:node_exporter_port1']
  - targets: ['node_exporter_ip2:node_exporter_port2']
```

## Running Prometheus

Once Prometheus is configured properly, use the following command to run it:

```shell
./prometheus
```

_Note: By default Prometheus looks for `prometheus.yml`; if you want to use some other file for configuration, you need to specify it explicitly with the `--config.file` option, as shown below._
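For example, if the scrape targets were written to a separate file (the file name here is illustrative):

```shell
# Start Prometheus with a custom configuration file instead of prometheus.yml.
./prometheus --config.file=node_targets.yml
```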
## Running Grafana

Once Prometheus is up and running, Grafana can be used to create dashboards with a graphical interface in various panels.

Download the pre-compiled Grafana tarball from the official downloads page.

Once Grafana is ready to run, use the following command to start it:

```shell
./grafana-server # By default it will run on port 3000
```

The default username and password of Grafana are both `admin`.

Once Grafana is running, a dashboard can be created with multiple panels showing data from Prometheus.

diff --git a/mkdocs_src/docs/jenkins_github_integration/Github_PR_webhook_integration.md b/mkdocs_src/docs/jenkins_github_integration/Github_PR_webhook_integration.md
deleted file mode 100755
index 4c19a72..0000000
--- a/mkdocs_src/docs/jenkins_github_integration/Github_PR_webhook_integration.md
+++ /dev/null
@@ -1,97 +0,0 @@

# GitHub Pull Request hook integration with Jenkins

**Reference Link:**

## General Guideline

When facing an issue, it is better to launch Jenkins by running the `.war` file (obtained by building the Jenkins GitHub repository) in a terminal (usually bash on Linux) and observing the terminal's output. For example, if a webhook is not created, the GUI may not show anything, but the terminal will most probably print a message giving the reason for this behavior.

## GitHub Pull Request Builder Plugin

In version control, it is better to check changes and run tests before they are merged into the main branch. Sometimes those changes are too large to check by hand, and it becomes difficult for the requested reviewer of the pull request to inspect all the changes and run tests on them manually. For that reason, it is better to automate the process so that whenever a pull request is opened, all the tests are triggered, and based on the results the reviewer decides whether or not to merge the branch into main.

This can be achieved using Jenkins' `GitHub Pull Request Builder` plugin.

## Specifications at the time of documentation

**Operating System:** Linux
**Distribution:** Ubuntu
**Release:** Focal (also known as 20.04)
**Jenkins version:** 2.371 (can be seen in config.xml)
**GitHub Pull Request Builder version:** 1.42.2

## Pre-Requisites

- Jenkins
- Git Plugin
- GitHub Plugin
- GitHub Pull Request Builder Plugin
- A GitHub account and a repository with permission to open a pull request to merge a branch.

## Setting up Jenkins configuration

- Install the above-mentioned plugins from `Dashboard > Manage Jenkins > Manage Plugins > Available Plugins`.
- Go to `Dashboard > Manage Jenkins > Configure System`.
- Scroll down to `Github Pull Request Builder`.
- Leave `GitHub Server API URL` and `Jenkins URL override` as they are.
- In `Credentials`, click on add and select `Jenkins` from the drop-down.
  - Select `Kind` as `Secret text`.
  - In `Secret`, add a GitHub personal access token, which can be acquired from the GitHub account settings.
  - Add a short description as a reminder of what these credentials are for; Jenkins tends to accumulate a lot of credentials and it gets difficult to keep track of them.
  - Leave `ID` empty.
  - Click on `Add`.
- Now select the added credentials from the drop-down menu of `Credentials`.
- Click on `Test Credentials...`.
- Check `Test basic connection to GitHub`.
- Click on `Connect to API`. This will show the message `Connected to <GitHub server> as <user>`.
- Other settings can be left empty.
- Click on save.

## Setting up Jenkins job

In this documentation a `Pipeline` job will be used, but any job type is expected to work with these settings. It is assumed that the `jenkinsfile` for building the pipeline is present in the repository which is to be built; a minimal illustrative sketch of such a file follows.
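A minimal declarative Jenkinsfile of the kind assumed here might look as follows (the stage names and `echo` steps are placeholders, not part of the original setup):

```groovy
// Minimal illustrative Jenkinsfile; replace the echo steps with the real
// build and test commands for the repository.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Build steps for the pull request go here.'
            }
        }
        stage('Test') {
            steps {
                echo 'Tests to run against the pull request go here.'
            }
        }
    }
}
```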
- Go to `Dashboard` and click `New Item`.
- Enter a job name (here it will be `github_PR_webhook`).
- Click `OK`. It will navigate to the job's configuration page.
  - Add a `Description` of choice.
  - Scroll down and check `GitHub project`. Add the URL of the GitHub repository, without the `.git` extension. It is **important** that the person creating the pull request is either in the `admin list` or the `whitelist`, because otherwise the webhook will not be created.
  - Scroll down to the `Build Triggers` section.
  - Check `GitHub Pull Request Builder`. This will open further configuration for this option.
  - Leave `GitHub API credentials` as is.
  - Add a GitHub admin's username in the `Admin list`. This is important because otherwise the checks will not run when a pull request is opened.
  - Check `Use github hooks for build triggering`.
  - Click on `Advanced` and in `Whitelist Target Branches` add the branch name for which the job is supposed to be triggered when a pull request is opened (here it is `main`).
  - Scroll down to the `Pipeline` section.
  - In `Definition`, select `Pipeline script from SCM`.
  - Select `SCM` as `Git` from the drop-down.
  - Enter the `Repository URL` from GitHub.
  - Enter `Credentials` with access to this repository (optional if the repository is public).
  - Under `Advanced`, enter `Refspec` as `+refs/pull/*:refs/remotes/origin/pr/*`[^note].
  - If the tests are to be run on the actual commit in the pull request, then under the `Branches to build` section, in `Branch Specifier`, enter `${ghprbActualCommit}`[^note].
  - Leave the other settings as they are.
  - In `Script Path`, add the path and name of the `jenkinsfile` which is present in the GitHub repository.
  - Uncheck `Lightweight checkout`[^note1].
  - Click `Apply` and then `Save`.
- After saving the job, a webhook should be created automatically in GitHub, provided the credentials in the settings are correct.

## Verifying the procedure

- In the GitHub repository, add another branch besides `main`.
- For this, expand `main` and click on `View all branches`.
- Click on `New Branch` and insert a name.
- After the new branch is created, select it instead of `main` on the repository page.
- Make some change to any of the files (even adding a space is enough).
- Commit the changes.
- Create a pull request.
- Now, after the merge conflicts are checked, the checks will run and their results will be shown with the pull request (as can be seen in the image below).

_**NOTE:** The Jenkinsfile that runs is the one present in the pull request, not the one in the main branch._

![Selection_013](<../doc_images/Selection_013.png>)

- Clicking on `Details` navigates the user to the Jenkins job result page, where the console output and each stage can be seen.

[^note]:
    This point is taken from the Jenkins `GitHub Pull Request Builder` plugin documentation at
[^note1]:
    This is an issue mentioned in the documentation of the `GitHub Pull Request Builder` plugin at .

diff --git a/mkdocs_src/docs/jenkins_github_integration/Github_push_webhook.md b/mkdocs_src/docs/jenkins_github_integration/Github_push_webhook.md
deleted file mode 100755
index 1ef45cc..0000000
--- a/mkdocs_src/docs/jenkins_github_integration/Github_push_webhook.md
+++ /dev/null
@@ -1,117 +0,0 @@

# Github 'Push' webhook integration with Jenkins

## Purpose of using github webhook integration with jenkins

Most of the time, after a push to the upstream repository, one wants to see the results of all the checks defined for the repository by the CI/CD pipeline. This tells whether there is some issue with the push and whether or not the defined checks/tests have passed. This can be achieved using GitHub push webhook integration with Jenkins.

## Jenkins version and operating system specifications

The version of Jenkins and the operating system specifications at the time of writing this documentation are:
**Jenkins version:** 2.370
**Operating System:** Linux
**Distribution:** Ubuntu
**Release:** Focal (also called 20.04)

## Pre-requisites

- Jenkins
- ngrok (only if a public IP is not available)

## Setting up the ngrok

Localhost cannot be used for GitHub webhook integration, because GitHub's servers cannot reach it. For this reason a public IP must be used. For the sake of this documentation ngrok is used, which maps localhost to a public URL that can then be reached over the internet.

The following steps can be used to set up ngrok on Ubuntu:

- Install ngrok.

```shell
sudo apt install ngrok
```

- Serving HTML content requires signing up with ngrok, so create an account there.
- Execute the following command to run ngrok, which will provide a public URL mapped to localhost.

```shell
ngrok http <port>
```

This will set up ngrok and provide a public URL for working online, as illustrated below.
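For example, if Jenkins is listening on its default port 8080 (an assumption; substitute whatever port your instance actually uses), the tunnel would be started as follows:

```shell
# Expose the local Jenkins instance through a public ngrok URL.
ngrok http 8080
```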
## Setting up Jenkins for github webhook

### Pre-requisites

The following plugins should be installed in Jenkins:

- Git Plugin
- GitHub API Plugin
- GitHub Plugin

### Jenkins Configuration in Settings

- Go to `Manage Jenkins > Configure System` and scroll down to the `GitHub` section.
- Click on `Add GitHub Server`.
  - Add a name for the GitHub server.
  - Leave `API URL` as is.
  - In Credentials, click on `Add`. `Jenkins` will appear in the drop-down; click on it.
  - Select `Kind` as `Secret text`.
  - Scroll down to `Secret` and enter the GitHub personal access token (PAT), which can be acquired from the GitHub account.
  - Other options can be left unattended.
  - Click on `Add`. The credentials should now be added.
  - In the `Credentials` drop-down, select the added `Secret text`.
  - Check `Manage hooks`.
  - The connection can now be checked by clicking `Test Connection`.
- Click `Save`.

### Jenkins job setup for Github Webhook

Create a new Jenkins freestyle job and apply the following settings in its configuration, along with any other settings you need.

- Check `GitHub project` in the `General` section and provide the GitHub repository URL.
- In the `Source Code Management` section, select `Git`.
  - Provide the repository URL.
  - In `Credentials` (if the credentials are not created already), click on `Add` and select `Jenkins` from the drop-down.
  - Select `Kind` as `Username with password`.
  - In `Username`, enter the GitHub username.
  - In `Password`, enter the GitHub personal access token (PAT), which can be acquired from the GitHub account.
  - Other fields can be left unattended.
  - Click on `Add`.
  - From the `Credentials` drop-down, select the added credentials.
  - In the `Branches to build` section, in the `Branch Specifier` field, enter the name of the branch of the GitHub repository which needs to be built.
- In the `Build Triggers` section, check `GitHub hook trigger for GITScm polling`.
  _(The following step reports the commit status according to the Jenkins job status, meaning that if the Jenkins job fails, the commit status is also `Failure`.)_
- Scroll down to the bottom and `Add post-build action`. From the drop-down, select `Set GitHub commit status`.
  - Leave the other settings as they are and click on `Advanced`.
  - Check `Handle errors`.
  - Under the drop-down `Result on failure`, select `FAILURE`.
- Click on `Apply` and `Save`.
- At this point, Jenkins is set up for GitHub webhooks.

## Setting up github repository webhook

For the sake of this documentation, I have created a simple repository called `jenkins_hello_world_integrated`.

- Go to the GitHub repository's settings.

![Screenshot from 2022-09-26 16-39-29](<../doc_images/192272619-657a40c5-ef9e-4a48-a2b0-17217ebcac70.png>)

- In the `Webhooks` section, click on `Add webhook`.

![Screenshot from 2022-09-26 16-39-29](<../doc_images/Selection_001.png>)

- In the webhook settings:
  - Set `Payload URL` to the URL of Jenkins with `/github-webhook/` appended at the end.
  - Select content type `application/json`.
  - It is recommended to add a `Secret`, which can be generated from a Jenkins `API Token` in the account configuration.
  - It is recommended to `Enable SSL verification`.
  - Select the events which should trigger the build in Jenkins.
  - Check `Active`.
  - Click on `Add Webhook`.

![Screenshot from 2022-09-26 16-39-29](<../doc_images/Selection_002.png>)

After this point, each time a change is committed to the GitHub repository, the Jenkins job will start a build and will also indicate on the repository whether the build passed or failed (as can be seen in the screenshot below).

![Screenshot from 2022-09-26 16-39-29](<../doc_images/Screenshot from 2022-09-26 16-39-29.png>)
\ No newline at end of file