docs: add documentation with workflow
paoloyx committed Jun 19, 2024
1 parent 6476b81 commit 0981ec2
Showing 25 changed files with 39,632 additions and 0 deletions.
61 changes: 61 additions & 0 deletions .github/workflows/docs-workflow.yml
@@ -0,0 +1,61 @@
name: Test deployment

on:
push:
tags:
- 'v*.*.*'
branches:
- "main"
paths:
- "docs/**"
- ".github/workflows/docs-**"
pull_request:
branches:
- "*"
paths:
- "docs/**"
- ".github/workflows/docs-**"

jobs:
build:
name: Build Docusaurus
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 18
cache: yarn

- name: Install dependencies
run: yarn install --frozen-lockfile
- name: Build website
run: yarn build

- name: Upload Build Artifact
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
uses: actions/upload-pages-artifact@v3
with:
path: build

deploy:
name: Deploy to GitHub Pages
needs: build

# Grant GITHUB_TOKEN the permissions required to make a Pages deployment
permissions:
pages: write # to deploy to Pages
id-token: write # to verify the deployment originates from an appropriate source

# Deploy to the github-pages environment
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}

runs-on: ubuntu-22.04
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
20 changes: 20 additions & 0 deletions docs/.gitignore
@@ -0,0 +1,20 @@
# Dependencies
/node_modules

# Production
/build

# Generated files
.docusaurus
.cache-loader

# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
47 changes: 47 additions & 0 deletions docs/README.md
@@ -0,0 +1,47 @@
# Website

This website is built using [Docusaurus](https://docusaurus.io/), a modern static website generator.

### Installation

```
yarn
```

or you can use npm:

```
npm install
```

### Local Development

```
yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

### Build

```
yarn build
```

This command generates static content into the `build` directory, which can be served by any static content hosting service.

### Deployment

Using SSH:

```
USE_SSH=true yarn deploy
```

Not using SSH:

```
GIT_USER=<Your GitHub username> yarn deploy
```

If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push it to the `gh-pages` branch.
3 changes: 3 additions & 0 deletions docs/babel.config.js
@@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};
8 changes: 8 additions & 0 deletions docs/docs/community/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Community",
"position": 4,
"link": {
"type": "generated-index",
"description": "Join the Community."
}
}
Empty file.
8 changes: 8 additions & 0 deletions docs/docs/contributing/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Contributing",
"position": 3,
"link": {
"type": "generated-index",
"description": "How to contribute."
}
}
Empty file.
8 changes: 8 additions & 0 deletions docs/docs/getting-started/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Getting Started",
"position": 2,
"link": {
"type": "generated-index",
"description": "Getting started with the AI Monitoring Platform."
}
}
28 changes: 28 additions & 0 deletions docs/docs/index.md
@@ -0,0 +1,28 @@
---
sidebar_position: 1
---

# Introduction
Let's discover the **Radicalbit AI Monitoring Platform** in less than 5 minutes.

## Welcome!
This platform provides a comprehensive solution for monitoring and observing your Artificial Intelligence (AI) models in production.

### Why Monitor AI Models?
While models often perform well during development and validation, their effectiveness can degrade over time in production due to various factors like data shifts or concept drift. The Radicalbit AI Monitor platform helps you proactively identify and address potential performance issues.

### Key Functionalities
The platform provides comprehensive monitoring capabilities to ensure optimal performance of your AI models in production. It analyzes both your reference dataset (used for pre-production validation) and the current datasets in use, allowing you to monitor:
* **Data Quality:** evaluate the quality of your data, as high-quality data is crucial for maintaining optimal model performance. The platform analyzes both numerical and categorical features in your dataset to provide insights into
* *data distribution*
* *missing values*
* *target variable distribution* (for supervised learning).

* **Model Quality Monitoring:** the platform provides a comprehensive suite of metrics, currently designed for binary classification models. These metrics include:
* *Accuracy, Precision, Recall, and F1:* These metrics provide different perspectives on how well your model is classifying positive and negative cases.
* *False/True Negative/Positive Rates and Confusion Matrix:* These offer a detailed breakdown of your model's classification performance, including the number of correctly and incorrectly classified instances.
* *AUC-ROC and PR AUC:* These are performance curves that help visualize your model's ability to discriminate between positive and negative classes.
* **Model Drift Detection:** analyze model drift, which occurs when the underlying data distribution changes over time and can affect model accuracy.
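As a reference for the classification metrics above (these are the standard confusion-matrix definitions, not specific to this platform), the counts of true/false positives and negatives combine as:

$$
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\qquad
\mathrm{Precision} = \frac{TP}{TP + FP}
$$

$$
\mathrm{Recall} = \frac{TP}{TP + FN}
\qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$

where $TP$, $TN$, $FP$ and $FN$ are the true positive, true negative, false positive and false negative counts from the confusion matrix.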

### Current Scope and Future Plans
This initial version focuses on binary classification models. Support for additional model types is planned for future releases.
8 changes: 8 additions & 0 deletions docs/docs/quickstart-tour/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Quickstart Tour",
"position": 2,
"link": {
"type": "generated-index",
"description": "Quickstart the AI Monitoring Platform."
}
}
Empty file.
8 changes: 8 additions & 0 deletions docs/docs/user-guide/_category_.json
@@ -0,0 +1,8 @@
{
"label": "User Guide",
"position": 1,
"link": {
"type": "generated-index",
"description": "Learn how to install and use the AI Monitoring Platform."
}
}
85 changes: 85 additions & 0 deletions docs/docs/user-guide/installation.md
@@ -0,0 +1,85 @@
---
sidebar_position: 1
---

# Installation
The platform is composed of the following modules:
* **UI:** the front-end application
* **API:** the back-end application
* **Processing:** the Spark jobs
* **SDK:** the Python SDK

## Development & Testing with Docker Compose
You can easily run the platform locally using Docker and the provided Docker Compose file.

**Important:** This setup is intended for development and testing only, not for production environments.

### Prerequisites
To run the platform successfully, you'll need to have both Docker and Docker Compose installed on your machine.

### Procedure
Once you've installed Docker and Docker Compose, clone the repository to your local machine:

```bash
git clone [email protected]:radicalbit/radicalbit-ai-monitoring.git
```

This repository provides a Docker Compose file to set up the platform locally alongside a Rancher Kubernetes cluster. This allows you to deploy Spark jobs within the cluster.

For streamlined development and testing, run the following command to start the platform locally without the graphical user interface:

```bash
docker compose up
```

If you want to access the platform's user interface (UI):

```bash
docker compose --profile ui up --force-recreate
```

After all containers are up and running, you can access the platform at [http://localhost:5173](http://localhost:5173) to start using it.

*Note: the `--force-recreate` flag forces Docker Compose to recreate all containers, even if they are already running. This is useful when you change configuration or images and want a fresh start. [More info](https://docs.docker.com/reference/cli/docker/compose/up/)*
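If you prefer to script the wait rather than watch the logs, a minimal sketch (assuming the default port `5173` and that `curl` is installed) is:

```shell
#!/bin/sh
# Poll the UI until it responds; -f makes curl fail on HTTP errors,
# -s/-S keep it quiet except for real errors.
until curl -fsS http://localhost:5173 > /dev/null; do
  echo "waiting for the UI..."
  sleep 2
done
echo "UI is reachable at http://localhost:5173"
```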

#### Start from a clean workspace
To ensure a clean environment for running the platform, it's recommended to remove any existing named volumes and container images related to previous runs. You can find detailed information about this process in the [Docker Compose documentation](https://docs.docker.com/reference/cli/docker/compose/down/).

```bash
docker compose down -v --rmi all
```

The `-v` flag is optional but recommended here: it removes the named volumes associated with the platform, such as those used to store data for services like Postgres or Kubernetes. The `--rmi all` flag also removes all images defined in the Docker Compose file. By default, `docker compose down` only removes containers and networks, so these flags ensure a clean state for the next start.

If you want to delete just the volume data, run:

```bash
docker compose down -v
```

#### Accessing the Kubernetes Cluster
The platform creates a Kubernetes cluster for managing deployments. You can connect and interact with this cluster from your local machine using tools like Lens or `kubectl`.

##### Using the kubeconfig File
A file named `kubeconfig.yaml` is automatically generated within the directory `./docker/k3s_data/kubeconfig/` when the platform starts. This file contains sensitive information used to authenticate with the Kubernetes cluster.

##### Security Considerations (Important!)
*Do not modify the original `kubeconfig.yaml` file.* Modifying the server address within the original file can potentially expose the cluster to unauthorized access from outside your local machine.

*Instead, create a copy of the `kubeconfig.yaml` file and modify the copy for local use.* This ensures the original file with the default server address remains secure.

##### Here's how to connect to the cluster:
1. Copy the `kubeconfig.yaml` file to a desired location on your local machine.
1. Edit the copied file and replace the server address `https://k3s:6443` with `https://127.0.0.1:6443`. This points the kubeconfig file to the local Kubernetes cluster running on your machine.
1. Use the modified `kubeconfig.yaml` file with tools like Lens or `kubectl` to interact with the cluster.
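The three steps above can be sketched as shell commands; the destination path `~/kubeconfig-local.yaml` is just an example:

```shell
#!/bin/sh
# 1. Copy the generated kubeconfig out of the repository tree.
cp ./docker/k3s_data/kubeconfig/kubeconfig.yaml ~/kubeconfig-local.yaml

# 2. Point the copy at the locally exposed API server; the original stays untouched.
sed -i 's|https://k3s:6443|https://127.0.0.1:6443|' ~/kubeconfig-local.yaml

# 3. Use the modified copy explicitly with kubectl.
KUBECONFIG=~/kubeconfig-local.yaml kubectl get nodes
```

On macOS, use `sed -i ''` instead of `sed -i`, since BSD `sed` requires an explicit backup suffix argument.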

#### Using Real AWS Credentials
To use real AWS services instead of MinIO, modify the environment variables of the `api` container: provide real values for `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION` and `S3_BUCKET_NAME`, and remove `S3_ENDPOINT_URL`.
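As a hypothetical sketch, one way to supply the values is an env file that Docker Compose reads for variable substitution; whether the compose file actually forwards these variables to the `api` service is an assumption worth verifying before relying on it:

```shell
#!/bin/sh
# Hypothetical sketch: write real credentials to a .env file next to the
# compose file. Variable names come from the docs above; the values shown
# are placeholders, not working credentials.
cat > .env <<'EOF'
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=your-region
S3_BUCKET_NAME=your-bucket-name
EOF
# S3_ENDPOINT_URL must not be set, so the default AWS endpoint is used.
```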

#### Teardown
To clean up the environment, or if something goes wrong and a fresh start is needed:

* Stop the Docker Compose stack
* Remove all containers
* Remove the volumes
* Delete the `./docker/k3s_data/kubeconfig` folder
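Taken together, the teardown steps above can be sketched as two commands, run from the repository root:

```shell
#!/bin/sh
# Stop the stack and remove containers, named volumes and images in one go
# (the ui profile matches the one used to start the full platform).
docker compose --profile ui down -v --rmi all

# Drop the generated kubeconfig so the next start regenerates it.
rm -rf ./docker/k3s_data/kubeconfig
```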
