Merge pull request #60 from amosproj/dev
Merge result of sprint 4 into main
Omega65536 authored Nov 13, 2024
2 parents 67ffba9 + f5174f9 commit 6b22741
Showing 38 changed files with 2,634 additions and 431 deletions.
6 changes: 6 additions & 0 deletions .env.docker.example
@@ -0,0 +1,6 @@
# Copy and rename this file to .env.docker
DATABASE_HOST="host.docker.internal"
DATABASE_PORT=5433
DATABASE_USER="postgres"
DATABASE_PASSWORD="postgres"
DATABASE_DATABASE="postgres"
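
These variables configure the backend's Postgres connection in the Docker setup. As a minimal sketch of how such values can be assembled into a SQLAlchemy connection URL (the helper name and the `os.getenv` fallbacks are illustrative, not part of this commit):

```python
import os

def database_url() -> str:
    # Hypothetical helper: build a SQLAlchemy URL from the DATABASE_*
    # variables above; the fallbacks mirror the example file.
    host = os.getenv("DATABASE_HOST", "host.docker.internal")
    port = os.getenv("DATABASE_PORT", "5433")
    user = os.getenv("DATABASE_USER", "postgres")
    password = os.getenv("DATABASE_PASSWORD", "postgres")
    database = os.getenv("DATABASE_DATABASE", "postgres")
    return f"postgresql+pg8000://{user}:{password}@{host}:{port}/{database}"
```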
4 changes: 4 additions & 0 deletions .gitignore
@@ -18,6 +18,7 @@ pids
# Python
*.pyc
__pycache__/
.venv/

# vscode
.vscode/
@@ -37,8 +38,11 @@ bin/

# Local env files
apps/*/.env
.env.docker

.nx/cache
.nx/workspace-data
**/vite.config.{js,ts,mjs,mts,cjs,cts}.timestamp*
.angular

.cache/*
9 changes: 9 additions & 0 deletions Dockerfile
@@ -0,0 +1,9 @@
# Container for the shared node module
FROM node:18-alpine

WORKDIR /app

COPY . .

RUN npm i -g [email protected]
RUN npm install
41 changes: 33 additions & 8 deletions Documentation/README.md
@@ -1,22 +1,47 @@
Build, user, and technical documentation
# Build, user, and technical documentation

Software architecture description

basic setup:
## Basic setup:

```bash
npm ci
cd ./apps/analyzer/metadata_analyzer ; poetry install
```

- `npm ci`: dependency install
- copy `.env.example` file and rename to `.env` (adjust database properties according to database setup if necessary)

- copy `.env.example` file in backend and rename to `.env` (adjust database properties according to database setup if necessary)
- copy `.env.example` file in analyzer and rename to `.env` (adjust port properties according to backend setup if necessary)
- To insert dummy data into table backupData you can use the SQL script `dummyData.sql` in `apps/backend/src/app/utils`

running the code locally:
### Running the code locally:

- `npm run be`: run backend individually
- `npm run fe`: run frontend individually
- `npm run both`: run backend and frontend
- `npm run py`: run Python app
- `npm run all`: run backend, frontend, and Python module

generating database migrations:
### Generating database migrations:

- the entity files need to be annotated with `@Entity(<table-name>)`
- append the entity file to the `entities` array in `db-config.service.ts`
- run the following command to generate a migration file:
- `nx run metadata-analyzer-backend:migrations:generate --name <migration-name>`
- append the generated file to the `migrations` array in `db-config.service.ts`
- `nx run metadata-analyzer-backend:migrations:generate --name <migration-name>`
- append the generated file to the `migrations` array in `db-config.service.ts`



## Installing new dependencies

### In the Python app

When working with Python dependencies, first `cd` into the `analyzer` folder.

#### Install a new dependency

`poetry add <dependency-name>`

#### Remove a dependency

`poetry remove <dependency-name>`
33 changes: 31 additions & 2 deletions README.md
@@ -1,2 +1,31 @@
# Backup Metadata Analyzer (AMOS WS 2024/25)
Something something something
# AMOS Backup Metadata Analyzer


## Prerequisites
Make sure the following are installed on your machine:
- **Node 20**
- **Docker**
- **Docker Compose**

## Setup Instructions

1. **Clone the repository**:

   ```bash
   git clone https://github.com/amosproj/amos2024ws02-backup-metadata-analyzer.git
   ```

2. **Change directory**:

   ```bash
   cd ./amos2024ws02-backup-metadata-analyzer/
   ```

3. **Set up .env files**:

   ```bash
   cp .env.docker.example .env.docker
   cp apps/backend/.env.example apps/backend/.env
   ```

4. **Docker compose up**:

   ```bash
   docker-compose --env-file .env.docker up --build
   ```

5. **Docker compose down**:

   ```bash
   docker-compose --env-file .env.docker down
   ```
2 changes: 2 additions & 0 deletions apps/analyzer/.env.example
@@ -0,0 +1,2 @@
FLASK_RUN_HOST="localhost"
FLASK_RUN_PORT="8000"
11 changes: 11 additions & 0 deletions apps/analyzer/.flake8
@@ -0,0 +1,11 @@
[flake8]
exclude =
    .git,
    __pycache__,
    build,
    dist,
    .tox,
    venv,
    .venv,
    .pytest_cache
max-line-length = 120
1 change: 1 addition & 0 deletions apps/analyzer/.python-version
@@ -0,0 +1 @@
3.11.2
9 changes: 9 additions & 0 deletions apps/analyzer/README.md
@@ -0,0 +1,9 @@
# metadata-analyzer

Rename `.env.example` to `.env` and adjust values as necessary.

If Flask imports are not recognized, it may be a VS Code problem:
Ctrl + Shift + P
-> Select Interpreter
-> Enter Interpreter Path
(select the Python interpreter that lies in the `.venv` directory)
4 changes: 4 additions & 0 deletions apps/analyzer/main.py
@@ -0,0 +1,4 @@
from metadata_analyzer.main import main

if __name__ == "__main__":
    main()
1 change: 1 addition & 0 deletions apps/analyzer/metadata_analyzer/__init__.py
@@ -0,0 +1 @@
"""Automatically generated by Nx."""
20 changes: 20 additions & 0 deletions apps/analyzer/metadata_analyzer/database.py
@@ -0,0 +1,20 @@
import pg8000.dbapi  # DBAPI driver used by the postgresql+pg8000 dialect
from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session

from metadata_analyzer.models import BackupData


class Database:
    def __init__(self):
        # Connection parameters are hardcoded for now; the Docker setup uses
        # the DATABASE_* values from .env.docker instead (see above).
        self.engine = create_engine(
            "postgresql+pg8000://postgres:postgres@localhost:5432/postgres"
        )

    def get_data(self):
        # Return an iterable of BackupData rows; the session stays open
        # until the result is fully consumed.
        session = Session(self.engine)
        stmt = select(BackupData)
        result = session.scalars(stmt)
        return result
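A minimal usage sketch for the class above (it assumes a reachable Postgres instance with a populated `BackupData` table; the loop is illustrative, not part of the commit):

```python
from metadata_analyzer.database import Database

db = Database()
for backup in db.get_data():
    # Each row is a BackupData ORM object (see models.py below).
    print(backup.id, backup.sizeMB, backup.creationDate)
```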
44 changes: 44 additions & 0 deletions apps/analyzer/metadata_analyzer/main.py
@@ -0,0 +1,44 @@
import os

from dotenv import load_dotenv
from flask import Flask, jsonify, request

from metadata_analyzer.database import Database
from metadata_analyzer.simple_analyzer import SimpleAnalyzer

app = Flask(__name__)


@app.route("/")
def hello_world():
    return "Hello, world!"


@app.route("/echo", methods=["POST"])
def echo():
    # Reverse the text contained in the request body and echo it back.
    data = request.get_json()
    text = data["body"]["text"]
    return jsonify({"output": text[::-1]})


@app.route("/analyze", methods=["GET"])
def analyze():
    data = list(database.get_data())
    result = simple_analyzer.analyze(data)
    return jsonify(result)


def main():
    # database and simple_analyzer are module-level globals used by the
    # routes; they are initialized here before the server starts.
    global database
    global simple_analyzer
    load_dotenv()  # pick up FLASK_RUN_HOST / FLASK_RUN_PORT from .env
    database = Database()
    simple_analyzer = SimpleAnalyzer()

    host = os.getenv("FLASK_RUN_HOST", "localhost")
    port = int(os.getenv("FLASK_RUN_PORT") or 5000)
    app.run(host=host, port=port, debug=False)
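Once the app is running, the endpoints can be exercised from the standard library alone; a sketch assuming the service listens on `localhost:8000` as configured in `apps/analyzer/.env.example`:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumes FLASK_RUN_PORT=8000 from .env

# POST /echo reverses the text it receives.
req = urllib.request.Request(
    BASE + "/echo",
    data=json.dumps({"body": {"text": "hello"}}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # {"output": "olleh"}

# GET /analyze returns summary statistics over the BackupData table.
with urllib.request.urlopen(BASE + "/analyze") as resp:
    print(json.load(resp))
```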
19 changes: 19 additions & 0 deletions apps/analyzer/metadata_analyzer/models.py
@@ -0,0 +1,19 @@
from datetime import datetime

# declarative_base now lives in sqlalchemy.orm; the sqlalchemy.ext.declarative
# import is deprecated.
from sqlalchemy.orm import Mapped, declarative_base, mapped_column

Base = declarative_base()


class BackupData(Base):
    __tablename__ = "BackupData"

    id: Mapped[str] = mapped_column(primary_key=True)
    sizeMB: Mapped[int]
    creationDate: Mapped[datetime]
    bio: Mapped[str]

    def __repr__(self):
        return (
            f"BackupData(id={self.id}, sizeMB={self.sizeMB}, "
            f"creationDate={self.creationDate}, bio={self.bio!r})"
        )

    def __str__(self):
        return repr(self)
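Because `BackupData` is a declarative model, the schema can be created straight from the metadata. A self-contained sketch using an in-memory SQLite database for illustration (the real app targets Postgres):

```python
from datetime import datetime

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from metadata_analyzer.models import Base, BackupData

# In-memory SQLite keeps the example self-contained.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(
        BackupData(id="b1", sizeMB=42, creationDate=datetime(2024, 11, 1), bio="nightly")
    )
    session.commit()
```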
15 changes: 15 additions & 0 deletions apps/analyzer/metadata_analyzer/simple_analyzer.py
@@ -0,0 +1,15 @@
class SimpleAnalyzer:
    def __init__(self):
        pass

    def analyze(self, data):
        # Summarize a list of BackupData rows. min()/max() raise ValueError
        # on an empty list, so data is assumed to be non-empty.
        count = len(data)
        dates = [backup_data.creationDate for backup_data in data]
        sizes = [backup_data.sizeMB for backup_data in data]
        return {
            "count": count,
            "firstBackup": min(dates),
            "lastBackup": max(dates),
            "minSize": min(sizes),
            "maxSize": max(sizes),
        }
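A quick usage sketch for the analyzer (the sample rows are made up for illustration):

```python
from datetime import datetime

from metadata_analyzer.models import BackupData
from metadata_analyzer.simple_analyzer import SimpleAnalyzer

rows = [
    BackupData(id="b1", sizeMB=10, creationDate=datetime(2024, 11, 1), bio="nightly"),
    BackupData(id="b2", sizeMB=25, creationDate=datetime(2024, 11, 2), bio="weekly"),
]
print(SimpleAnalyzer().analyze(rows))
# -> {'count': 2, 'firstBackup': ..., 'lastBackup': ..., 'minSize': 10, 'maxSize': 25}
```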