Repo management for NethSecurity installations.
Something went wrong with some packages? Go straight to the faulty packages section. 🚀
The application is structured as follows:
- `nginx`: Nginx frontend that handles the HTTP requests and proxies them to the PHP application.
- `php`: Main application that handles the logic and dispatches the jobs.
- `scheduler`: Scheduler that handles cron jobs for the PHP application.
- `worker`: Worker that handles the jobs dispatched by the PHP application.
The storage can be configured to use different disks; however, a local shared storage is mandatory due to the SQLite database that the `php`, `scheduler` and `worker` containers share.
Base file request:
```mermaid
sequenceDiagram
    client ->> nginx/php: Request file
    database ->> nginx/php: Get repository
    opt repository not found
        nginx/php ->> client: 404 Not Found
    end
    database ->> nginx/php: Check cache
    alt cache miss
        nginx/php ->> my: Ask for authorization
        my ->> nginx/php: 
        alt authorized
            nginx/php ->> database: Save cache
        else not authorized
            nginx/php ->> client: 403 Forbidden
        end
    end
    nginx/php ->> filesystem: Check if file exists
    alt file does not exist
        nginx/php ->> client: 404 Not Found
    else filesystem supports temporary links
        nginx/php ->> client: Redirect to filesystem link
        client ->> filesystem: Request file
    else manual download
        filesystem ->> nginx/php: 
        nginx/php ->> client: Download file
    end
```
Job dispatch sequence:
```mermaid
sequenceDiagram
    participant php
    participant scheduler
    participant database
    scheduler ->> database: Dispatch job
    php ->> database: Dispatch job
    database ->> worker: Loads job
    worker ->> worker: Run job
    worker ->> database: Clear job
```
Podman and Podman Compose can be used as an alternative to Docker and Docker Compose; however, no deep testing has been done with these tools.
Copy the `.env.example` file to `.env` and edit the entries as needed.
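For example:

```bash
cp .env.example .env
```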
Most of the environment variables are self-explanatory, and there's no need to change their defaults unless explicitly told to do so. However, there are a few that you might want to change:
- `APP_TIMEZONE`: The timezone to use for the app. Even if the container inherits the host's timezone, it's recommended to set this value to avoid any issues.
- `FILESYSTEM_DISK`: Disk to use for I/O ops (cloning repositories and snapshots), defaults to `local` (the `storage/app` directory). If you want to use a different disk, you need to set the corresponding values for the disk you want to use. For example, you can directly connect to a DO Space by filling in the `AWS_*` values with the corresponding values from the DO Space:

  ```dotenv
  FILESYSTEM_DISK=s3
  AWS_ACCESS_KEY_ID=your_access_key
  AWS_SECRET_ACCESS_KEY=your_secret_key
  AWS_DEFAULT_REGION=<region of the bucket>
  AWS_BUCKET=<name of the bucket>
  AWS_ENDPOINT=https://<region of the bucket>.digitaloceanspaces.com
  ```

  Additional docs can be found in the Laravel documentation.
- `UID`: The user ID for the development environment. Set this before running any other command; if this value changes, you will need to run the command under Build images again.
- `GID`: The group ID for the development environment. Set this before running any other command; if this value changes, you will need to run the command under Build images again.
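For example, you can match them to your host user so that files created in bind mounts stay owned by you (a minimal sketch; `id` prints the current user's IDs):

```bash
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
```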
To build the development images, just run the following command:

```bash
docker compose build
```
Now only a few steps remain, and they need to be run just once:

```bash
docker compose run --rm php php artisan key:generate
```
You're almost there! Run the following command to start up all the needed services:
```bash
docker compose up
```
You can find the app running at http://localhost:8080.
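As a quick smoke test (not part of the official setup), you can check that the app responds:

```bash
curl -I http://localhost:8080
```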
To run any commands inside the development environment, you need to get to the shell using:
```bash
docker compose exec app bash
```
The software is tested using PestPHP. To run the tests, you can use the provided command inside the development environment:

```bash
php artisan test
```
GitHub Actions takes care of deploying the images to the registry; however, if you want to build the production images yourself, follow the instructions below.

```bash
docker buildx bake -f docker-bake.hcl production
```
You will find the images tagged as `ghcr.io/nethserver/parceler-*:latest`.
The production environment is composed of the following services:

- `nginx`: nginx frontend that handles HTTP requests.
- `php`: php-fpm service that runs a Parceler instance.
- `scheduler`: scheduler that dispatches jobs for the worker.
- `worker`: worker that handles all jobs sent to the queues.
It's advised to use a reverse proxy to handle the SSL termination and load balancing.
The Parceler service is configured through an environment file; you can find an example in `.env.prod.example`.
While some of the values are self-explanatory, there are a few that you need to set manually:

- `APP_KEY`: The application key. You can generate one from the development environment using `php artisan key:generate --show`.
- `APP_URL`: The full URL the application is reached at. While most of the functionality will work with a wrong value, URL generation is based on this value.
- `FILESYSTEM_DISK`: Disk to use during production; works the same as in development, more info in the development setup.
- `REPOSITORY_MILESTONE_TOKEN`: Token used to trigger the milestone creation remotely. You can set this to a random value; it's used to avoid unwanted requests.
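Since the token only needs to be hard to guess, one way to generate it is with `openssl` (just an example; any random string works):

```bash
openssl rand -hex 32
```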
Now that Parceler itself is out of the way, there's additional configuration needed for the containers to run properly. Here's the container-specific configuration:
`nginx` needs variables to wait for the `php` container to be ready before starting; you can set the following:

- `FPM_HOST`: The host where the php-fpm service is running.
- `FPM_PORT`: The port where the php-fpm service is running.
`worker` and `scheduler` need variables to wait for the `php` container to be ready (and hence to have prepped the environment):

- `PHP_HOST`: The host where the php-fpm service is running.
- `PHP_PORT`: The port where the php-fpm service is running.
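As an illustration, assuming the containers can reach the `php` service by name and that php-fpm listens on its default port 9000 (both assumptions, adjust to your deployment):

```bash
# for the nginx container (hypothetical values)
FPM_HOST=php
FPM_PORT=9000
# for the worker and scheduler containers (hypothetical values)
PHP_HOST=php
PHP_PORT=9000
```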
An SQLite database is stored in the `/var/www/html/storage` directory (for the `php`, `worker` and `scheduler` containers) when running the service. You need to make sure that this directory is persistent across restarts, otherwise you will lose the references to endpoints and snapshots (or the files themselves, if you're using the local disk).
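A minimal sketch with plain Docker (the host path and image name are hypothetical; the compose file under `deploy` mentioned below handles this with a volume):

```bash
# mount the same host directory on the php, worker and scheduler containers
docker run -d \
  -v /srv/parceler/storage:/var/www/html/storage \
  ghcr.io/nethserver/parceler-php:latest
```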
An example of a deployment can be found under the `deploy` directory; you can use the `deploy/docker-compose.yml` file to deploy the full stack and replicate the same structure on your server.
If you're using `rclone` to sync the repositories, you can add the configuration file to the container by adding additional environment variables; details can be found in the documentation.
To avoid any issues with the files served by the service, you can put the service in maintenance mode while you operate on the files; this will prevent any new requests from being processed and will return a 503 status code. Be aware that even crons and queues will stop working; to force queues, you can resort to this command. To enable maintenance mode, use the following command:

```bash
php artisan down
```

To disable maintenance mode, use the following command:

```bash
php artisan up
```
Additional configuration can be provided to the application, such as automatic redirects or a token that allows access. Details can be found in the documentation.
To add a repository, you need to enter the `php` container and run the following command:

```bash
php artisan repository:create
```
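For example, with a Compose deployment you can run it in one go (assuming the service is named `php`):

```bash
docker compose exec php php artisan repository:create
```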
The command will guide you through the process of adding a repository; here are the fields that will be asked:

- `name`: name of the repository; it will be used to identify the repository under the path `repositories/{community|enterprise}/{repository_name}`.
- `command`: the command the worker will run to sync the repository; it can be anything available in the container. Save the content of the repository under the path `source/{repository_name}` on the disk you're using (e.g. if you're using the local disk, save the content of the repository under `storage/app/source/{repository_name}`). The `rclone` binary is available in the container; to add a configuration file, follow the Additional Configuration section. An example command is sketched below.
- `source_folder`: if the repository files are stored in a subfolder, you can specify it here; otherwise leave it empty.
- `delay`: how many days the upstream files are delayed.
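As an illustration, a `command` for a repository synced with `rclone` might look like this (the remote name, repository name, and absolute path are made up; adjust them to your disk layout):

```bash
rclone sync upstream:packages/myrepo /var/www/html/storage/app/source/myrepo
```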
Once the repository is added, a sync job will be created and the worker will start syncing the repository.
To list all the repositories, you can use the `php artisan repository:list` command.

```bash
php artisan repository:list
```
Repository syncs are dispatched by the scheduler daily. If you want to manually sync a repository, you can use the `php artisan repository:sync {repository_name}` command.

```bash
php artisan repository:sync {repository_name}
```
Freezing a repository will prevent the system from using the normal deferred release based on snapshots. This won't halt syncs, so you can freeze the repository to avoid a faulty package, and then unfreeze it to skip the faulty packages. To freeze a repository, you can use the `php artisan repository:freeze {repository_name}` command.

```bash
php artisan repository:freeze {repository_name}
```
Advanced usage can be achieved by providing a custom path, like so:

```bash
php artisan repository:freeze {repository_name} {path}
```
Please remember:
- The path is relative to the storage disk.
- The path won't be validated, so make sure it's correct.
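For instance, a hypothetical invocation (both the repository name and the path are made up):

```bash
php artisan repository:freeze myrepo snapshots/myrepo/2024-06-01
```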
To unfreeze a repository, you can use the `php artisan repository:unfreeze {repository_name}` command.

```bash
php artisan repository:unfreeze {repository_name}
```
The system will automatically use the oldest snapshot available inside the `delay` timer given for each repo. However, if you need to push forward the release of some packages, you can just delete the older snapshots through the filesystem in use. The system will automatically adapt to the new oldest snapshot available.
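For example, on the local disk this boils down to removing a snapshot directory (the folder name here is hypothetical):

```bash
rm -rf storage/app/snapshots/myrepo/2024-06-01
```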
Since a repository may live on a remote disk, the following command lists the files in the directory that is currently being served. The output is `grep` friendly.

```bash
php artisan repository:files {repository_name}
```
You can specify the path to list the files in a specific directory.

```bash
php artisan repository:files {repository_name} {path}
```
This is useful when looking for a specific package that is causing issues, like this:

```bash
php artisan repository:files {repository_name} . | grep {package_name}
```
Using `.` as the path, everything on the storage will be listed, allowing you to find, for each snapshot, the package that has been released. Remember that the `source/{repository_name}` folder is always the latest sync, while the `snapshots/{repository_name}` folders are the daily syncs.
To list all the snapshots of a repository, you can use the `php artisan repository:snapshots {repository_name}` command; you'll be shown the folders that are snapshotted and which one is currently being served.

```bash
php artisan repository:snapshots {repository_name}
```
A Milestone Release is a process that wipes all the snapshots of a repository and then creates one with the latest sync. This is useful when you want to release a new version of a repository, or when you want to force the release of a specific set of packages.
A milestone release can be triggered in either of the following ways:

- CLI

  ```bash
  php artisan repository:milestone {repository_name}
  ```

- cURL

  Additional authentication must be provided; the token is set in the `.env` file under `REPOSITORY_MILESTONE_TOKEN`.

  ```bash
  curl -X POST -H "Accept: application/json" -H "Authorization: Bearer <token>" <url>/repository/<repository_name>/milestone
  ```
The following list is a guide on how to handle the distribution of faulty packages. To find which of the snapshots has a faulty package, go to the list repository files section.

- Community repository has a faulty package: try to fix the issue before the daily sync happens, otherwise follow through with the next steps.
- Faulty package is in snapshot `X`: you can safely delete the faulty snapshot; Parceler will use the oldest snapshot available inside the `delay` timer given for each repo.
- Faulty package is in snapshot `X`, but the community has a fix: you can manually sync the repository to get the latest snapshot (if you don't have it already), then delete all the snapshots with the faulty package; Parceler will behave the same as in the previous step.
- Faulty package is in snapshot `X`, with no fix available: you can freeze the repository to avoid the faulty package; then, when a fix is eventually released, you can unfreeze the repository to skip the faulty package.
You can check the timestamp of the latest snapshot for each repository; otherwise, you can check the `worker` container logs.
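For example, with a Compose deployment:

```bash
docker compose logs -f worker
```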