Playbook for groups_relay #138

Merged
merged 3 commits into from Dec 21, 2024
Changes from all commits
3 changes: 3 additions & 0 deletions inventories/groups_relay/README.md
@@ -0,0 +1,3 @@
# groups_relay Inventory

Hosts running our NIP-29 relay. Meant to be used with the `groups_relay` role.
Empty file.
19 changes: 19 additions & 0 deletions inventories/groups_relay/inventory.yml
@@ -0,0 +1,19 @@
---
groups_relay:
  hosts:
    communities.nos.social:
      groups_relay_image_tag: stable
    communities2.nos.social:
      groups_relay_image_tag: latest
  vars:
    admin_username: admin
    ansible_user: '{{ admin_username }}'
    homedir: /home/{{ admin_username }}
    cert_email: [email protected]
    domain: '{{ inventory_hostname }}'
    groups_relay_image: ghcr.io/verse-pbc/groups_relay
    groups_relay_health_endpoint: https://{{ inventory_hostname }}/health
prod:
  hosts:
    communities.nos.social:
    communities2.nos.social:
22 changes: 22 additions & 0 deletions new-server-vars.yml
@@ -219,3 +219,25 @@
# inv: relay
# inv_groups:
# - relay

#-----------------------------
# Groups Relay example
# ansible-playbook -i inventories/groups_relay/inventory.yml playbooks/new-do-droplet.yml -e '@new-server-vars.yml'
#-----------------------------
domain: communities2.nos.social
do_droplet_size: s-1vcpu-1gb
do_droplet_image: ubuntu-22-04-x64
do_droplet_region: NYC3
do_droplet_project: Nos
do_droplet_tags:
- prod
gh_user_keys_to_add:
- mplorentz
- dcadenas
- nbenmoody
inv: groups_relay
inv_groups:
- groups_relay
- prod
additional_roles:
- groups_relay
8 changes: 8 additions & 0 deletions playbooks/groups_relay.yml
@@ -0,0 +1,8 @@
- name: Install new server for groups_relay
  hosts: groups_relay:&prod
  vars:
    ansible_user: admin
    domain: "{{ inventory_hostname }}"
  roles:
    - groups_relay

31 changes: 31 additions & 0 deletions roles/groups_relay/README.md
@@ -0,0 +1,31 @@
# groups_relay role

This role sets up a Nostr groups relay server using strfry as the backend relay. It's designed to handle NIP-29 group messages.

## Architecture

The role deploys two main services:
1. `groups_relay` - A specialized relay that handles NIP-29 group messages
2. `strfry` - A lightweight Nostr relay that serves as the backend storage

## Variables

| Variable | Example | Purpose |
|-----------------------------|--------------------------------------------|--------------------------------------------|
| domain | communities.nos.social | The FQDN of the service |
| cert_email | [email protected] | The email used for the Let's Encrypt certificate |
| groups_relay_image | ghcr.io/verse-pbc/groups_relay | The Docker image name |
| groups_relay_image_tag | stable | The Docker image tag |
| groups_relay_health_endpoint| https://{{ inventory_hostname }}/health | Health check endpoint |
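In this PR the group-level defaults live in `inventories/groups_relay/inventory.yml`, with `groups_relay_image_tag` set per host. Standard Ansible precedence means any of the variables above can also be overridden per host via `host_vars`; a minimal sketch (the file path is hypothetical, values taken from the table above):

```yaml
# inventories/groups_relay/host_vars/communities.nos.social.yml (hypothetical location)
groups_relay_image_tag: stable
groups_relay_health_endpoint: https://communities.nos.social/health
```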

## Dependencies

The role depends on:
- common
- digital-ocean
- docker
- traefik

## Network Configuration

The service exposes the groups relay on port 8080 through Traefik, while strfry runs internally and is not exposed to the internet. All traffic is routed through the `proxy` network managed by Traefik.
Empty file.
8 changes: 8 additions & 0 deletions roles/groups_relay/files/settings.yml
@@ -0,0 +1,8 @@
# Default relay configuration
relay:
  # Relay secret key (hex format)
  # This is a test key, replace with your own in settings.local.yml
  relay_secret_key: "6b911fd37cdf5c81d4c0adb1ab7fa822ed253ab0ad9aa18d77257c88b29b718e"
  local_addr: "0.0.0.0:8080"
  auth_url: "ws://localhost:8080"
  relay_url: "ws://localhost:8080"
144 changes: 144 additions & 0 deletions roles/groups_relay/files/strfry.conf
@@ -0,0 +1,144 @@
##
## Default strfry config
##

# Directory that contains the strfry LMDB database (restart required)
db = "/strfry/data"

dbParams {
    # Maximum number of threads/processes that can simultaneously have LMDB transactions open (restart required)
    maxreaders = 256

    # Size of mmap() to use when loading LMDB (default is 10TB, does *not* correspond to disk-space used) (restart required)
    mapsize = 10995116277760

    # Disables read-ahead when accessing the LMDB mapping. Reduces IO activity when DB size is larger than RAM. (restart required)
    noReadAhead = false
}

events {
    # Maximum size of normalised JSON, in bytes
    maxEventSize = 65536

    # Events newer than this will be rejected
    rejectEventsNewerThanSeconds = 900

    # Events older than this will be rejected
    rejectEventsOlderThanSeconds = 94608000

    # Ephemeral events older than this will be rejected
    rejectEphemeralEventsOlderThanSeconds = 60

    # Ephemeral events will be deleted from the DB when older than this
    ephemeralEventsLifetimeSeconds = 300

    # Maximum number of tags allowed
    maxNumTags = 2000

    # Maximum size for tag values, in bytes
    maxTagValSize = 1024
}

relay {
    # Interface to listen on. Use 0.0.0.0 to listen on all interfaces (restart required)
    bind = "0.0.0.0"

    # Port to open for the nostr websocket protocol (restart required)
    port = 7777

    # Set OS-limit on maximum number of open files/sockets (if 0, don't attempt to set) (restart required)
    nofiles = 1000000

    # HTTP header that contains the client's real IP, before reverse proxying (ie x-real-ip) (MUST be all lower-case)
    realIpHeader = ""

    info {
        # NIP-11: Name of this server. Short/descriptive (< 30 characters)
        name = "strfry default"

        # NIP-11: Detailed information about relay, free-form
        description = "This is a strfry instance."

        # NIP-11: Administrative nostr pubkey, for contact purposes
        pubkey = ""

        # NIP-11: Alternative administrative contact (email, website, etc)
        contact = ""

        # NIP-11: URL pointing to an image to be used as an icon for the relay
        icon = ""

        # List of supported lists as JSON array, or empty string to use default. Example: "[1,2]"
        nips = ""
    }

    # Maximum accepted incoming websocket frame size (should be larger than max event) (restart required)
    maxWebsocketPayloadSize = 131072

    # Websocket-level PING message frequency (should be less than any reverse proxy idle timeouts) (restart required)
    autoPingSeconds = 55

    # If TCP keep-alive should be enabled (detect dropped connections to upstream reverse proxy)
    enableTcpKeepalive = false

    # How much uninterrupted CPU time a REQ query should get during its DB scan
    queryTimesliceBudgetMicroseconds = 10000

    # Maximum records that can be returned per filter
    maxFilterLimit = 500

    # Maximum number of subscriptions (concurrent REQs) a connection can have open at any time
    maxSubsPerConnection = 20

    writePolicy {
        # If non-empty, path to an executable script that implements the writePolicy plugin logic
        plugin = ""
    }

    compression {
        # Use permessage-deflate compression if supported by client. Reduces bandwidth, but slight increase in CPU (restart required)
        enabled = true

        # Maintain a sliding window buffer for each connection. Improves compression, but uses more memory (restart required)
        slidingWindow = true
    }

    logging {
        # Dump all incoming messages
        dumpInAll = false

        # Dump all incoming EVENT messages
        dumpInEvents = false

        # Dump all incoming REQ/CLOSE messages
        dumpInReqs = false

        # Log performance metrics for initial REQ database scans
        dbScanPerf = false

        # Log reason for invalid event rejection? Can be disabled to silence excessive logging
        invalidEvents = true
    }

    numThreads {
        # Ingester threads: route incoming requests, validate events/sigs (restart required)
        ingester = 3

        # reqWorker threads: Handle initial DB scan for events (restart required)
        reqWorker = 3

        # reqMonitor threads: Handle filtering of new events (restart required)
        reqMonitor = 3

        # negentropy threads: Handle negentropy protocol messages (restart required)
        negentropy = 2
    }

    negentropy {
        # Support negentropy protocol messages
        enabled = true

        # Maximum records that sync will process before returning an error
        maxSyncEvents = 1000000
    }
}
6 changes: 6 additions & 0 deletions roles/groups_relay/meta/main.yml
@@ -0,0 +1,6 @@
---
dependencies:
  - role: common
  - role: digital-ocean
  - role: docker
  - role: traefik
86 changes: 86 additions & 0 deletions roles/groups_relay/tasks/main.yml
@@ -0,0 +1,86 @@
---
- name: Set groups_relay dir
  ansible.builtin.set_fact:
    groups_relay_dir: "{{ homedir }}/services/groups_relay"

- name: Ensure services/groups_relay exists
  ansible.builtin.file:
    path: "{{ groups_relay_dir }}"
    state: directory
    mode: '0755'

- name: Copy docker-compose template to groups_relay dir
  ansible.builtin.template:
    src: "docker-compose.yml.tpl"
    dest: "{{ groups_relay_dir }}/docker-compose.yml"
    mode: '0644'

- name: UFW - Allow http/https/strfry connections
  become: true
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "80"
    - "443"
    - "7777"

- name: Ensure cert directory exists
  ansible.builtin.file:
    path: "{{ groups_relay_dir }}/certs"
    state: directory
    mode: '0755'

- name: Ensure config directory exists
  ansible.builtin.file:
    path: "{{ groups_relay_dir }}/config"
    state: directory
    mode: '0755'

- name: Copy settings.production.yml to config dir
  ansible.builtin.template:
    src: settings.production.yml.tpl
    dest: "{{ groups_relay_dir }}/config/settings.production.yml"
    mode: '0644'

- name: Copy strfry.conf to config dir
  ansible.builtin.copy:
    src: strfry.conf
    dest: "{{ groups_relay_dir }}/config/strfry.conf"
    mode: '0644'

- name: Copy settings.yml to config dir
  ansible.builtin.copy:
    src: settings.yml
    dest: "{{ groups_relay_dir }}/config/settings.yml"
    mode: '0644'

- name: Ensure Docker is running
  ansible.builtin.service:
    name: docker
    state: started

- name: Start up docker services
  ansible.builtin.shell: "docker compose down && docker compose up -d"
  args:
    chdir: "{{ groups_relay_dir }}"
  register: service_started
  retries: 5
  until: service_started is success

- name: Set up the image updater
  ansible.builtin.include_role:
    name: image-update-service
  vars:
    service_name: groups_relay
    service_image: "{{ groups_relay_image }}"
    service_image_tag: "{{ groups_relay_image_tag }}"
    frequency: 3m
    working_dir: "{{ groups_relay_dir }}"

- name: Set up the health check
  ansible.builtin.include_role:
    name: health-check
  vars:
    health_endpoint: "{{ groups_relay_health_endpoint }}"
29 changes: 29 additions & 0 deletions roles/groups_relay/templates/docker-compose.yml.tpl
@@ -0,0 +1,29 @@
services:
  groups_relay:
    container_name: groups_relay
    image: "{{ groups_relay_image }}:{{ groups_relay_image_tag }}"
    platform: linux/amd64
    volumes:
      - ./config:/app/config:ro
      - db-data:/db/data
    environment:
      RUST_LOG: info
      RUST_BACKTRACE: 1
      NIP29__ENVIRONMENT: production
    ports:
      - "8080:8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.groups_relay.rule=Host(`{{ domain }}`)"
      - "traefik.http.routers.groups_relay.entrypoints=websecure"
      - "traefik.http.services.groups_relay.loadbalancer.server.port=8080"
    restart: always
    networks:
      - proxy

volumes:
  db-data:

networks:
  proxy:
    external: true
3 changes: 3 additions & 0 deletions roles/groups_relay/templates/settings.production.yml.tpl
@@ -0,0 +1,3 @@
relay:
  auth_url: "wss://{{ inventory_hostname }}"
  db_path: "/db/data"