Broadcast vanish requests #14
Conversation
```javascript
try {
  await REDIS.xadd(STREAM_KEY, "*", event);
} catch (error) {
  log(`Failed to push request ${event.id} to Redis Stream: ${error}`);
}
```
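One way to make a failure like the one caught above harder to miss is to retry before giving up and then escalate through a hook. A minimal sketch, assuming a client with the same `xadd` signature as the snippet; `pushWithRetry`, the backoff policy, and the `onFatal` hook (where a Sentry capture or alerting call could live) are all hypothetical, not part of the actual policy:

```javascript
// Hypothetical wrapper around the xadd call above: retries with exponential
// backoff, and on the final failure invokes an escalation hook instead of
// only writing a log line.
async function pushWithRetry(redis, streamKey, event, { attempts = 3, onFatal = () => {} } = {}) {
  for (let i = 1; i <= attempts; i++) {
    try {
      await redis.xadd(streamKey, "*", event);
      return true;
    } catch (error) {
      if (i === attempts) {
        // Last resort: escalate (e.g. Sentry capture, alert webhook), not just log.
        onFatal(error);
        return false;
      }
      // Exponential backoff between attempts: 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** i));
    }
  }
}
```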
How can we make sure these sorts of errors blow up in our face? Should we write a Grafana alert? Wire this up to Sentry? Get a log container to watch the logs?
I remember we talked about some ideas for detailed relay metrics. I'd first ask whether @gergelypolonkai knows about this, because I recall a comment about a service that inspects logs, which could be useful for other things too.
One alternative is to leverage the sidecar service that will run next to the relay, which I have almost ready to go. Right now it's just a service that listens to the Redis stream and performs the deletions; checking this stream is an indirect way of knowing that all is good. We could run an HTTP service inside it and expose a /metrics endpoint. I assume we could even periodically tail the strfry logs and grep for whatever we want, but for that the first alternative may be better.
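To make the /metrics idea concrete, here is a hedged sketch of what the sidecar could serve in Prometheus text format. The metric names, and the idea of using the stream length (XLEN) plus the deletion loop's last-run timestamp as health signals, are assumptions for illustration, not the actual sidecar API:

```javascript
// Hypothetical gauges for a /metrics endpoint: stream backlog (from XLEN)
// and the time of the last processed deletion. Rendering is a pure function
// so the HTTP layer (node:http, Deno.serve, ...) stays trivial.
function renderMetrics({ streamLength, lastDeletionTs }) {
  return [
    "# TYPE vanish_requests_stream_length gauge",
    `vanish_requests_stream_length ${streamLength}`,
    "# TYPE vanish_last_deletion_timestamp_seconds gauge",
    `vanish_last_deletion_timestamp_seconds ${lastDeletionTs}`,
    "",
  ].join("\n");
}
```

A growing `vanish_requests_stream_length` would then be an alertable signal that the deletion consumer has stalled.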
We could prioritize having a Loki instance on the metrics server that collects logs; after that we can indeed have a Grafana alert that inspects such logs.
Yeah I think it's time. I will file a ticket to tag this onto the end of the account deletion work.
```diff
@@ -53,6 +54,9 @@ const policies = [
     whitelist: [localhost, eventsIp],
   },
 ],
+
+// Broadcast vanish requests to Redis
+broadcastVanishRequests,
```
It would be great if we could somehow write a test to verify that this is run after signature verification.
@mplorentz the verification happens in the strfry ingester, and the plugins run after it, before the event is passed to the writer. I'm asking in the Telegram channel just to be sure, but everything a plugin sees is already validated, with its signature verified.
A manual check to protect against weird bugs or future changes wouldn't hurt, though I'd be very surprised if that happened; it would be a major bug.
Ok, sounds like we don't need an automated test. It seemed possible to me that some folks might be running some spam filters before signature verification as an optimization. But if that's not even possible in strfry with the plugin system then that's good enough for me 👍
Looks good!
Delete from vanish request
The tests now run on push and can already be seen in the current branch: https://github.com/planetary-social/nosrelay/actions/runs/11220498379/job/31188797392?pr=14
Even though run_tests.sh is pretty obvious, I think it would be nice to have a section in the README.md on how to run the tests locally.
I didn't read the tests very carefully or try running any of this, but I'm good with merging it all the same 👍 I can see you put a lot of effort into the automated tests. Thanks for doing that.
```yaml
push:
  branches: ['main']
pull_request:
  branches: ['main']
```
we don't want to publish new docker images on every pull request commit, or do we?
I think we need to, because the integration tests run the full stack as compose.yml describes. This way the sh script can hit all of the complete services, even using external tools like nak.
```javascript
// Script to manually push a vanish request to the vanish_requests stream in
// Redis. Using deno to avoid any discrepancy vs the strfry policy. The script
// pushes an unsigned request so that we can act on out-of-band requests,
// not just ones from the nostr network.
```
Can you add example usage to this comment?
This is part of planetary-social/infrastructure#139.
Adds a Strfry policy to send vanish request events to a Redis stream for processing by other services. Streams are used instead of pub/sub because pub/sub subscriptions don't retain messages for disconnected subscribers, whereas streams provide the persistence this use case needs. We could trim the stream at some point, but per NIP-62, if we don't, we get the optional bookkeeping for free.
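The retention difference the description relies on can be shown with a toy in-memory model (this is an illustration of the semantics, not Redis itself; in Redis terms it is `PUBLISH`/`SUBSCRIBE` versus `XADD`/`XREAD`, with `XTRIM` as the eventual trimming):

```javascript
// Toy model: a pub/sub channel drops messages published while no subscriber
// is connected, while a stream retains entries so consumers that attach
// later can catch up from any position.
class PubSub {
  constructor() { this.subscribers = []; }
  publish(msg) { this.subscribers.forEach((fn) => fn(msg)); } // no retention
  subscribe(fn) { this.subscribers.push(fn); }
}

class Stream {
  constructor() { this.entries = []; }
  add(msg) { this.entries.push(msg); }                 // retained until trimmed
  readFrom(index) { return this.entries.slice(index); } // late consumers catch up
}
```

This is why a deletion sidecar that restarts (or doesn't exist yet) can still process every vanish request from the stream, which would be lost under pub/sub.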
There's a related Ansible PR to pass the Redis connection string and relay URL correctly.
Next step: create a service to read events from the stream and delete corresponding events from the relay.
Pending: