
Multiple instances running on same server interfere with each other #127

Open
lbdroid opened this issue Sep 9, 2021 · 6 comments · May be fixed by #202
Labels
bug Something isn't working

Comments

@lbdroid commented Sep 9, 2021

The situation is this:

I have two instances of Nextcloud running on the same physical server under different domain names.
These two instances use different Redis databases (the dbindex parameter, or whatever you would like to call it; see the config excerpt below).
I have also set up two instances of notify_push listening on different ports on the same server.
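For reference, this is the kind of config.php setting involved; a sketch only, with assumed install paths, and the host/port taken from the traces below. Only dbindex differs between the two instances.

        # show the Redis block of each instance's config.php (paths assumed)
        grep -A 4 "'redis'" /var/www/nc1/config/config.php
        #  'redis' => [
        #    'host' => '127.0.0.1',
        #    'port' => 6379,
        #    'dbindex' => 0,   # the second instance uses 'dbindex' => 2
        #  ],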

Startup trace on the first instance:

[2021-09-09 10:37:54.083333 -04:00] TRACE [notify_push] src/main.rs:46: Running with config: Config { database: AnyConnectOptions(MySql(MySqlConnectOptions { host: "localhost", port: 3306, socket: None, username: "----", password: Some("----"), database: Some("nc1"), ssl_mode: Preferred, ssl_ca: None, statement_cache_capacity: 100, charset: "utf8mb4", collation: None, log_settings: LogSettings { statements_level: Info, slow_statements_level: Warn, slow_statements_duration: 1s } })), database_prefix: "oc_", redis: [ConnectionInfo { addr: Tcp("127.0.0.1", 6379), redis: RedisConnectionInfo { db: 0, username: None, password: None } }], nextcloud_url: "https://nc1.my.tld/", metrics_bind: None, log_level: "notify_push=trace", bind: Tcp(0.0.0.0:7867), allow_self_signed: false, no_ansi: false }
[2021-09-09 10:37:54.086174 -04:00] DEBUG [notify_push::storage_mapping] src/storage_mapping.rs:96: querying storage mapping for 1
[2021-09-09 10:37:54.154635 -04:00] TRACE [notify_push] src/main.rs:63: Listening on 0.0.0.0:7867

Startup trace on the second instance:

[2021-09-09 10:35:48.107783 -04:00] TRACE [notify_push] src/main.rs:46: Running with config: Config { database: AnyConnectOptions(MySql(MySqlConnectOptions { host: "localhost", port: 3306, socket: None, username: "----", password: Some("----"), database: Some("nc2"), ssl_mode: Preferred, ssl_ca: None, statement_cache_capacity: 100, charset: "utf8mb4", collation: None, log_settings: LogSettings { statements_level: Info, slow_statements_level: Warn, slow_statements_duration: 1s } })), database_prefix: "oc_", redis: [ConnectionInfo { addr: Tcp("127.0.0.1", 6379), redis: RedisConnectionInfo { db: 2, username: None, password: None } }], nextcloud_url: "https://nc2.different.tld/", metrics_bind: None, log_level: "notify_push=trace", bind: Tcp(0.0.0.0:7869), allow_self_signed: false, no_ansi: false }
[2021-09-09 10:35:48.111389 -04:00] DEBUG [notify_push::storage_mapping] src/storage_mapping.rs:96: querying storage mapping for 1
[2021-09-09 10:35:48.386990 -04:00] TRACE [notify_push] src/main.rs:63: Listening on 0.0.0.0:7869

As you can see, the instances are listening on different ports, connecting to different Nextcloud instances, and using different Redis databases.

Apache config, INST 1:

        ProxyPass /push/ws ws://localhost:7867/ws
        ProxyPass /push/ http://localhost:7867/
        ProxyPassReverse /push/ http://localhost:7867/

Apache config, INST 2:

        ProxyPass /push/ws ws://localhost:7869/ws
        ProxyPass /push/ http://localhost:7869/
        ProxyPassReverse /push/ http://localhost:7869/

When a client connects, it reaches the correct instance of notify_push, which then regularly pings the users connected to it.

The PROBLEM, however, is that when a message is generated by EITHER instance of Nextcloud, it ends up being received by BOTH instances of notify_push. This is especially a problem when usernames overlap between the two Nextcloud instances: a notification meant for {username} on nc1 is delivered to {username} on both nc1 and nc2.

@icewind1991 (Member)

Are both servers using the same Redis instance?

@lbdroid (Author) commented Sep 9, 2021

It's the same instance with a different database/prefix/index for each.
One is using Redis database index 0, and the other is using index 2.

According to the trace log, they're both picking up the correct index:

redis: RedisConnectionInfo { db: 0, username: None, password: None } }]
redis: RedisConnectionInfo { db: 2, username: None, password: None } }]

@lbdroid (Author) commented Sep 9, 2021

According to redis-cli monitor, messages are being sent from Nextcloud to the appropriate database:
(note the 0 or 2 at the start of the square brackets)

1631202885.069967 [0 127.0.0.1:44532] "PUBLISH" "notify_notification" "{\"user\":\"nc1user\"}"
1631203209.068876 [2 127.0.0.1:54300] "PUBLISH" "notify_notification" "{\"user\":\"nc2user\"}"

@lbdroid (Author) commented Sep 9, 2021

Looks like Redis pub/sub doesn't deal with the database id at all; channels are global to the whole server, so the pub/sub channels need to be prefixed in order to be distinguished. That means that instead of publishing to "notify_notification", it would be more suitable to publish to "0_notify_notification" or "2_notify_notification", or to something distinct from the database id, such as NEXTCLOUD_URL.
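This is easy to demonstrate with redis-cli; a minimal sketch, assuming a local Redis on the default port. A subscriber attached to database 2 still receives a message published from database 0:

        # terminal 1: subscribe while attached to database 2
        redis-cli -n 2 SUBSCRIBE notify_notification

        # terminal 2: publish while attached to database 0
        redis-cli -n 0 PUBLISH notify_notification '{"user":"nc1user"}'

        # the subscriber in terminal 1 receives the message anyway:
        # channels are global to the Redis server, not per-database

This matches the Redis documentation: pub/sub has no relation to the key space, so SELECTing a database does not isolate channels.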

@lbdroid (Author) commented Jun 10, 2022

@icewind1991 - anything on the horizon for this issue?

icewind1991 added a commit that referenced this issue Dec 28, 2022
fixes #127

Signed-off-by: Robin Appelman <[email protected]>
icewind1991 linked a pull request Dec 28, 2022 that will close this issue
icewind1991 added a commit that referenced this issue Jan 23, 2023
fixes #127

Signed-off-by: Robin Appelman <[email protected]>
@joshtrichards (Member)

Looks like there is an associated PR (#202), but it's unmerged. It might help if someone feels up to testing it; see the build sketch below.
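One way to build the PR locally for testing; a sketch, assuming a standard Rust toolchain and the repository at github.com/nextcloud/notify_push, using GitHub's pull-request refs:

        # fetch the unmerged PR branch and build it
        git clone https://github.com/nextcloud/notify_push
        cd notify_push
        git fetch origin pull/202/head:pr-202
        git checkout pr-202
        cargo build --release   # resulting binary: target/release/notify_push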

joshtrichards added the bug label Oct 6, 2023