What are the strategies to sync with remote storage? #510
Comments
Hi @bokolob, one approach is to store individual changes (as binary blobs) as they are generated, and from time to time replace the individual changes with the result of calling …
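The "append changes, compact later" idea can be sketched like this. This is a minimal in-memory sketch: the `ChangeStore` class and its methods are hypothetical names, and plain strings stand in for the binary change blobs that a real app would produce with Automerge and write to a database.

```javascript
// Hypothetical in-memory store illustrating "append changes, compact later".
// In a real app the blobs would be Automerge binary changes (Uint8Array)
// persisted to a database; here plain strings stand in for the blobs.
class ChangeStore {
  constructor() {
    this.snapshot = null;   // result of the last compaction
    this.changes = [];      // individual change blobs appended since then
  }

  // Called whenever a client produces a new change: cheap append, no rewrite.
  appendChange(blob) {
    this.changes.push(blob);
  }

  // From time to time, fold the accumulated changes into one snapshot.
  // (With Automerge this step would load the snapshot, apply the changes,
  // and save the resulting document as the new snapshot.)
  compact(applyAll) {
    this.snapshot = applyAll(this.snapshot, this.changes);
    this.changes = [];      // the individual changes are no longer needed
  }

  // Everything a reader needs to reconstruct the latest document.
  toLoad() {
    return { snapshot: this.snapshot, pending: this.changes.length };
  }
}

// Usage: strings stand in for binary change blobs.
const store = new ChangeStore();
store.appendChange("change-1");
store.appendChange("change-2");
store.compact((snap, changes) => (snap || "") + changes.join("+"));
console.log(store.toLoad()); // → { snapshot: "change-1+change-2", pending: 0 }
```

The point of the pattern is that writes stay cheap (an append per change) while reads stay bounded (one snapshot plus a short tail of recent changes).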
Hi, does this mean there should be a special server process that saves the data? But if such a process dies, we will need to recover the lost changes. Am I right? Or am I overcomplicating the system? ;)
Sorry, I don't follow what you're asking. Can you give some more detail? Generally, as with most database-backed apps, you can store your persistent state in the DB, and the app code can load it from the DB when needed.
Ok, I will try. I need to keep the latest version of the JSON document in a database: while two clients are editing the document, a third should be able to read the latest version.

One possibility is that each client saves to the DB: every client reads the document from the DB, applies new changes or merges in its own document, and saves it back, or simply replaces the whole document with its own version. If two or more clients do that, they will overwrite each other's documents. I'm not sure that's a problem, but if it is, we will need a lock. Another drawback is the number of write requests to the DB.

Another possibility is a server process that connects as a client, receives all updates, and syncs its document with the DB. We don't need any locks here, but what happens if that process dies? Maybe we should run one such process on each hardware node. In any case, we need some control over those processes.
Hey, tell me if I ask something weird :) |
Both possibilities you mention seem like reasonable designs. If multiple clients write to the same DB, I think some form of locking (or an atomic compare-and-set) will be necessary to avoid losing updates. Alternatively, you can store a log of changes in the DB, which is append-only and hence has no problems with concurrent overwrites. But the best way of using Automerge is in a local-first style, where each client has its own database, the server has its own database, and they sync with each other using the Automerge sync protocol.
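The compare-and-set idea can be sketched as follows. This is a minimal sketch with an in-memory `db` object standing in for a database row with a version column; the merge step here just concatenates strings, where real code would merge Automerge documents.

```javascript
// Optimistic concurrency sketch: each write must name the version it read;
// a mismatched version means another client wrote first, so the caller
// re-reads, re-merges its local edits, and retries.
const db = { version: 0, doc: "" };

function compareAndSet(expectedVersion, newDoc) {
  if (db.version !== expectedVersion) return false; // lost the race
  db.doc = newDoc;
  db.version += 1;
  return true;
}

// Client-side save loop: read, merge local edits in, try to write, retry.
function saveWithRetry(mergeLocalInto) {
  for (;;) {
    const seenVersion = db.version;
    const merged = mergeLocalInto(db.doc); // merging documents in real code
    if (compareAndSet(seenVersion, merged)) return merged;
    // else: someone else wrote in between; loop re-reads and re-merges
  }
}

saveWithRetry(doc => doc + "[edit-A]");
saveWithRetry(doc => doc + "[edit-B]");
console.log(db); // both edits survive: { version: 2, doc: "[edit-A][edit-B]" }
```

Because the retry re-merges rather than overwrites, no client's edits are silently lost, which is exactly the failure mode of the "replace the whole document" variant.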
I'm a complete newbie, just designing a collaboration system.
My current scheme looks like this: multiple clients connect through Socket.IO to a RabbitMQ broker, which routes messages between them. When one client has some changes, it sends them through Socket.IO to everyone who needs them.
Ok, but when should I sync with the database?
The simplest way is to sync on every change, but that would be CPU-intensive. I have some other ideas, but I don't like them :)
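One common middle ground between "persist every change" and "persist rarely" is to debounce: apply incoming changes in memory immediately, but write to the database only after a quiet period, so a burst of edits costs one write. A minimal sketch (the `persist` callback and the timing values are hypothetical):

```javascript
// Debounced persistence: every incoming change marks the state dirty and
// (re)starts a timer; `persist` runs only after `quietMs` with no new
// changes, so a burst of edits produces a single database write.
function makeDebouncedSaver(persist, quietMs) {
  let timer = null;
  let dirty = false;
  return {
    markDirty() {
      dirty = true;
      if (timer) clearTimeout(timer);
      timer = setTimeout(() => {
        if (dirty) { persist(); dirty = false; }
        timer = null;
      }, quietMs);
    },
  };
}

// Usage: count how many times we actually hit the database.
let writes = 0;
const saver = makeDebouncedSaver(() => { writes += 1; }, 50);
saver.markDirty(); // change 1
saver.markDirty(); // change 2 arrives quickly: previous timer is cancelled
saver.markDirty(); // change 3
setTimeout(() => console.log(writes), 200); // one write for the whole burst
```

A production version would usually also add a maximum interval (flush after N seconds even if changes keep arriving) so a continuously-edited document is never left unpersisted for too long.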