examples: upgrade the write-patterns merge logic.
thruflo committed Dec 6, 2024
1 parent 2b94254 commit bbea896
Showing 9 changed files with 192 additions and 115 deletions.
examples/write-patterns/patterns/3-shared-persistent/README.md
@@ -28,6 +28,15 @@ Combining data on-read makes local reads slightly slower.

Writes are still made via an API. This can often be helpful and pragmatic, allowing you to [re-use your existing API](https://electric-sql.com/blog/2024/11/21/local-first-with-your-existing-api). However, you may want to avoid running an API and leverage [through the DB sync](../4-through-the-db) for a purer local-first approach.

+## Implementation notes
+
+The merge logic in the `matchWrite` function supports rebasing local optimistic state on concurrent updates from other users.
+
+This differs from the previous optimistic state example in that it matches inserts and updates on the `write_id`, rather than the `id`. This means that concurrent updates to the same row will not
+clear the optimistic state, which allows it to be rebased on changes made concurrently to the same data by other users.
+
+Note that we still match deletes by `id`, because delete operations can't update the `write_id` column. If you'd like to support revertable concurrent deletes, you can use soft deletes (which are really just updates).

## How to run

See the [How to run](../../README.md#how-to-run) section in the example README.
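The soft-delete suggestion above can be illustrated with a short sketch. This is not part of the example's code: the `deleted_at` column and the helpers below are hypothetical, showing how a delete expressed as an update can carry a fresh `write_id` and so be matched and rebased like any other update.

```ts
import { v4 as uuidv4 } from 'uuid'

// Hypothetical row shape with a soft-delete column (not in the example schema).
type SoftDeletableTodo = {
  id: string
  title: string
  completed: boolean
  created_at: Date
  deleted_at: Date | null
  write_id: string | null
}

// A soft delete is just an update, so it gets its own `write_id` and can be
// matched against the shape stream (and rebased) like any other update.
function softDeleteWrite(id: string) {
  const write_id = uuidv4()

  return {
    operation: 'update' as const,
    value: { id, deleted_at: new Date(), write_id },
  }
}

// When rendering, filter soft-deleted rows out of the merged list.
function visibleTodos(todos: SoftDeletableTodo[]): SoftDeletableTodo[] {
  return todos.filter((todo) => todo.deleted_at === null)
}
```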
73 changes: 48 additions & 25 deletions examples/write-patterns/patterns/3-shared-persistent/index.tsx
Original file line number Diff line number Diff line change
@@ -22,14 +22,14 @@ type PartialTodo = Partial<Todo> & {
  id: string
}

-type Write = {
-  key: string
+type LocalWrite = {
+  id: string
  operation: Operation
  value: PartialTodo
}

// Define a shared, persistent, reactive store for local optimistic state.
-const optimisticState = proxyMap<string, Write>(
+const optimisticState = proxyMap<string, LocalWrite>(
  JSON.parse(localStorage.getItem(KEY) || '[]')
)
subscribe(optimisticState, () => {
@@ -39,15 +39,16 @@ subscribe(optimisticState, () => {
/*
 * Add a local write to the optimistic state
 */
-function addLocalWrite(operation: Operation, value: PartialTodo): Write {
-  const key = uuidv4()
-  const write: Write = {
-    key,
+function addLocalWrite(operation: Operation, value: PartialTodo): LocalWrite {
+  const id = uuidv4()
+
+  const write: LocalWrite = {
+    id,
    operation,
    value,
  }

-  optimisticState.set(key, write)
+  optimisticState.set(id, write)

  return write
}
@@ -56,29 +57,50 @@ function addLocalWrite(operation: Operation, value: PartialTodo): Write {
 * Subscribe to the shape `stream` until the local write syncs back through it.
 * At which point, delete the local write from the optimistic state.
 */
-async function matchWrite(stream: ShapeStream<Todo>, write: Write) {
-  const { key, operation, value } = write
+async function matchWrite(
+  stream: ShapeStream<Todo>,
+  write: LocalWrite
+): Promise<void> {
+  const { operation, value } = write

+  const matchFn =
+    operation === 'delete'
+      ? matchBy('id', value.id)
+      : matchBy('write_id', write.id)
+
  try {
-    await matchStream(stream, [operation], matchBy('id', value.id))
+    await matchStream(stream, [operation], matchFn)
  } catch (_err) {
    return
  }

-  optimisticState.delete(key)
+  optimisticState.delete(write.id)
}

/*
 * Make an HTTP request to send the write to the API server.
 * If the request fails, delete the local write from the optimistic state.
 * If it succeeds, return the `txid` of the write from the response data.
 */
-async function sendRequest(path: string, method: string, write: Write) {
-  const { key, value } = write
+async function sendRequest(
+  path: string,
+  method: string,
+  { id, value }: LocalWrite
+): Promise<void> {
+  const data = {
+    ...value,
+    write_id: id,
+  }
+
+  let response: Response | undefined
  try {
-    await api.request(path, method, value)
-  } catch (_err) {
-    optimisticState.delete(key)
+    response = await api.request(path, method, data)
+  } catch (err) {
+    // ignore
  }
+
+  if (response === undefined || !response.ok) {
+    optimisticState.delete(id)
+  }
}

@@ -95,15 +117,16 @@ export default function SharedPersistent() {
      timestamptz: (value: string) => new Date(value),
    },
  })
+
  const sorted = data ? data.sort((a, b) => +a.created_at - +b.created_at) : []

  // Get the local optimistic state.
-  const writes = useSnapshot<Map<string, Write>>(optimisticState)
+  const localWrites = useSnapshot<Map<string, LocalWrite>>(optimisticState)

  // Merge the synced state with the local state.
-  const todos = writes
+  const todos = localWrites
    .values()
-    .reduce((synced: Todo[], { operation, value }: Write) => {
+    .reduce((synced: Todo[], { operation, value }: LocalWrite) => {
      switch (operation) {
        case 'insert':
          return synced.some((todo) => todo.id === value.id)
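The rest of the merge `reduce` is collapsed in this view. Purely as a sketch (not the file's actual code), the remaining cases of such a merge might look like the following, with local updates rebased onto the latest synced row and local deletes filtered out:

```ts
type Operation = 'insert' | 'update' | 'delete'

type Todo = {
  id: string
  title: string
  completed: boolean
  created_at: Date
}

type LocalWrite = {
  id: string
  operation: Operation
  value: Partial<Todo> & { id: string }
}

// Sketch of merging local optimistic writes over the synced rows.
function mergeLocalWrites(synced: Todo[], writes: LocalWrite[]): Todo[] {
  return writes.reduce((todos, { operation, value }) => {
    switch (operation) {
      case 'insert':
        // Skip local inserts that have already synced back in.
        return todos.some((todo) => todo.id === value.id)
          ? todos
          : [...todos, value as Todo]
      case 'update':
        // Rebase the local change on top of the latest synced row.
        return todos.map((todo) =>
          todo.id === value.id ? { ...todo, ...value } : todo
        )
      case 'delete':
        return todos.filter((todo) => todo.id !== value.id)
      default:
        return todos
    }
  }, synced)
}
```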
@@ -140,7 +163,6 @@ export default function SharedPersistent() {

    startTransition(async () => {
      const write = addLocalWrite('insert', data)
-
      const fetchPromise = sendRequest(path, 'POST', write)
      const syncPromise = matchWrite(stream, write)

@@ -155,13 +177,12 @@

    const path = `/todos/${id}`
    const data = {
-      id: id,
+      id,
      completed: !completed,
    }

    startTransition(async () => {
      const write = addLocalWrite('update', data)
-
      const fetchPromise = sendRequest(path, 'PUT', write)
      const syncPromise = matchWrite(stream, write)

@@ -175,10 +196,12 @@
    const { id } = todo

    const path = `/todos/${id}`
+    const data = {
+      id,
+    }

    startTransition(async () => {
-      const write = addLocalWrite('delete', { id })
-
+      const write = addLocalWrite('delete', data)
      const fetchPromise = sendRequest(path, 'DELETE', write)
      const syncPromise = matchWrite(stream, write)

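Matching on `write_id` only works if the API server persists the incoming `write_id` on the row, so that it syncs back out through the shape stream. The example has its own API server elsewhere in the repo; the handler below is not that code, just an illustrative Express + node-postgres sketch of the requirement (route, table and column names are assumptions):

```ts
import express from 'express'
import pg from 'pg'

const app = express()
app.use(express.json())

// Assumes a DATABASE_URL env var and a `todos` table with a `write_id` column.
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL })

app.put('/todos/:id', async (req, res) => {
  const { id } = req.params
  const { completed, write_id } = req.body

  // Persisting `write_id` alongside the change is what lets the client's
  // `matchWrite` recognise its own write when it syncs back in.
  await pool.query(
    'UPDATE todos SET completed = $1, write_id = $2 WHERE id = $3',
    [completed, write_id, id]
  )

  res.sendStatus(200)
})

app.listen(3001)
```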
18 changes: 5 additions & 13 deletions examples/write-patterns/patterns/4-through-the-db/README.md
@@ -36,23 +36,15 @@ Good use-cases include:

Using a local embedded database adds a relatively-heavy dependency to your app. The shadow table and trigger machinery complicate your client side schema definition.

-## Complexities
-
-### 1. Merge logic
-
-The entrypoint in the code for merge logic is the very blunt `delete_local_on_synced_trigger` defined in [`./local-schema.sql`](./local-schema.sql). The current implementation just wipes any local state for a row when any insert, update or delete to that row syncs in from the server.
-
-This approach works and is simple to reason about. However, it won't preserve local changes on top of concurrent changes by other users (or tabs or devices). More sophisticated implementations could apply smarter merge logic here, such as rebasing the local changes on the new server state. This typically involves maintaining more bookkeeping info and having more complex triggers.
-
-### 2. Rollbacks
-
-Syncing changes in the background complicates any potential rollback handling. In the [shared persistent optimistic state](../../3-shared-persistent) pattern, you can detect a write being rejected by the server whilst still in context, handling user input. With through-the-database sync, this context is harder to reconstruct.
-
-In this example implementation, we implement an extremely blunt rollback strategy: clearing all local state and writes in the event of any write being rejected by the server.
+## Implementation notes

+The merge logic in the `delete_local_on_synced_insert_and_update_trigger` in [`./local-schema.sql`](./local-schema.sql) supports rebasing local optimistic state on concurrent updates from other users.

-You may want to implement a more nuanced strategy and, for example, provide information to the user about what is happening and / or minimise data loss by only clearing local state that's causally dependent on a rejected write. This opens the door to a lot of complexity that may best be addressed by using an existing framework.
+The rollback strategy in the `rollback` method of the `ChangeLogSynchronizer` in [`./sync.ts`](./sync.ts) is very naive: it clears all local state and writes in the event of any write being rejected by the server. You may want to implement a more nuanced strategy, for example providing information to the user about what is happening and / or minimising data loss by only clearing local state that's causally dependent on a rejected write.

-See the [Writes guide](https://electric-sql.com/docs/guides/writes) for more information and links to [existing frameworks](https://electric-sql.com/docs/guides/writes#tools).
+This opens the door to a lot of complexity that may best be addressed by using an existing framework. See the [Writes guide](https://electric-sql.com/docs/guides/writes) for more information and links to [existing frameworks](https://electric-sql.com/docs/guides/writes#tools).

## How to run

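Returning to the rollback note above: the `rollback` method it refers to lives in [`./sync.ts`](./sync.ts), which isn't shown in this view. As a rough sketch of the "clear everything" strategy it describes (the interface and table names below are assumptions, not the file's actual contents):

```ts
// Minimal shape of a local database client, just for the sketch.
interface LocalDb {
  query(sql: string): Promise<unknown>
}

class ChangeLogSynchronizerSketch {
  constructor(private db: LocalDb) {}

  // Naive rollback: if the server rejects any write, discard all pending
  // local changes and optimistic state rather than trying to repair them.
  async rollback(): Promise<void> {
    await this.db.query('DELETE FROM changes') // local change log (assumed name)
    await this.db.query('DELETE FROM todos_local') // local optimistic rows (assumed name)
  }
}
```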