Commit

Merge pull request #4 from s12f/improve_sequencer
s12f authored Dec 11, 2024
2 parents ee4f917 + 7611729 commit 8add00b
Showing 9 changed files with 179 additions and 80 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/tlc.yaml
@@ -26,7 +26,7 @@ jobs:
- percolator/Transfer.tla
- percolator/SnapshotRead.tla
- percolator/MC_SIB_SNAPSHOT.tla
- sequencer/Sequencer.tla
- sequencer/MCSequencer.tla
- simplified-fast-paxos/SimplifiedFastPaxos.tla
- isolation-models/ALL_TESTS.tla
steps:
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
.vim
.vscode
14 changes: 9 additions & 5 deletions isolation-models/readme.md
@@ -1,6 +1,6 @@
# Transaction Isolation Models

There are different isolation models to specify the database transaction levels:
There are different isolation models to specify the database transaction isolation levels:

+ ANSI: ANSI/ISO SQL-92 and [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf)
+ ADYA:
@@ -10,10 +10,14 @@ There are different isolation models to specify the database transaction levels:

## SIB

[SIB_ISOLATION.tla](./SIB_ISOLATION.tla) is the specification of the SIB isolation model.
You can use the IsolationExecution-related operations (e.g. SerializableExecution) to verify a transaction execution (an init state plus a transaction sequence),
or the Isolation-related operations (e.g. SerializableIsolation) to verify an init state and a set of finished (committed or aborted) transactions.
Here is an example verifying that Snapshot Isolation allows the Write Skew anomaly but Serializable Isolation doesn't:
[SIB_ISOLATION.tla](./SIB_ISOLATION.tla) is the specification of the SIB isolation
model. You can use the IsolationExecution-related operations (e.g.
SerializableExecution) to verify a transaction execution (an init state plus a
transaction sequence), or the Isolation-related operations (e.g.
SerializableIsolation) to verify an init state and a set of finished (committed
or aborted) transactions. Here is an example verifying that Snapshot Isolation
allows the Write Skew anomaly but Serializable Isolation doesn't:
```tla
init ≜ [ k1 ↦ 0, k2 ↦ 0, k3 ↦ 0 ]
```
19 changes: 17 additions & 2 deletions readme.md
@@ -17,10 +17,25 @@ Verify some old or new concepts/models/algorithms/ideas of distributed/discrete

What's Next:

- [x] Github CI
- [X] Github CI
- [ ] Separate model from specification
- [ ] Consider Removing Unicode
- [X] Modeling Transaction Isolation Levels
- [ ] Add Liveness Invariants
- [ ] Distributed Transaction On Sql Databases
- [ ] HLC and Quorum Intersection

## Q & A

#### Why are the TLA+ files unicode-based?

In short, the Unicode-based source is more readable than plain ASCII.

With plain ASCII, many other projects generate a styled PDF from the
TLA+ files for readability, which has some disadvantages:
+ PDF files don't work well with git
+ a (pdf)LaTeX dependency
+ extra effort to keep the TLA+ files and the PDF files in sync
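
For comparison, here is the same small made-up definition in the Unicode syntax
used by this repository and in the equivalent plain-ASCII syntax (the
definition itself is purely illustrative):

```tla
\* Unicode syntax (as used in this repository); an illustrative, made-up definition
Example ≜ ∀ s ∈ Servers: states[s].role = Leader ⇒ states[s].validLease

\* The equivalent plain-ASCII syntax
Example == \A s \in Servers: states[s].role = Leader => states[s].validLease
```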

#### How do I input the unicode symbols?

Please read: https://github.com/tlaplus/tlaplus-standard/tree/main/unicode
5 changes: 5 additions & 0 deletions sequencer/MCSequencer.cfg
@@ -0,0 +1,5 @@
CONSTANTS
None = None
INVARIANT Inv
SPECIFICATION Spec
CONSTRAINT Constraint
18 changes: 18 additions & 0 deletions sequencer/MCSequencer.tla
@@ -0,0 +1,18 @@
---- MODULE MCSequencer ----

EXTENDS TLC

CONSTANT None

VARIABLES states, history, storage, nextRid

Servers ≜ {"s1", "s2", "s3"}
BatchSize ≜ 3

INSTANCE Sequencer

Constraint ≜
    ∧ nextRid < 7
    ∧ storage < 7

====
8 changes: 0 additions & 8 deletions sequencer/Sequencer.cfg

This file was deleted.

179 changes: 117 additions & 62 deletions sequencer/Sequencer.tla
@@ -2,75 +2,130 @@

EXTENDS Integers, Sequences

CONSTANTS Servers, MaxEpoch, MaxSeq, BatchSize

VARIABLES states, epoch, output, diskSeq

PRIMARY ≜ "Primary"
SECONDARY ≜ "Secondary"

ServerState ≜ [ role: { PRIMARY }, seq: Nat ]
            ∪ [ role: { SECONDARY } ]

TypeOK ≜
    ∧ epoch ∈ Nat
    ∧ states ∈ [ Servers → ServerState ]
    ∧ output ∈ Seq(1‥MaxSeq)
    ∧ diskSeq ∈ Nat

Constraint ≜
    ∧ epoch ∈ (1‥MaxEpoch)
    ∧ output ∈ Seq(1‥MaxSeq)
    ∧ diskSeq ∈ 0‥MaxSeq
    ∧ ¬ ∃ s ∈ Servers: states[s].role = PRIMARY ∧ states[s].seq > MaxSeq
CONSTANTS Servers, BatchSize, None

\* Server States
VARIABLES states
\* history: Seq({Begin, LP, End})
VARIABLES history
\* storage: Nat
VARIABLES storage
\* next request id
VARIABLES nextRid

Leader ≜ "Leader"
Backup ≜ "Backup"

Begin ≜ "Begin"
LP ≜ "LP"
End ≜ "End"

BackupServer ≜
    [ role ↦ Backup
    , validLease ↦ FALSE
    , nextSeq ↦ 0
    , maxSeq ↦ 0
    , in ↦ None
    , out ↦ None
    ]

Init ≜
    ∧ states = [ s ∈ Servers ↦ [ role ↦ SECONDARY ] ]
    ∧ epoch = 1
    ∧ output = ⟨⟩
    ∧ diskSeq = 0

GetSeq(s) ≜
    ∧ states[s].role = PRIMARY
    ∧ LET nextSeq ≜ states[s].seq
      IN  ∧ ∨ output' = Append(output, nextSeq) \* response seq to client
            ∨ UNCHANGED ⟨output⟩ \* failed response(e.g. msg loss, timeout, primary restart etc.)
          ∧ states' = [ states EXCEPT ![s].seq = nextSeq + 1 ]
          ∧ IF nextSeq < diskSeq
            THEN UNCHANGED diskSeq
            ELSE diskSeq' = nextSeq + BatchSize
    ∧ UNCHANGED epoch

PrimaryRestart(s) ≜
    ∧ states[s].role = PRIMARY
    ∧ states' = [states EXCEPT ![s] = [ role ↦ SECONDARY ]]
    ∧ UNCHANGED ⟨epoch, output, diskSeq⟩

Elect(s) ≜
    ∧ ∀ a ∈ Servers: states[a].role = SECONDARY
    ∧ epoch' = epoch + 1
    ∧ states' = [ states EXCEPT ![s] = [ role ↦ PRIMARY, seq ↦ diskSeq + 1 ] ]
    ∧ UNCHANGED ⟨output, diskSeq⟩
    ∧ states = [ s ∈ Servers ↦ BackupServer ]
    ∧ history = ⟨⟩
    ∧ storage = 0
    ∧ nextRid = 1

HandleRequest(s) ≜
    ∧ states[s].role = Leader
    ∧ states[s].in = None
    ∧ states[s].nextSeq < states[s].maxSeq
    ∧ states' = [ states EXCEPT ![s].in = nextRid ]
    ∧ nextRid' = nextRid + 1
    ∧ history' = Append(history, [ rid ↦ nextRid, typ ↦ Begin ])
    ∧ UNCHANGED storage

(*
This is the most important point of the algorithm:
the server always puts a request into its inbox first (the HandleRequest action),
then checks whether it still holds a valid lease (the linearizable point).
If it does, the sequence number in the inbox can be sent to the client safely.
*)
CheckRole(s) ≜
    ∧ states[s].role = Leader
    ∧ states[s].in ≠ None
    ∧ IF states[s].validLease
      THEN ∧ states' = [ states EXCEPT
                 ![s].out = [ rid ↦ states[s].in, seq ↦ states[s].nextSeq ],
                 ![s].in = None,
                 ![s].nextSeq = @ + 1
               ]
           ∧ history' = Append(history, [ typ ↦ LP, rid ↦ states[s].in, seq ↦ states[s].nextSeq ])
      \* Drop all states
      ELSE ∧ states' = [ states EXCEPT ![s] = BackupServer ]
           ∧ UNCHANGED history
    ∧ UNCHANGED ⟨storage, nextRid⟩

\* Reserve another batch of sequence numbers from storage
\* once the cached range is used up (nextSeq = maxSeq).
RenewLease(s) ≜
    ∧ states[s].role = Leader
    ∧ states[s].nextSeq = states[s].maxSeq
    ∧ storage' = storage + BatchSize
    ∧ states' = [ states EXCEPT ![s].maxSeq = storage + BatchSize ]
    ∧ UNCHANGED ⟨history, nextRid⟩

Response(s) ≜
    ∧ states[s].out ≠ None
    ∧ states' = [ states EXCEPT ![s].out = None ]
    ∧ history' = Append(history, [ rid ↦ states[s].out.rid, typ ↦ End, seq ↦ states[s].out.seq ])
    ∧ UNCHANGED ⟨storage, nextRid⟩

\* Lose all states
RestartLeader(s) ≜
    ∧ states[s].role = Leader
    ∧ states' = [states EXCEPT ![s] = BackupServer ]
    ∧ UNCHANGED ⟨history, storage, nextRid⟩

\* All states of the leader are still in memory,
\* but the lease becomes invalid asynchronously (based on the clock).
LeaseTimeout(s) ≜
    ∧ states[s].validLease
    ∧ states' = [ states EXCEPT ![s].validLease = FALSE ]
    ∧ UNCHANGED ⟨history, storage, nextRid⟩

\* Compete on storage; the winner becomes the leader.
NewLeader(s) ≜
    \* Only under the assumption that all servers have a bounded clock skew
    \* can this condition be implemented using clocks.
    ∧ ∀ a ∈ Servers: states[a].validLease = FALSE
    ∧ storage' = storage + BatchSize
    ∧ states' = [ states EXCEPT ![s] =
          [ role ↦ Leader
          , validLease ↦ TRUE
          , nextSeq ↦ storage
          , maxSeq ↦ storage + BatchSize
          , in ↦ None
          , out ↦ None ]
        ]
    ∧ UNCHANGED ⟨history, nextRid⟩

Next ≜ ∃ s ∈ Servers:
    ∨ GetSeq(s)
    ∨ PrimaryRestart(s)
    ∨ Elect(s)
    ∨ HandleRequest(s)
    ∨ CheckRole(s)
    ∨ RenewLease(s)
    ∨ Response(s)
    ∨ RestartLeader(s)
    ∨ LeaseTimeout(s)
    ∨ NewLeader(s)

Spec ≜ Init ∧ □[Next]_⟨states, epoch, output, diskSeq⟩
Spec ≜ Init ∧ □[Next]_⟨states, history, storage, nextRid⟩

SafePrimary ≜
    ∀ x, y ∈ Servers:
        states[x].role = PRIMARY ∧ states[y].role = PRIMARY ⇒ x = y
----------------------------------------------------------------------------

SafeSeqs(s) ≜
    ∀ x, y ∈ 1‥Len(s):
        x > y ⇒ s[x] > s[y]
\* Verify Linearizability by Linearizable points.
LinearizableHistory ≜
    ∀ e1, e2 ∈ DOMAIN history:
        history[e1].typ = LP ∧ history[e2].typ = LP ∧ e2 > e1
            ⇒ history[e2].seq > history[e1].seq
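\* For example, a history in which a later linearizable point carries a smaller
\* sequence number, e.g. ⟨ [rid ↦ 1, typ ↦ LP, seq ↦ 5], [rid ↦ 2, typ ↦ LP, seq ↦ 3] ⟩
\* (the concrete values are only illustrative), violates LinearizableHistory.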

Inv ≜
    ∧ TypeOK
    ∧ SafePrimary
    ∧ SafeSeqs(output)
Inv ≜ LinearizableHistory

============================================================================

13 changes: 11 additions & 2 deletions sequencer/readme.md
@@ -1,3 +1,12 @@
## TODO
# Sequencer

- [ ] Improve: linearizable Point
This is a distributed sequence generator, implemented
under the assumption that all servers have a bounded clock skew.

Files:
+ [Sequencer.tla](./Sequencer.tla): the specification of the Sequencer
+ [MCSequencer.tla](./MCSequencer.tla): Model Checking for Sequencer

Features:
+ Linearizability
+ Cached sequences for better performance
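
As an illustration of the linearizability check, here is one history that the
spec can produce and that satisfies the LinearizableHistory invariant of
[Sequencer.tla](./Sequencer.tla); the record shapes follow the spec, while the
concrete rid/seq values are only an example:

```tla
ExampleHistory ≜
    ⟨ [ rid ↦ 1, typ ↦ Begin ]          \* HandleRequest: request 1 enters the inbox
    , [ rid ↦ 1, typ ↦ LP, seq ↦ 0 ]    \* CheckRole: linearizable point, seq 0 assigned
    , [ rid ↦ 2, typ ↦ Begin ]          \* HandleRequest: request 2 enters the inbox
    , [ rid ↦ 1, typ ↦ End, seq ↦ 0 ]   \* Response: request 1 returns seq 0
    , [ rid ↦ 2, typ ↦ LP, seq ↦ 1 ]    \* CheckRole: linearizable point, seq 1 assigned
    , [ rid ↦ 2, typ ↦ End, seq ↦ 1 ] ⟩ \* Response: request 2 returns seq 1
```

Every pair of LP events appears in increasing sequence-number order, so
LinearizableHistory holds for this history.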
