Typing rules in Owl and Verus to support side‐channel security
Similar to our approach to functional correctness, we provide side-channel protection in two steps. First, we use Owl to prove that only public values can be leaked via side channels; then, we use Verus to prove that the implementation only leaks what the source-level protocol is allowed to leak (that is, any leakage in the implementation is also present or permitted at the source level). We choose this split because Owl already has very fine-grained information about which values are secret and which are public, while Verus cannot reason about hyperproperties or directly perform information-flow analysis on the generated code.
In Verus, we use the Rust type system to enforce side-channel security. We wrap secret values in an opaque `SecretBuf` type, whose interface only allows operations with constant-time implementations (currently, no such operations are needed). Any other operations are protected by declassification tokens, which use machinery similar to the ITree tokens to prove that any declassification operation at the Verus level is allowed according to Owl.
Whenever a Verus operation takes secret arguments and returns a non-secret value (even an `Option<SecretBuf>`, since the `Some`/`None` distinction is observable), it is a declassifying operation and must be protected by a declassification token. Effectively, any flow from secret to public in Verus must be protected by a declassification token. Every declassification token must correspond either to a declassification in Owl (via some crypto primitive) or to an IFC proof from the Owl typechecker that the value being declassified is already public.
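To make the shape of this interface concrete, here is a minimal plain-Rust sketch. It is not the actual Verus development (which also carries ghost state and specifications), and the token type and method names are hypothetical:

```rust
/// Toy model of the opaque secret wrapper. Its public interface would expose
/// only constant-time operations; everything else must go through `declassify`.
pub struct SecretBuf {
    bytes: Vec<u8>,
}

/// Toy model of a declassification token. In the verified development this is
/// tied to the ITree token machinery; here it is just a value that cannot be
/// constructed outside this module, so callers must be handed one.
pub struct DeclassToken {
    _private: (),
}

impl SecretBuf {
    pub fn new(bytes: Vec<u8>) -> Self {
        SecretBuf { bytes }
    }

    /// The only secret-to-public flow: the caller must present a token, which
    /// (in the real development) corresponds either to a crypto-primitive
    /// declassification or to an Owl IFC proof that the value is already public.
    pub fn declassify(self, _token: DeclassToken) -> Vec<u8> {
        self.bytes
    }
}
```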
TODO: add more details about declassification tokens and declassifying operations here.
This is the "easy" case. These are needed for control flow. For example, a protocol might do the following:
name blah : nonce @ alice
corr [blah] ==> adv // blah is a public nonce
name k_blah : enckey Name(blah) @ alice, bob

def bob_main() @ bob : Unit =
    input i in
    let v_opt = dec(get(k_blah), i) in
    case v_opt {
    | Some v => {
        // here, we expect v : Name(blah)
        output v
      }
    | None => ...
    }
    ...
Here, `v` should contain the value of `blah`, but `blah` is a public nonce, so it should be allowed to output `v`. However, the type signature of `dec` in the Verus implementation is `(key: SecretBuf, ctxt: PublicBuf) -> Option<SecretBuf>`, so `v` would have type `SecretBuf` in the implementation.
On the Owl side, the typing rule for `output v` proves that `v` flows to `adv`. Therefore, during extraction, we have an IFC proof that `v` is safe to output, so it is sound for the compiler to insert `declassify(v)` before the output, generating the corresponding declassification token.
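Continuing the toy sketch above (all names hypothetical; the real extracted code goes through the ITree token machinery rather than these stubs), the emitted code would look roughly like this:

```rust
// Continuing the toy SecretBuf/DeclassToken sketch above (all names hypothetical).

/// Stand-in for the token machinery: issuing this token is justified by the
/// Owl-level IFC proof that the value being declassified is already public.
fn token_from_ifc_proof() -> DeclassToken {
    DeclassToken { _private: () }
}

/// Stand-in for the network output routine.
fn output(buf: &[u8]) {
    let _ = buf;
}

/// Roughly what the extractor emits for `output v`.
fn bob_output(v: SecretBuf) {
    let tok = token_from_ifc_proof(); // compiler-inserted, backed by the IFC proof
    let v_pub = v.declassify(tok);    // compiler-inserted declassify(v)
    output(&v_pub);                   // the original `output v`
}
```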
TODO: everywhere the compiler inserts a declassification of this kind, the typechecker must have done a corresponding IFC proof. We need to check this in some way---currently by careful manual audit, but maybe there's a way to connect the two more directly.
The harder case is the crypto operations themselves: even though they have constant-time implementations, they do declassify values, since they can have public outputs that depend on secret inputs. Consider encryption and decryption:
- Encryption declassifies because it outputs a public buffer
- Decryption declassifies because it outputs a public option type
In the compiler, we must conservatively assume that any public buffer will be leaked via a side-channel. The ITree machinery cannot stop a malicious compiler from inserting the following code:
for i in 0..public_buf.len() {
    if public_buf[i] == 0 {
        slow();
    } else {
        fast();
    }
}
Thus, any value that has type `PublicBuf` in Verus must truly be public according to Owl: the corresponding Owl term must have type `Data<adv>` (or some other type that flows to `adv`).
Now, the type signature of `enc` in Verus is `(key: SecretBuf, plaintext: SecretBuf) -> PublicBuf` (ignoring nonces, etc., for now). According to our rule about secret inputs and public outputs, `enc` needs to be gated by a declassification token. This is fine; we can add such a token in the compiler. The difficulty is that Owl currently does not guarantee that the output of `enc` is always `Data<adv>`. Consider the following case:
name m: nonce
name m': nonce
name k': enckey Name(m')
let ctxt = enc(get(k'), get(m)) in ...
// ctxt has type Data<[m] /\ [m']>
If either `m` or `m'` does not flow to `adv`, then `ctxt` does not flow to `adv`, so it is not safe to leak this value to the adversary. Since every `PublicBuf` in the implementation can leak, we cannot allow `ctxt` to have type `PublicBuf`. Another way to see this: we do not have an IFC proof that `ctxt` is public, so we cannot declassify it, yet the encryption operation is intrinsically declassifying.
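In terms of the toy sketch above (hypothetical names and a placeholder cipher, not the actual Verus API), the problem shows up in the signature of the encryption wrapper: its inputs are secret and its output is public, so it must consume a declassification token, and issuing that token is only sound when Owl can type the ciphertext as `Data<adv>`, which fails for `ctxt` here.

```rust
// Continuing the toy SecretBuf/DeclassToken sketch above (all names hypothetical).

/// The encryption wrapper is a declassifying operation: secret key and plaintext
/// in, public ciphertext out. The token it consumes asserts that treating the
/// result as public is sound, which Owl cannot prove when the ciphertext has a
/// type like Data<[m] /\ [m']> that does not flow to adv.
pub fn enc(key: &SecretBuf, plaintext: &SecretBuf, _token: DeclassToken) -> Vec<u8> {
    // Placeholder keystream XOR; stands in for a real constant-time AEAD.
    // Only the signature matters for the argument here.
    plaintext
        .bytes
        .iter()
        .zip(key.bytes.iter().cycle())
        .map(|(p, k)| p ^ k)
        .collect()
}
```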
To summarize:
- Putting the "wrong" arguments into enc or dec will declassify the wrong values.
- We need to prevent Rust from observing this---use declassification tokens.
- We need to prevent the source from doing insecure declassifications, so that the generated tokens are sound.
- We need to guarantee that the result of a declassifying crypto operation is indeed declassifiable.
Fortunately, protocols like the above are "bad", in the sense that the value `ctxt` with type `Data<[m] /\ [m']>` is generally not useful; indeed, we generated it by using the wrong key to encrypt the message. So we can simply ask the Owl typechecker to disallow these cases: they can create soundness issues in extraction, and they may not be useful in protocol modeling.
For encryption and decryption, we will have two cases for the typing rule for each primitive.
For encryption, we have the following two cases for typing `enc(k, m)`:

- Fully well-typed case: `exists t. k : enckey(t) and m : t --> enc(k, m) : Data<adv>`
- Fully public case: `k : Data<adv> and m : Data<adv> --> enc(k, m) : Data<adv>`

Under these two cases, the problematic call `enc(get(k'), get(m))` above is rejected: `m` does not have the type `Name(m')` protected by `k'`, and neither `k'` nor `m` flows to `adv`.
For decryption, we have the following two cases for typing `dec(k, c)`:

- Fully well-typed case: `exists t. k : enckey(t) and c : Data<adv> --> dec(k, c) : Option<t>`
- Fully public case: `k : Data<adv> and c : Data<adv> --> dec(k, c) : Option<Data<adv>>`
TODO: is there a general scheme we can come up with for restricting the typing rules for all the crypto primitives? This will help us with extensible crypto in Owl as well.