Workload (WASM binary) loading - connection termination #4
Comments
Thanks for writing this issue up so far. I think the implementation effort for the network connections is not the primary aspect to evaluate here. It seems that the biggest difference is the use of no session key in option 1, where the workload is encrypted directly to the CPU/firmware of the chosen host system. Could any host with the same CPU/firmware type as the one the client agent encrypted the workload for later decrypt the workload? This would be a risk if the encrypted workload were captured now and decrypted later, once the CPU/firmware suffers a security breach. This risk could be avoided by using a session key.
We're clear that the orchestrator should never be a middle-man. In any integration with the orchestrator, the orchestrator provides the Enarx client agent with the workload, data, config, etc., and the client agent manages all communications with the host agent.
There is always a session key constructed as part of the attestation step, and it's always unique to that TEE instance and therefore that Keep instance. This means that even Keeps on the same host will have different session keys. The SEV demo, for instance, shows perfect forward secrecy (PFS).
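To make that property concrete, here is a minimal Rust sketch of the flow. The type and function names are hypothetical (not the Enarx API), and the key-derivation step is a placeholder for a real ephemeral key exchange plus KDF; the point is structural. Because the session key depends on ephemeral material generated inside that specific Keep instance during attestation, two Keeps on the same host get different keys, and a ciphertext captured today cannot be decrypted later from long-lived CPU/firmware secrets alone.

```rust
// Sketch only: hypothetical names, placeholder crypto; not the Enarx implementation.

/// Fields a client agent might check from an attestation report (illustrative).
struct AttestationReport {
    /// Launch measurement of the Keep contents (microkernel, WASM runtime, etc.).
    measurement: [u8; 32],
    /// Ephemeral public value generated *inside this Keep instance* for this session.
    keep_ephemeral_public: [u8; 32],
}

/// Derive a per-Keep session key after verifying the attestation report.
/// A real implementation would use an ephemeral ECDH exchange and an HKDF here.
fn establish_session_key(
    report: &AttestationReport,
    expected_measurement: &[u8; 32],
    client_ephemeral_secret: &[u8; 32], // freshly generated per session, then discarded
) -> Result<[u8; 32], &'static str> {
    // 1. Only trust the Keep's key material if the measurement matches what we expect.
    if &report.measurement != expected_measurement {
        return Err("attestation measurement mismatch");
    }

    // 2. Combine the two ephemeral values into a session key (placeholder for ECDH + KDF).
    //    Both inputs are ephemeral, so the key is unique to this Keep instance.
    let mut session_key = [0u8; 32];
    for i in 0..32 {
        session_key[i] = client_ephemeral_secret[i] ^ report.keep_ephemeral_public[i];
    }
    Ok(session_key)
}
```

Forward secrecy comes from discarding both ephemeral secrets once the session ends, so nothing long-lived can recreate the key.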
I'm pretty sure I have a different understanding of what the orchestrator and the client agent are and do in the Enarx architecture. Is the client agent the CLI tool which is run by the tenant interactively, or is it a long-running Keep which acts on behalf of the tenant?
If you look here, you'll find a set of slides. The slides in the section "Process Flow" give an indication of the various components. The Enarx client agent may be a daemon or a library (tbd), and is trusted by the tenant. Assuming that you're running it on a trusted machine, there's no need for it to be in a Keep (though that's possible, with appropriate boot-strapping). You may have a CLI or an orchestrator (OpenShift, OpenStack, etc.) talking to it.
The boot-strapping case you mention is part of what I was wondering about. The other part is that we'll probably have a CLI for controlling the orchestrator.
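One way to picture the split described above is a single narrow client-agent interface that either an interactive CLI or an orchestrator integration drives, while the client agent alone talks to the host agent. This is a hedged sketch with assumed names (`ClientAgent`, `DeployConfig`, `KeepHandle`), not the Enarx API:

```rust
// Sketch only: hypothetical trait and type names, not the Enarx API.

/// What a caller (CLI or orchestrator integration) asks the client agent to do.
/// The orchestrator supplies workload and config but never sits in the data path
/// between the client agent and the host agent.
trait ClientAgent {
    /// Attest the target host, establish a session key, and deploy the workload.
    fn deploy(&mut self, workload: &[u8], config: &DeployConfig) -> Result<KeepHandle, String>;
}

/// Deployment parameters supplied by the tenant (via CLI) or by the orchestrator.
struct DeployConfig {
    host_address: String,
    expected_measurement: [u8; 32],
}

/// Opaque handle to a running Keep, returned to whichever front end made the request.
struct KeepHandle {
    id: u64,
}
```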
@npmccallum notes:
The tenant should always be the party to decide whether caching is turned on. Not all instances need to support caching.
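If that policy is captured in configuration, it might look something like the following (purely illustrative field names, not an actual Enarx config): the flag lives in tenant-controlled deployment config, and a host that does not implement caching simply never caches.

```rust
// Sketch only: hypothetical config shape; caching is opt-in and tenant-controlled.
struct WorkloadPolicy {
    /// Set by the tenant; a host that does not support caching must not cache.
    allow_workload_caching: bool,
}
```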
Written up as an RFC draft. Under review: #2
Move to the rfcs project?
Indeed, moving this issue. |
Once a Keep is prepped, with microkernel, WASM runtime, etc., the workload (WASM binary) needs to be loaded into it. There are three ways this could be managed; which is our plan?
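For reference, a minimal sketch of what the loading step could look like under the session-key model discussed above (hypothetical names, and a placeholder cipher standing in for a real AEAD such as AES-GCM or ChaCha20-Poly1305, not the Enarx wire format): the client agent seals the WASM binary under the per-Keep session key, the host agent relays the opaque ciphertext, and only the Keep can unseal it and hand it to the runtime.

```rust
// Sketch only: hypothetical names, placeholder cipher; not the Enarx wire format.

/// Ciphertext container the host agent relays without being able to read it.
struct SealedWorkload {
    ciphertext: Vec<u8>,
}

/// Client-agent side: seal the WASM binary under the per-Keep session key.
/// The XOR below stands in for a real AEAD cipher.
fn seal_workload(wasm: &[u8], session_key: &[u8; 32]) -> SealedWorkload {
    let ciphertext = wasm
        .iter()
        .enumerate()
        .map(|(i, b)| b ^ session_key[i % 32])
        .collect();
    SealedWorkload { ciphertext }
}

/// Keep side: unseal the binary, then hand it to the in-Keep WASM runtime.
fn unseal_workload(sealed: &SealedWorkload, session_key: &[u8; 32]) -> Vec<u8> {
    sealed
        .ciphertext
        .iter()
        .enumerate()
        .map(|(i, b)| b ^ session_key[i % 32])
        .collect()
}
```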