
# Machine learning (ML) inference at the edge

It's here!

This targets wasm32-wasi for Fastly's Compute@Edge. It uses an external tool, wasm-opt, to squeeze, among other things, an ML inference engine into a roughly 35 MB wasm binary.
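As a rough sketch, a size-focused wasm-opt pass over the compiled guest might look like the following; the input and output paths here are illustrative, not this repo's actual build layout:

```sh
# Hypothetical paths; -Oz tells wasm-opt to optimize aggressively for code size.
wasm-opt -Oz bin/main.wasm -o bin/main.optimized.wasm
```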

This demo showcases image classification using a top-tier MobileNetV2 checkpoint. Because tract is used under the hood, the deployed TensorFlow Lite model can be swapped for another, including models in open interchange formats (ONNX / NNEF).
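For a sense of what such a swap involves, here is a minimal sketch of loading and running an ONNX MobileNetV2 with tract, adapted from tract's documented ONNX example; the file name, input shape, and dummy input are assumptions, not this repo's actual code:

```rust
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load an ONNX MobileNetV2 (hypothetical file name) and pin its input
    // to a single 224x224 RGB image.
    let model = tract_onnx::onnx()
        .model_for_path("mobilenetv2.onnx")?
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // Run inference on a dummy all-zero image tensor.
    let input = Tensor::zero::<f32>(&[1, 3, 224, 224])?;
    let result = model.run(tvec!(input.into()))?;

    // The output is a score per class; report the index with the highest score.
    let best = result[0]
        .to_array_view::<f32>()?
        .iter()
        .cloned()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    println!("top class: {:?}", best);
    Ok(())
}
```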

This demo was created to push the boundaries of the platform and inspire new ideas.

## Publishing end-to-end

Using the Fastly CLI, publish the root package and note the [funky-domain].edgecompute.app:

```sh
fastly compute publish
```

Update L54 in docs/script.js with the [funky-domain].edgecompute.app you just noted, then publish the static demo site separately:

```sh
cd static-host
fastly compute publish
```