# Machine learning (ML) inference at the edge

It's here!

This project targets wasm32-wasi for Fastly's Compute@Edge. It uses an external tool, wasm-opt (from the Binaryen toolchain), to squeeze, among other things, an ML inference engine into a roughly 35 MB wasm binary.
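A minimal sketch of what the Compute@Edge entry point can look like with the fastly Rust crate; the /classify route and the classify helper are hypothetical stand-ins, not this repo's actual handler:

```rust
use fastly::http::{Method, StatusCode};
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Hypothetical route: accept a raw image upload and classify it.
    if req.get_method() == &Method::POST && req.get_path() == "/classify" {
        let image_bytes = req.into_body_bytes();
        // classify() stands in for the embedded inference call
        // (see the tract sketch below).
        let label = classify(&image_bytes);
        return Ok(Response::from_status(StatusCode::OK).with_body_text_plain(&label));
    }
    Ok(Response::from_status(StatusCode::NOT_FOUND).with_body_text_plain("not found\n"))
}

// Placeholder: the real implementation decodes the image and runs the model.
fn classify(_image: &[u8]) -> String {
    "tabby".to_string()
}
```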

This demo showcases image classification using a pre-trained MobileNetV2 checkpoint. Because tract provides the inference engine under the hood, the deployed TensorFlow Lite model can be swapped for another, including models in open interchange formats (ONNX, NNEF).
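As a sketch of what swapping in an ONNX model involves, here is tract's canonical load-and-run pattern, adapted from the tract-onnx example; the model path, fixed input shape, and dummy input below are assumptions, not this repo's code:

```rust
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load an ONNX MobileNetV2 (path assumed) and pin its input to the
    // standard 1x3x224x224 NCHW float tensor before optimizing.
    let model = tract_onnx::onnx()
        .model_for_path("mobilenetv2.onnx")?
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // Dummy input; a real caller would decode, resize and normalize an image.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();

    // Run inference and report the highest-scoring class index.
    let result = model.run(tvec!(input.into()))?;
    let best = result[0]
        .to_array_view::<f32>()?
        .iter()
        .cloned()
        .zip(0..)
        .max_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    println!("best class (score, index): {:?}", best);
    Ok(())
}
```

The NNEF path is analogous via the tract-nnef crate, which is one reason the deployed model is easy to swap.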

This demo was created to push the boundaries of the platform and inspire new ideas.

## Publishing end-to-end

Using the Fastly CLI, publish the root package and note the [funky-domain].edgecompute.app domain it is assigned:

```sh
fastly compute publish
```

Update line 54 of docs/script.js with the [funky-domain].edgecompute.app domain you just noted, then publish the static demo site separately:

```sh
cd static-host
fastly compute publish
```
