Run ONNX models in the browser with WebNN. The developer preview unlocks interactive ML on the web that benefits from reduced latency, enhanced privacy and security, and GPU acceleration from DirectML.
WebNN Developer Preview website.
NOTE: Currently, the supported platforms are Edge/Chromium (support for other platforms is coming soon).
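To give a rough idea of what the demos do under the hood, the sketch below loads an ONNX model with onnxruntime-web and requests the WebNN execution provider. It is only an illustrative snippet, not code from this repo: the model path, input name, and tensor shape are hypothetical placeholders.

```js
// Minimal sketch: run an ONNX model in the browser via the WebNN execution provider.
// "model.onnx", the input name "input", and the shape are hypothetical placeholders.
import * as ort from "onnxruntime-web";

async function runOnce() {
  // Ask ONNX Runtime Web for the WebNN EP, falling back to WASM if WebNN is unavailable.
  const session = await ort.InferenceSession.create("./model.onnx", {
    executionProviders: [{ name: "webnn", deviceType: "gpu" }, "wasm"],
  });

  // Build a dummy float32 input tensor matching the (assumed) model input shape.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const feeds = { input: new ort.Tensor("float32", data, [1, 3, 224, 224]) };

  const results = await session.run(feeds);
  console.log(results);
}

runOnce();
```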
The website provides four scenarios based on different ONNX pre-trained deep learning models.
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
This Stable Diffusion 1.5 model has been optimized to work with WebNN. This model is licensed under the CreativeML Open RAIL-M license. For terms of use, please visit here. If you comply with the license and terms of use, you have the rights described therein. By using this Model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
SD-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. In the demo, you can generate an image in about 2 seconds on AI PC devices by leveraging the WebNN API, a dedicated low-level API for neural network inference hardware acceleration.
This Stable Diffusion Turbo model has been optimized to work with WebNN. This model is licensed under the STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. For terms of use, please visit the Acceptable Use Policy. If you comply with the license and terms of use, you have the rights described therein. By using this Model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
Segment Anything is a new AI model from Meta AI that can "cut out" any object. In the demo, you can segment any object from your uploaded images.
This Segment Anything Model has been optimized to work with WebNN. This model is licensed under the Apache-2.0 License. For terms of use, please visit the Code of Conduct. If you comply with the license and terms of use, you have the rights described therein. By using this Model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
Whisper Base is a pre-trained model for automatic speech recognition (ASR) and speech translation. In the demo, you can experience speech-to-text using on-device inference powered by the WebNN API and DirectML, including NPU acceleration.
This Whisper-base model has been optimized to work with WebNN. This model is licensed under the Apache-2.0 license. For terms of use, please visit the Intended use. If you comply with the license and terms of use, you have the rights described therein. By using this Model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
MobileNet and ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes.
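Several of the demo descriptions above refer to the WebNN API as a dedicated low-level API for neural network inference. The sketch below shows what that graph-builder API looks like in isolation; it is only illustrative, with arbitrary shapes and values, and the exact method names and descriptor fields vary with browser version (see the breaking-changes list further down).

```js
// Illustrative use of the low-level WebNN API (subject to spec changes; see the
// breaking-changes list below). Shapes and values are arbitrary examples.
async function addTwoTensors() {
  const context = await navigator.ml.createContext({ deviceType: "gpu" });
  const builder = new MLGraphBuilder(context);

  // Describe two 1x4 float32 inputs and an element-wise add.
  // Older builds used "dimensions" instead of "shape" in the descriptor.
  const desc = { dataType: "float32", shape: [1, 4] };
  const a = builder.input("a", desc);
  const b = builder.input("b", desc);
  const c = builder.add(a, b);

  // build() may only be called once per builder (recent spec change).
  const graph = await builder.build({ c });

  // Older Chromium builds execute with compute(); newer ones replace it with
  // dispatch() and MLTensor objects.
  const results = await context.compute(
    graph,
    { a: new Float32Array([1, 2, 3, 4]), b: new Float32Array([5, 6, 7, 8]) },
    { c: new Float32Array(4) }
  );
  console.log(results.outputs.c);
}
```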
cd webnn-developer-preview
npm install
WebNN installation guides
WebNN requires a compatible browser and Windows 11, version 21H2 (DML 1.6.0) or higher.
- Download the latest Microsoft Edge Canary browser.
- To enable WebNN, enter `about://flags` in your browser address bar, and then press `Enter`. An Experiments page opens.
- In the Search flags box, enter `webnn`. "Enables WebNN API" appears.
- In the drop-down menu, select `Enabled`.
- Relaunch your browser.
Run the website on localhost
npm run dev
This starts a dev server and serves the WebNN Developer Preview demos on localhost in your WebNN-enabled browser.
If you are running the demos for the first time, run the following command to download the required models. You can also modify fetch_models.js to add a network proxy configuration if needed.
npm run fetch-models
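If the model downloads need to go through a proxy, the exact change to fetch_models.js depends on how that script issues its requests; the snippet below is only a hypothetical illustration using the https-proxy-agent package and an HTTPS_PROXY environment variable, not the script's actual code.

```js
// Hypothetical proxy-configuration sketch; fetch_models.js may be structured differently.
import https from "node:https";
import { HttpsProxyAgent } from "https-proxy-agent";

const proxyUrl = process.env.HTTPS_PROXY; // e.g. "http://proxy.example.com:8080"
const agent = proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined;

// Pass the agent to whatever download call the script makes, e.g. https.get.
https.get("https://example.com/model.onnx", { agent }, (res) => {
  console.log(`status: ${res.statusCode}`);
});
```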
WebNN is a living specification and still subject to breaking changes, which may impact the samples depending on your browser version. The following are recent breaking changes:
- 2024-10-29 Convert `MLOperand` methods into readonly attributes - spec change, Chromium change
- 2024-09-30 (pending breaking change) Replace `MLContext.compute()` with `MLContext.dispatch()` - spec change, Chromium change, ORT change, sample change
- 2024-09-24 Make `MLOperandDescriptor.shape` a required dictionary member - spec change, Chromium change
- 2024-09-17 Rename `MLOperandDescriptor`'s "dimensions" key to "shape" - spec change, Chromium change, Chromium change 2
- 2024-07-24 `MLContextOptions::MLPowerPreference` rename `auto` to `default` - Chromium change
- 2024-07-24 Allow `MLGraphBuilder.build()` to be called only once - spec change, Chromium change, ORT change, sample change
- 2024-07-22 `LSTM`/`GRU` activation enum `MLRecurrentNetworkActivation` - spec change, Chromium change
- 2024-07-22 `argMin`/`argMax` change to take scalar `axis` parameter - spec change, Chromium change
- 2024-07-15 `argMin`/`argMax` add `outputDataType` parameter - spec change, Chromium change, sample change
- 2024-06-12 `softmax` axis argument - spec change, Chromium change
- 2024-06-07 Remove incompatible `MLActivations` for recurrent ops - spec change, Chromium change, baseline change
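As a concrete example of what such a change means for sample code, the 2024-09 descriptor updates renamed the `dimensions` key to `shape` and made it required; the hedged before/after sketch below illustrates the update (exact behavior depends on your browser build).

```js
// Sketch of the 2024-09 MLOperandDescriptor change: the "dimensions" key was
// renamed to "shape" and became required. Exact behavior depends on browser version.
const builder = new MLGraphBuilder(await navigator.ml.createContext());

// Before (older Chromium builds):
// const x = builder.input("x", { dataType: "float32", dimensions: [2, 2] });

// After (current spec):
const x = builder.input("x", { dataType: "float32", shape: [2, 2] });
```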
| Model | Known compatible Chromium version |
| --- | --- |
| Segment Anything | 129.0.6617.0 |
| Stable Diffusion Turbo | 129.0.6617.0 |
| Stable Diffusion 1.5 | 129.0.6617.0 |
| Whisper Base | 129.0.6617.0 |
| ResNet50 | 129.0.6617.0 |
| MobileNet V2 | 129.0.6617.0 |
| EfficientNet Lite4 | 129.0.6617.0 |
You can check the version via "about://version" in the address bar. In Chrome, look for "Google Chrome"; in Edge, look for "Chromium version".
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.