Building wasm to run in AudioWorklet environment which is controlled by JS #20581
Comments
cc @juj for thoughts.
I think we'll keep supporting that for the foreseeable future. It is important to be able to test in JS environments like the V8 and SpiderMonkey shells, as the very latest engine features are there.
I think it might be better to extend the

@kripken I believe the problem with
This captures the problem well.
The needed changes could be expanded to a

If you want to contribute a test for that, I don't see why it couldn't be added as a tested configuration in the CI.
Alternatively, we could take the command line
Do we really want to support yet another build mode though? @juj did you think it makes sense to support running in audio worklets without
Originally I did not anticipate Audio Worklets without Shared Memory (i.e.

My impression of what they mean by that is that these developers would like to use Emscripten only to write the Audio Worklet audio processor node code in C/C++ for the better Wasm performance than JS provides, and otherwise they would not be using Emscripten/Wasm at all for any other parts of their web page. (This is a bit of a guess at this point, I am not 100% sure.)

I.e. the main page would be in JS, they are not using Emscripten to develop their page, and all the Emscripten-generated code would only run inside an Audio Worklet. Their own web site would then manage the message passing between the worklet and their main page.

This would not necessarily be a full new configuration to test and maintain, since large parts of it would be shared by the existing AUDIO_WORKLET code. That is, we already must support loading the generated .js inside a Wasm Audio Worklet scope, so this kind of

I think it probably would work out already in existing codegen by running that

So if that is useful to some use cases, I don't see why we couldn't drop a test for that scenario in the browser suite. It will probably need someone to champion an example of a compelling use case of how exactly they would like their site integration with Emscripten worklets to work in such a scenario.
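To make that scenario concrete, here is a minimal sketch of what the main-page side of such an integration could look like. Every name in it (`synth-worklet.js`, `synth-processor`, the message shape) is hypothetical; this is not code from the issue or from Emscripten.

```js
// main.js -- a plain JS page that only uses Emscripten inside the worklet.
// (Assumes a module script so top-level await is allowed.)
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('synth-worklet.js'); // bundles the Emscripten output

const node = new AudioWorkletNode(ctx, 'synth-processor');
node.connect(ctx.destination);

// The page itself handles the message passing with the worklet; no shared memory.
node.port.postMessage({ type: 'set-frequency', value: 440 });
node.port.onmessage = (e) => console.log('from worklet:', e.data);
```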
Sounds reasonable to me. Would anyone like to volunteer to create a simple / minimal test case for this in the browser test suite? I think
Are you looking for something different than, e.g., this? What I'm doing right now is essentially to build with
Using https://github.com/GoogleChromeLabs/web-audio-samples/tree/main/src/audio-worklet/design-pattern/wasm/ sounds reasonable to me.
The main thing is to keep the test as simple as possible.
Thanks, this would be a good start. I posted a draft PR at #20630 that shows how that code could be integrated into the interactive browser test suite in Emscripten, and the existing

It is not complete, and I am afraid I might not have the time to finish working on this, but I wanted to poke into this mainly to see how well it could integrate.

I would be curious to know how many people are more interested in using such a
A few folks who are interested in this topic just had a discussion:

Also, re: juj@
Yes. Many existing audio apps are using vanilla JS for their UI/control logic. Using a compiled WASM blob for sound synthesis/processing on the WebAudio render thread is definitely an established pattern.
I don't have expertise on this project, but would like to support/help. I glanced at the change and am happy to see that one of the existing examples from our repository is being used as a test case.
I don't know the general use of wasm in audio worklets, but in my case there are likely to be other wasm binaries doing other things as well, though they may be independent from the audio worklet wasm and have no need to share memory.
Another peculiarity of Audio Worklets is that performance.now() is undefined in the AudioWorkletGlobalScope, but when building for the SHELL environment Emscripten implements some functionality using performance.now(), so the user has to apply a workaround (a polyfill, or changing the code to avoid calls to performance.now()). Adding an AUDIO_WORKLET build option that doesn't enforce WASM_WORKERS would also be useful to fix this issue.
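For illustration, one such workaround (a sketch, not something Emscripten ships) is to polyfill performance.now() from the scope's own currentTime before the generated code runs:

```js
// Run inside the AudioWorkletGlobalScope before the Emscripten-generated code.
// currentTime (in seconds) is provided by the AudioWorkletGlobalScope itself.
if (typeof performance === 'undefined') {
  globalThis.performance = {
    now: () => currentTime * 1000, // milliseconds; coarse (advances per render quantum)
  };
}
```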
I have a use-case where I want to run a wasm module from within an audio worklet in a similar way to what's described here:
https://developer.chrome.com/blog/audio-worklet-design-pattern/#setting-up
This relies on the environment being detected as SHELL, which is a bit odd. See also #6230 (comment).
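For reference, the processor side of that pattern looks roughly like the sketch below. The file names, the exported _set_frequency function, and the build flags are my own assumptions for illustration, not the exact sample code.

```js
// synth-worklet.js -- loaded with audioWorklet.addModule() on the main page.
// Assumes the Emscripten glue was built with -sMODULARIZE -sEXPORT_ES6 and,
// as described above, -sENVIRONMENT=shell so it can run in this scope.
import createModule from './synth.js'; // hypothetical Emscripten output

class SynthProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.module = null;
    createModule().then((m) => { this.module = m; });
    this.port.onmessage = (e) => {
      // Forward control messages from the page into wasm; _set_frequency is
      // a hypothetical exported C function.
      if (this.module && e.data.type === 'set-frequency') {
        this.module._set_frequency(e.data.value);
      }
    };
  }

  process(inputs, outputs) {
    // Rendering one 128-frame quantum into outputs[0] from wasm memory is
    // omitted for brevity.
    return true; // keep the node alive
  }
}
registerProcessor('synth-processor', SynthProcessor);
```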
Now there's an AUDIO_WORKLET environment available, but this seems to be heavily tied to the Wasm Audio Worklets API and in turn Wasm Workers, and doesn't allow for the pattern described above. A problem with Wasm Audio Worklets is that they rely on shared memory, which may not always be available.
https://emscripten.org/docs/api_reference/wasm_audio_worklets.html#wasm-audio-worklets-api
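(For context, shared memory only works when the page is served cross-origin isolated, so its availability can be feature-detected up front; a small sketch:)

```js
// Shared memory (and thus Wasm Audio Worklets / WASM_WORKERS) requires a
// cross-origin isolated page where SharedArrayBuffer is exposed.
const canUseSharedMemory =
  typeof SharedArrayBuffer !== 'undefined' &&
  globalThis.crossOriginIsolated === true;
```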
It would be nice if it was possible to build for and specify the audio worklet environment without having to use the Wasm Audio Worklets API, and without having to specify the SHELL environment, which, I guess, may not be guaranteed to work indefinitely?