Any particular reason to avoid a NIF in the first place? I haven't benchmarked it, but I'd expect the ser/deser plus the packet_proc_loop to have much higher overhead than a NIF call.
Also, now you have rogue threads fighting for OS cores. In the NIF case, a process on an Erlang scheduler would drive a V8 instance when appropriate.
I assume managing the V8 contexts (and driving the JS event loop) is possible from Erlang? The Google engineers didn't make some ridiculous JS implementation that demands an event thread be dedicated solely to it?
The reason it's implemented as an OS process is that the facilities for exiting the VM from an outside thread weren't reliable or fast enough in earlier versions. That is no longer the case, and implementing it as a NIF is on the roadmap. Thanks for the input, and sorry for the belated reply!
Also, aren't long-running NIFs discouraged because they block the scheduler, or has that been fixed/changed?
You can use the dirty schedulers for that now.
However, I'd vote against running V8 directly as a NIF. It may bring the whole BEAM VM down if something goes wrong. Ports give safety against all kinds of crashes in V8.
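To make the isolation point concrete, here is a minimal sketch of the port-based setup. The module name, executable path, and 4-byte packet framing are assumptions for illustration, not this project's actual configuration: the point is only that a crash inside the external V8 process arrives as an `exit_status` message, so the BEAM keeps running and a supervisor can restart the port.

```erlang
%% Minimal sketch of the port-isolation argument. Path, module name,
%% and framing are assumptions, not this project's real settings.
-module(v8_port_sketch).
-export([start/0]).

start() ->
    Port = open_port({spawn_executable, "/usr/local/bin/erlang_v8"},
                     [{packet, 4}, binary, exit_status]),
    loop(Port).

loop(Port) ->
    receive
        {Port, {data, Reply}} ->
            %% Normal response from the external V8 process.
            io:format("reply: ~p~n", [Reply]),
            loop(Port);
        {Port, {exit_status, Status}} ->
            %% A segfault or abort inside V8 surfaces here as a message;
            %% the BEAM stays up and the port can simply be restarted.
            {error, {v8_exited, Status}}
    end.
```

A NIF gives none of this containment by default: a bad pointer dereference inside V8 would take the whole node with it, which is the trade-off being weighed against the serialization overhead mentioned above.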
Sweet project!