This has been a blue-sky feature for sdl_exp for a while; recently Paul suggested we propose a small project for an undergraduate to work towards it.
Users without a GPU may be running their simulations on remote headless (Linux) machines (probably HPC). We also run tutorials/workshops with a cloud backend (e.g. controlled via a Jupyter notebook). It would be beneficial for these users if there were a way for them to visualise their models in real time.
There are 3 different levels at which this could be completed, and it would make sense to investigate and implement them in the following order:
Remote Video: Rendering a visualisation on a headless machine directly to a video file.
Streaming Video: Streaming a visualisation so that it can be received in real-time on a different machine.
Remote Visualisation: Streaming visualisation, with the ability to interact (e.g. move the camera).
Remote Video
Directly pass framebuffers to NVEnc to encode them using H.265(?) and store the resulting video stream to file.
It's likely an additional step will be required to mux the raw video stream into a suitable container. This could be as simple as running it through ffmpeg (LGPL)/mkvmerge (GPL) to produce an .mp4/.mkv. It's not clear whether we would need to automatically mux the raw H.265 streams, as we might leave this stage as a form of debugging (VLC can play raw .h265 files?).
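The muxing step above could be as small as a single ffmpeg invocation; a minimal sketch, where the filenames are hypothetical and an LGPL ffmpeg build is assumed to be on PATH (wrapped in a function so nothing runs until it is called):

```shell
# Sketch: mux a raw HEVC elementary stream into an .mp4 container.
# -c copy rewraps the stream without re-encoding it.
# Filenames are hypothetical placeholders.
mux_h265_to_mp4() {
  # $1 = raw .h265 input, $2 = .mp4 output
  ffmpeg -i "$1" -c copy "$2"
}

# Usage (run where ffmpeg is available):
#   mux_h265_to_mp4 visualisation.h265 visualisation.mp4
```

Because no re-encode happens, this should be near-instant even for long captures.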
It's worth noting that I previously investigated use of NVEnc and found that its OpenGL integration is only supported on Linux, so this feature would only be available to Linux builds of the flamegpu2 visualiser, unless use of CUDA-OpenGL interop enabled use of the cross-platform CUDA-NVEnc support.
Streaming Video
The next challenge is to stream the video in real time; there are a couple of options for this:
Integrate with a dedicated RTMP(GPL), HLS, WEBRTC(1|2|3) streaming server library
Integrate with OBS project (GPL)?
There may be other options I haven't considered.
Initially, streaming to Twitch or YouTube should suffice as a proof of concept, but a final solution would be better served by a dedicated client or self-hosted webpage to automate the connection to some degree.
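For the proof of concept, pushing the stream to an RTMP ingest could again be done with ffmpeg; a hedged sketch, where the ingest URL, STREAM_KEY, and input filename are all placeholders, and an ffmpeg build with libx264 is assumed (FLV, the usual RTMP container, does not carry HEVC, hence the re-encode to H.264):

```shell
# Sketch: re-encode a raw HEVC stream to H.264 and push it to an RTMP
# ingest (e.g. Twitch) as a proof of concept. All names are placeholders.
# -re paces reading at native frame rate; -f flv selects the RTMP container.
stream_to_twitch() {
  ffmpeg -re -i visualisation.h265 -c:v libx264 -preset veryfast -f flv \
    "rtmp://live.twitch.tv/app/STREAM_KEY"
}
```

A production path would likely skip the software re-encode and negotiate a codec the target protocol supports directly from NVEnc.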
For the special case of streaming from HPC, this may provide some useful info: https://rse.shef.ac.uk/blog/2019-01-31-ssh-forwarding/ Due to the dependencies required, it would likely need to be packaged into a (Singularity) container.
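The forwarding described in the linked post boils down to an SSH tunnel; a sketch, where the user, host, and port 8080 are hypothetical placeholders (wrapped in a function so nothing connects here):

```shell
# Sketch: forward localhost:8080 on the workstation to port 8080 on an
# HPC login node, where the stream would be served. -N opens the tunnel
# without running a remote command. All names are placeholders.
forward_stream_port() {
  ssh -N -L 8080:localhost:8080 user@hpc-login.example.ac.uk
}
```

If the stream is served from a compute node rather than the login node, a second hop (or ProxyJump) would be needed, as the blog post discusses.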
Streaming Visualisation
Providing control of the visualisation remotely requires the client to send back data to the flamegpu2 visualisation. The implementation of this will depend on how the video is being received. Supporting this is a rather low priority, as FLAMEGPU2 does allow a user to specify the initial camera configuration via the visualisation config.
Relevant links Pete/Paul found related to headless rendering; it's starting to seem like it might not require as many changes to the code as initially presumed.
Can confirm that the offscreen video init target is not available in the libsdl2 package from Ubuntu 20.04, so SDL will need to be built at configure time (maybe only if offscreen support is desired via our CMake, and/or SDL2 could not be found; not sure how long libsdl2 takes to build).
I.e. by inserting the following into the top of Visualiser::init():
if (SDL_VideoInit("offscreen") < 0) {
    SDL_Log("Couldn't initialize the offscreen video driver: %s\n", SDL_GetError());
    return SDL_FALSE;
}
which outputs:
INFO: Couldn't initialize the offscreen video driver: offscreen not available
Note: the snippet above is taken from the test defined in the SDL commit referenced above (hence the SDL_FALSE return value).
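Since the packaged libsdl2 lacks the offscreen driver, a source build would be needed; a sketch of what the configure-time step might look like, where the tag release-2.0.14 and the CMake option name VIDEO_OFFSCREEN are assumptions to verify against the SDL2 version actually fetched (wrapped in a function so nothing is cloned or built here):

```shell
# Sketch: build SDL2 from source with the offscreen video driver enabled.
# Tag and option name are assumptions; check `cmake -LH` for the real
# option in the SDL2 version in use.
build_sdl2_offscreen() {
  git clone --depth 1 --branch release-2.0.14 \
    https://github.com/libsdl-org/SDL.git sdl2-src
  cmake -S sdl2-src -B sdl2-src/build -DVIDEO_OFFSCREEN=ON
  cmake --build sdl2-src/build --parallel
}
```

If this were folded into our CMake, it would presumably only run when offscreen support is requested and/or a suitable system SDL2 could not be found, as noted above.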