
Commit

Ready for review
Signed-off-by: Jean-Yves <[email protected]>
docjyJ committed Dec 20, 2024
1 parent 0fdd137 commit a249069
Showing 7 changed files with 29 additions and 34 deletions.
2 changes: 1 addition & 1 deletion compose.yaml
@@ -30,7 +30,7 @@ services:
# NEXTCLOUD_ADDITIONAL_APKS: imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
# NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS: imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
# NEXTCLOUD_ENABLE_DRI_DEVICE: true # This allows to enable the /dev/dri device in the Nextcloud container. ⚠️⚠️⚠️ Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
- # NEXTCLOUD_NVIDIA_GPU_MODE: 'runtime' # 'runtime' or 'deploy': This allows to enable the [NVIDIA runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [GPU access](https://docs.docker.com/compose/gpu-support/) in the Nextcloud container. Make sure you follow the instructions before setting this value. If you're using WSL2 and want to use the NVIDIA runtime, please follow the instructions to [install the NVIDIA Container Toolkit meta-version in WSL](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2).
+ # NVIDIA_GPU_MODE: true # This allows to enable the [NVIDIA runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [GPU access](https://docs.docker.com/compose/gpu-support/) in the Nextcloud container. Make sure you follow the instructions before setting this value. If you're using WSL2 and want to use the NVIDIA runtime, please follow the instructions to [install the NVIDIA Container Toolkit meta-version in WSL](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2).
# NEXTCLOUD_KEEP_DISABLED_APPS: false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
# SKIP_DOMAIN_VALIDATION: false # This should only be set to true if things are correctly configured. See https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-skip-the-domain-validation
# TALK_PORT: 3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
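For context, this is roughly how the renamed flag would be enabled in the AIO compose file. This is an illustrative sketch, not part of the commit: the service name and `environment:` layout are assumed from the standard AIO compose.yaml, and the variable is shown uncommented with the new name.

```yaml
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    environment:
      # Replaces the old NEXTCLOUD_NVIDIA_GPU_MODE: 'runtime'/'deploy' modes
      # with a single boolean flag (Compose passes it to PHP as the string "true").
      NVIDIA_GPU_MODE: true
```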
1 change: 0 additions & 1 deletion php/public/index.php
@@ -126,7 +126,6 @@
'nextcloud_memory_limit' => $configurationManager->GetNextcloudMemoryLimit(),
'is_dri_device_enabled' => $configurationManager->isDriDeviceEnabled(),
'is_nvidia_runtime_enabled' => $configurationManager->isNvidiaRuntimeEnabled(),
- 'is_nvidia_gpu_deploy_enabled' => $configurationManager->isNvidiaDeployEnabled(),
'is_talk_recording_enabled' => $configurationManager->isTalkRecordingEnabled(),
'is_docker_socket_proxy_enabled' => $configurationManager->isDockerSocketProxyEnabled(),
'is_whiteboard_enabled' => $configurationManager->isWhiteboardEnabled(),
5 changes: 4 additions & 1 deletion php/src/ContainerDefinitionFetcher.php
@@ -249,7 +249,10 @@ private function GetDefinition(): array
$devices = $entry['devices'];
}

-        $enable_gpu = $entry['enable_nvidia_gpu'] === true;
+        $enable_gpu = false;
+        if (is_bool($entry['enable_nvidia_gpu'])) {
+            $enable_gpu = $entry['enable_nvidia_gpu'];
+        }

$capAdd = [];
if (isset($entry['cap_add'])) {
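The change in `ContainerDefinitionFetcher.php` tightens the parsing of `enable_nvidia_gpu`: only a genuine boolean enables the GPU, instead of a loose comparison. A minimal Python sketch of the same idea (an illustrative re-implementation, not the project's PHP; the `parse_enable_gpu` helper name is made up here, and unlike the PHP it also tolerates a missing key):

```python
def parse_enable_gpu(entry: dict) -> bool:
    # Only a real boolean counts; strings like "true" or a missing
    # key fall back to False, mirroring the is_bool() check in the commit.
    value = entry.get("enable_nvidia_gpu")
    return value if isinstance(value, bool) else False

print(parse_enable_gpu({"enable_nvidia_gpu": True}))    # True
print(parse_enable_gpu({"enable_nvidia_gpu": "true"}))  # False: not a real boolean
print(parse_enable_gpu({}))                             # False: key absent
```

The point of the stricter check is that a YAML value like `"true"` (a string) no longer silently enables the GPU path.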
10 changes: 3 additions & 7 deletions php/src/Data/ConfigurationManager.php
@@ -984,18 +984,14 @@ public function isDriDeviceEnabled() : bool {
}

private function GetEnabledGPUMode() : string {
-        $envVariableName = 'NEXTCLOUD_NVIDIA_GPU_MODE';
-        $configName = 'nextcloud_nvidia_gpu_mode';
+        $envVariableName = 'NVIDIA_GPU_MODE';
+        $configName = 'nvidia_gpu_mode';
$defaultValue = '';
return $this->GetEnvironmentalVariableOrConfig($envVariableName, $configName, $defaultValue);
}

public function isNvidiaRuntimeEnabled() : bool {
-        return $this->GetEnabledGPUMode() === 'runtime';
-    }
-
-    public function isNvidiaDeployEnabled() : bool {
-        return $this->GetEnabledGPUMode() === 'deploy';
+        return $this->GetEnabledGPUMode() === 'true';
}

private function GetKeepDisabledApps() : string {
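`GetEnabledGPUMode()` resolves the value from the environment variable or the stored config, and `isNvidiaRuntimeEnabled()` now simply compares it against the string `'true'`. A Python sketch of that resolution (illustrative only; the env-beats-config-beats-default precedence is an assumption based on the name of `GetEnvironmentalVariableOrConfig`, and the function names here are made up):

```python
import os

def get_env_or_config(env_name: str, config: dict, config_name: str, default: str) -> str:
    # Environment variable wins, then the stored config value, then the default.
    value = os.environ.get(env_name, "")
    if value != "":
        return value
    return config.get(config_name, "") or default

def is_nvidia_runtime_enabled(config: dict) -> bool:
    # After this commit the mode is a plain flag: the string 'true' enables it.
    return get_env_or_config("NVIDIA_GPU_MODE", config, "nvidia_gpu_mode", "") == "true"

os.environ["NVIDIA_GPU_MODE"] = "true"
print(is_nvidia_runtime_enabled({}))  # True
```

Note that any other value, including the old `'runtime'` and `'deploy'` strings, now disables the NVIDIA path.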
21 changes: 9 additions & 12 deletions php/src/Docker/DockerActionManager.php
@@ -491,18 +491,15 @@ public function CreateContainer(Container $container) : void {
$requestBody['HostConfig']['Devices'] = $devices;
}

-        if ($container->CanUseNidiaGpu()) {
-            if ($this->configurationManager->isNvidiaRuntimeEnabled()) {
-                $requestBody['HostConfig']['Runtime'] = 'nvidia';
-            } elseif ($this->configurationManager->isNvidiaDeployEnabled()) {
-                $requestBody['HostConfig']['DeviceRequests'] = [
-                    [
-                        "Driver" => "nvidia",
-                        "Count" => 1,
-                        "Capabilities" => [["gpu"]],
-                    ]
-                ];
-            }
+        if ($container->CanUseNidiaGpu() && $this->configurationManager->isNvidiaRuntimeEnabled()) {
+            $requestBody['HostConfig']['Runtime'] = 'nvidia';
+            $requestBody['HostConfig']['DeviceRequests'] = [
+                [
+                    "Driver" => "nvidia",
+                    "Count" => 1,
+                    "Capabilities" => [["gpu"]],
+                ]
+            ];
}

$shmSize = $container->GetShmSize();
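The merged branch collapses the two former modes: when the NVIDIA runtime is enabled, the container now gets both the `nvidia` runtime and a `DeviceRequests` entry (one GPU with the `gpu` capability). A small Python sketch of the resulting `HostConfig` fragment (illustrative; the dict shape mirrors the Docker Engine API's container-create payload, and the function name is made up):

```python
def build_gpu_host_config(can_use_gpu: bool, runtime_enabled: bool) -> dict:
    # Sketch of the merged branch: runtime and device request are set together,
    # instead of the old either/or between 'runtime' and 'deploy' modes.
    host_config: dict = {}
    if can_use_gpu and runtime_enabled:
        host_config["Runtime"] = "nvidia"
        host_config["DeviceRequests"] = [
            {"Driver": "nvidia", "Count": 1, "Capabilities": [["gpu"]]}
        ]
    return host_config

print(build_gpu_host_config(True, False))  # {} (no GPU wiring at all)
```

Setting both fields means Docker attaches one NVIDIA GPU via the device-request mechanism while also selecting the `nvidia` runtime, so a separate 'deploy' mode is no longer needed.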
14 changes: 7 additions & 7 deletions php/templates/includes/aio-config.twig
@@ -29,16 +29,16 @@
<p>Nextcloud has a timeout of {{ nextcloud_max_time }} seconds configured (important for big file uploads). See the <a href="https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud">NEXTCLOUD_MAX_TIME documentation</a> on how to change this.</p>

<p>
-            {% if is_dri_device_enabled == true %}
-                The /dev/dri device is getting attached to the Nextcloud container.
+            {% if is_dri_device_enabled == true and is_nvidia_runtime_enabled == true %}
+                Hardware acceleration is enabled with the /dev/dri device and the Nvidia runtime.
+            {% elseif is_dri_device_enabled == true %}
+                Hardware acceleration is enabled with the /dev/dri device.
             {% elseif is_nvidia_runtime_enabled == true %}
-                The Nvida runtime is used for the Nextcloud container.
-            {% elseif is_nvidia_gpu_deploy_enabled == true %}
-                The Nvidia device is getting attached to the Nextcloud container.
+                Hardware acceleration is enabled with the Nvidia runtime.
             {% else %}
-                No GPU acceleration is enabled. It's recommended to enable hardware transcoding for better performance.
+                Hardware acceleration is not enabled. It's recommended to enable hardware transcoding for better performance.
             {% endif %}
-            See the <a href="https://github.com/nextcloud/all-in-one#how-to-enable-gpu-acceleration-for-nextcloud">NEXTCLOUD_GPU_MODE documentation</a> on how to change this.</p>
+            See the <a href="https://github.com/nextcloud/all-in-one#how-to-enable-hardware-acceleration-for-nextcloud">hardware acceleration documentation</a> on how to change this.</p>

<p>For further documentation on AIO, refer to <strong><a href="https://github.com/nextcloud/all-in-one#nextcloud-all-in-one">this page</a></strong>. You can use the browser search [CTRL]+[F] to search through the documentation. Additional documentation can be found <strong><a href="https://github.com/nextcloud/all-in-one/discussions/categories/wiki">here</a></strong>.</p>
</details>
10 changes: 5 additions & 5 deletions readme.md
@@ -765,7 +765,7 @@ You can do so by adding `--env NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS="imagick exte
### What about the pdlib PHP extension for the facerecognition app?
The [facerecognition app](https://apps.nextcloud.com/apps/facerecognition) requires the pdlib PHP extension to be installed. Unfortunately, it is not available on PECL nor via PHP core, so there is no way to add this into AIO currently. However you can use [this community container](https://github.com/nextcloud/all-in-one/tree/main/community-containers/facerecognition) in order to run facerecognition.
- ### How to enable GPU acceleration for Nextcloud?
+ ### How to enable hardware acceleration for Nextcloud?
Some containers can use GPU acceleration to increase performance; for example, the [memories app](https://apps.nextcloud.com/apps/memories) allows enabling hardware transcoding for videos.
#### With open source drivers MESA for AMD, Intel and **new** drivers `Nouveau` for Nvidia
@@ -775,7 +775,9 @@
A list of supported devices can be found in the [MESA 3D documentation](https://docs.mesa3d.org/systems.html).
- This methode use the [Direct Rendering Infrastructure](https://dri.freedesktop.org/wiki/) with the access to the `/dev/dri` device. In order to use that, you need to add `--env NEXTCLOUD_ENABLE_DRI_DEVICE=true` to the docker run command of the mastercontainer (but before the last line `nextcloud/all-in-one:latest`! If it was started already, you will need to stop the mastercontainer, remove it (no data will be lost) and recreate it using the docker run command that you initially used) which will mount the `/dev/dri` device into the container. There is now a community container which allows to easily add the transcoding container of Memories to AIO: https://github.com/nextcloud/all-in-one/tree/main/community-containers/memories
+ This method uses the [Direct Rendering Infrastructure](https://dri.freedesktop.org/wiki/) with access to the `/dev/dri` device.
+
+ In order to use that, you need to add `--env NEXTCLOUD_ENABLE_DRI_DEVICE=true` to the docker run command of the mastercontainer (but before the last line `nextcloud/all-in-one:latest`! If it was started already, you will need to stop the mastercontainer, remove it (no data will be lost) and recreate it using the docker run command that you initially used) which will mount the `/dev/dri` device into the container.
#### With proprietary drivers for Nvidia :warning: BETA
Expand All @@ -787,9 +789,7 @@ This methode use the [Direct Rendering Infrastructure](https://dri.freedesktop.o
This method uses the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html) with the nvidia runtime.

- To enable it, use `--env NEXTCLOUD_NVIDIA_GPU_MODE=runtime` to enable runtime.
- You can also use [docker resource allocation](https://docs.docker.com/compose/gpu-support/) with `--env NEXTCLOUD_NVIDIA_GPU_MODE=deploy` as mode.
+ In order to use that, you need to add `--env NVIDIA_GPU_MODE=true` to the docker run command of the mastercontainer (but before the last line `nextcloud/all-in-one:latest`! If it was started already, you will need to stop the mastercontainer, remove it (no data will be lost) and recreate it using the docker run command that you initially used) which will enable the nvidia runtime.
If you're using WSL2 and want to use the NVIDIA runtime, please follow the instructions to [install the NVIDIA Container Toolkit meta-version in WSL](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2).
