Commit

Deployed 13495aea5 to latest in docs/rook with MkDocs 1.6.1 and mike 2.1.3
Rook committed Nov 4, 2024
1 parent 8a7844a commit 20b3bdb
Showing 4 changed files with 85 additions and 85 deletions.
2 changes: 1 addition & 1 deletion docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/index.html
@@ -80,7 +80,7 @@
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
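For orientation only: these toleration lines are unchanged context from the placement example earlier on the page. A minimal sketch of where such tolerations typically sit in a CephCluster manifest, assuming the `placement.all` key (which applies to all Rook/Ceph pods); the snippet is illustrative and not taken from this diff:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      placement:
        all:
          tolerations:
            # allow Rook/Ceph pods to schedule onto control-plane nodes as well
            - effect: NoSchedule
              key: node-role.kubernetes.io/control-plane
              operator: Exists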
Cluster-wide Resources Configuration Settings

(This regenerated section replaces the previous rendering; the only content change is the new `cmd-reporter` entry in the list of resource keys below.)

Resources should be specified so that the Rook components receive the intended Kubernetes Pod Quality of Service (QoS) class (https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/). This helps keep Rook components running when, for example, a node runs out of memory, since whether a pod is killed in that situation depends on its QoS class.

You can set resource requests/limits for Rook components through the Resource Requirements/Limits structure in the following keys (a YAML sketch follows this list):

- `mon`: Set resource requests/limits for mons.
- `osd`: Set resource requests/limits for OSDs. This key applies to all OSDs regardless of device class. To apply resource requests/limits to OSDs of a particular device class, use the device-class-specific keys below. If the memory resource is declared, Rook will automatically set the OSD configuration `osd_memory_target` to the same value, so that actual OSD memory consumption is consistent with the OSD pods' resource declaration.
- `osd-<deviceClass>`: Set resource requests/limits for OSDs on a specific device class. Rook will automatically detect `hdd`, `ssd`, or `nvme` device classes. Custom device classes can also be set.
- `mgr`: Set resource requests/limits for MGRs.
- `mgr-sidecar`: Set resource requests/limits for the MGR sidecar, which is only created when `mgr.count: 2`. The sidecar requires very few resources since it only runs every 15 seconds to query Ceph for the active mgr and update the mgr services if the active mgr changed.
- `prepareosd`: Set resource requests/limits for the OSD prepare job.
- `crashcollector`: Set resource requests/limits for the crash collector. This pod runs wherever there is a Ceph pod running. It scrapes for Ceph daemon core dumps and sends them to the Ceph manager crash module so that core dumps are centralized and can be easily listed/accessed. You can read more about the Ceph Crash module (https://docs.ceph.com/docs/master/mgr/crash/).
- `logcollector`: Set resource requests/limits for the log collector. When enabled, this container runs as a sidecar to each Ceph daemon.
- `cmd-reporter`: Set resource requests/limits for the jobs that detect the Ceph version and collect network info.
- `cleanup`: Set resource requests/limits for the cleanup job, responsible for wiping the cluster's data after uninstall.
- `exporter`: Set resource requests/limits for the Ceph exporter.
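A minimal sketch of how these keys fit into the CephCluster CRD's `resources` section; the values below are illustrative placeholders, not recommendations:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      resources:
        mon:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            memory: "2Gi"
        mgr:
          requests:
            cpu: "500m"
            memory: "512Mi"
          limits:
            memory: "1Gi"
        osd:
          # declaring a memory resource also sets osd_memory_target to the same value
          requests:
            cpu: "1"
            memory: "4Gi"
          limits:
            memory: "4Gi"
        osd-ssd:
          # device-class-specific key; applies to OSDs whose device class is ssd
          limits:
            memory: "6Gi"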
To provide the best possible experience running Ceph in containers, Rook internally recommends minimum memory limits if resource limits are passed. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log.

- `mon`: 1024MB
- `mgr`: 512MB
- `osd`: 2048MB
- `crashcollector`: 60MB
- `mgr-sidecar`: 100MB limit, 40MB requests
- `prepareosd`: no limits (see the note)
- `exporter`: 128MB limit, 50MB requests

Note: We recommend not setting memory limits on the OSD prepare job, to prevent OSD provisioning failure due to memory constraints. The OSD prepare job's memory usage bursts during OSD provisioning depending on the size of the device, typically 1-2Gi for large disks. The OSD prepare job only bursts a single time per OSD. All future runs of the OSD prepare job will detect that the OSD is already provisioned and skip the provisioning.

Hint: The resources for MDS daemons are not configured in the Cluster. Refer to the Ceph Filesystem CRD (../../Shared-Filesystem/ceph-filesystem-crd/) instead.

Resource Requirements/Limits

For more information on resource requests/limits see the official Kubernetes documentation: Kubernetes - Managing Compute Resources for Containers (https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)

- `requests`: Requests for cpu or memory.
  - `cpu`: Request for CPU (example: one CPU core `1`, 50% of one CPU core `500m`).
  - `memory`: Request for memory (example: one gigabyte of memory `1Gi`, half a gigabyte of memory `512Mi`).
- `limits`: Limits for cpu or memory.
  - `cpu`: Limit for CPU (example: one CPU core `1`, 50% of one CPU core `500m`).
  - `memory`: Limit for memory (example: one gigabyte of memory `1Gi`, half a gigabyte of memory `512Mi`).

Warning: Before setting resource requests/limits, please take a look at the Ceph documentation for recommendations for each component: Ceph - Hardware Recommendations (http://docs.ceph.com/docs/master/start/hardware-recommendations/).

Node Specific Resources for OSDs

You can override these requests/limits for OSDs per node when using `useAllNodes: false`, via the `node` item in the `nodes` list. The example YAML is collapsed in this diff view; a sketch follows below.
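Because the original example block is not expanded in this diff, the following is only a sketch, assuming per-node `resources` are set on entries of `storage.nodes` as the surrounding text describes; node names, devices, and values are hypothetical:

    spec:
      storage:
        useAllNodes: false
        useAllDevices: false
        nodes:
          - name: "worker-1"     # hypothetical node name
            devices:
              - name: "sdb"
            resources:           # per-node override of the cluster-wide `osd` requests/limits
              requests:
                cpu: "1"
                memory: "4Gi"
              limits:
                memory: "4Gi"
          - name: "worker-2"     # no per-node resources: the cluster-wide `osd` settings apply
            devices:
              - name: "sdb"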
2 changes: 1 addition & 1 deletion docs/rook/latest/search/search_index.json

Large diffs are not rendered by default.
