feat(helm): update chart rook-ceph-cluster to v1.16.1 #117
Open
renovate wants to merge 1 commit into main from renovate/rook-ceph-cluster-1.x
Conversation
--- kubernetes/main/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
+++ kubernetes/main/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
@@ -13,13 +13,13 @@
spec:
chart: rook-ceph-cluster
sourceRef:
kind: HelmRepository
name: rook-ceph
namespace: flux-system
- version: v1.14.2
+ version: v1.16.1
install:
remediation:
retries: 3
interval: 30m
uninstall:
      keepHistory: false
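Reassembled from the rendered diff, the HelmRelease after this bump would look roughly like the sketch below. flux-local flattens the chart template, so the apiVersion and the nesting under spec.chart.spec are assumptions based on the usual Flux v2 schema, not part of the diff itself:

```yaml
# Sketch of the HelmRelease after the bump. Fields mirror the diff above;
# apiVersion and the spec.chart.spec nesting are assumed (typical Flux v2 layout).
apiVersion: helm.toolkit.fluxcd.io/v2   # may be v2beta2 on older Flux releases
kind: HelmRelease
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
spec:
  interval: 30m
  chart:
    spec:
      chart: rook-ceph-cluster
      version: v1.16.1                  # bumped from v1.14.2
      sourceRef:
        kind: HelmRepository
        name: rook-ceph
        namespace: flux-system
  install:
    remediation:
      retries: 3                        # retry failed installs up to 3 times
  uninstall:
    keepHistory: false
```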
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.3 → fix(helm): update chart rook-ceph-cluster to v1.14.4 on May 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from cbb0b72 to 2292896 on May 17, 2024 02:18
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2292896 to f2bc96c on May 30, 2024 23:18
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.4 → fix(helm): update chart rook-ceph-cluster to v1.14.5 on May 30, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from f2bc96c to 98baea2 on June 14, 2024 02:37
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.5 → fix(helm): update chart rook-ceph-cluster to v1.14.6 on Jun 14, 2024
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
@@ -6,13 +6,13 @@
namespace: rook-ceph
spec:
monitoring:
enabled: true
cephVersion:
allowUnsupported: false
- image: quay.io/ceph/ceph:v18.2.2
+ image: quay.io/ceph/ceph:v19.2.0
cleanupPolicy:
allowUninstallWithVolumes: false
confirmation: ''
sanitizeDisks:
dataSource: zero
iteration: 1
--- HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
+++ HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
@@ -261,13 +261,13 @@
severity: warning
type: ceph_default
- alert: CephDeviceFailurePredictionTooHigh
annotations:
description: The device health module has determined that devices predicted
to fail can not be remediated automatically, since too many OSDs would be
- removed from the cluster to ensure performance and availabililty. Prevent
+ removed from the cluster to ensure performance and availability. Prevent
data integrity issues by adding new OSDs so that data may be relocated.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-toomany
summary: Too many devices are predicted to fail, unable to resolve
expr: ceph_health_detail{name="DEVICE_HEALTH_TOOMANY"} == 1
for: 1m
labels:
@@ -504,13 +504,13 @@
expr: ceph_health_detail{name="PG_RECOVERY_FULL"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.5
severity: critical
type: ceph_default
- - alert: CephPGUnavilableBlockingIO
+ - alert: CephPGUnavailableBlockingIO
annotations:
description: Data availability is reduced, impacting the cluster's ability
to service I/O. One or more placement groups (PGs) are in a state that blocks
I/O.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
summary: PG is unavailable, blocking I/O
@@ -626,15 +626,15 @@
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.8.3
severity: warning
type: ceph_default
- alert: CephNodeNetworkBondDegraded
annotations:
- summary: Degraded Bond on Node {{ $labels.instance }}
description: Bond {{ $labels.master }} is degraded on Node {{ $labels.instance
}}.
+ summary: Degraded Bond on Node {{ $labels.instance }}
expr: |
node_bonding_slaves - node_bonding_active != 0
labels:
severity: warning
type: ceph_default
- alert: CephNodeDiskspaceWarning
@@ -662,12 +662,23 @@
> 0)) )
labels:
severity: warning
type: ceph_default
- name: pools
rules:
+ - alert: CephPoolGrowthWarning
+ annotations:
+ description: Pool '{{ $labels.name }}' will be full in less than 5 days assuming
+ the average fill-up rate of the past 48 hours.
+ summary: Pool growth rate may soon exceed capacity
+ expr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id,
+ instance, pod) group_right() ceph_pool_metadata) >= 95
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.9.2
+ severity: warning
+ type: ceph_default
- alert: CephPoolBackfillFull
annotations:
description: A pool is approaching the near full threshold, which will prevent
recovery/backfill operations from completing. Consider adding more capacity.
summary: Free space in a pool is too low for recovery/backfill
expr: ceph_health_detail{name="POOL_BACKFILLFULL"} > 0
@@ -718,22 +729,113 @@
expr: ceph_healthcheck_slow_ops > 0
for: 30s
labels:
severity: warning
type: ceph_default
- alert: CephDaemonSlowOps
- for: 30s
- expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
- labels:
- severity: warning
- type: ceph_default
- annotations:
- summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+ annotations:
description: '{{ $labels.ceph_daemon }} operations are taking too long to
process (complaint time exceeded)'
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#slow-ops
+ summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+ expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
+ for: 30s
+ labels:
+ severity: warning
+ type: ceph_default
+ - name: hardware
+ rules:
+ - alert: HardwareStorageError
+ annotations:
+ description: Some storage devices are in error. Check `ceph health detail`.
+ summary: Storage devices error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_STORAGE"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.1
+ severity: critical
+ type: ceph_default
+ - alert: HardwareMemoryError
+ annotations:
+ description: DIMM error(s) detected. Check `ceph health detail`.
+ summary: DIMM error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_MEMORY"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.2
+ severity: critical
+ type: ceph_default
+ - alert: HardwareProcessorError
+ annotations:
+ description: Processor error(s) detected. Check `ceph health detail`.
+ summary: Processor error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_PROCESSOR"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.3
+ severity: critical
+ type: ceph_default
+ - alert: HardwareNetworkError
+ annotations:
+ description: Network error(s) detected. Check `ceph health detail`.
+ summary: Network error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_NETWORK"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.4
+ severity: critical
+ type: ceph_default
+ - alert: HardwarePowerError
+ annotations:
+ description: Power supply error(s) detected. Check `ceph health detail`.
+ summary: Power supply error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_POWER"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.5
+ severity: critical
+ type: ceph_default
+ - alert: HardwareFanError
+ annotations:
+ description: Fan error(s) detected. Check `ceph health detail`.
+ summary: Fan error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_FANS"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.6
+ severity: critical
+ type: ceph_default
+ - name: PrometheusServer
+ rules:
+ - alert: PrometheusJobMissing
+ annotations:
+ description: The prometheus job that scrapes from Ceph MGR is no longer defined,
+ this will effectively mean you'll have no metrics or alerts for the cluster. Please
+ review the job definitions in the prometheus.yml file of the prometheus
+ instance.
+ summary: The scrape job for Ceph MGR is missing from Prometheus
+ expr: absent(up{job="rook-ceph-mgr"})
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+ severity: critical
+ type: ceph_default
+ - alert: PrometheusJobExporterMissing
+ annotations:
+ description: The prometheus job that scrapes from Ceph Exporter is no longer
+ defined, this will effectively mean you'll have no metrics or alerts for
+ the cluster. Please review the job definitions in the prometheus.yml file
+ of the prometheus instance.
+ summary: The scrape job for Ceph Exporter is missing from Prometheus
+ expr: sum(absent(up{job="rook-ceph-exporter"})) and sum(ceph_osd_metadata{ceph_version=~"^ceph
+ version (1[89]|[2-9][0-9]).*"}) > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+ severity: critical
+ type: ceph_default
- name: rados
rules:
- alert: CephObjectMissing
annotations:
description: The latest version of a RADOS object can not be found, even though
all OSDs are up. I/O requests for this object from clients will block (hang).
@@ -760,7 +862,218 @@
expr: ceph_health_detail{name="RECENT_CRASH"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.1.2
severity: critical
type: ceph_default
+ - name: rbdmirror
+ rules:
+ - alert: CephRBDMirrorImagesPerDaemonHigh
+ annotations:
+ description: Number of image replications per daemon is not supposed to go
+ beyond threshold 100
+ summary: Number of image replications are now above 100
+ expr: sum by (ceph_daemon, namespace) (ceph_rbd_mirror_snapshot_image_snapshots)
+ > 100
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.2
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImagesNotInSync
+ annotations:
+ description: Both local and remote RBD mirror images should be in sync.
+ summary: Some of the RBD mirror images are not in sync with the remote counter
+ parts.
+ expr: sum by (ceph_daemon, image, namespace, pool) (topk by (ceph_daemon, image,
+ namespace, pool) (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk
+ by (ceph_daemon, image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+ != 0
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.3
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImagesNotInSyncVeryHigh
+ annotations:
+ description: More than 10% of the images have synchronization problems
+ summary: Number of unsynchronized images are very high.
+ expr: count by (ceph_daemon) ((topk by (ceph_daemon, image, namespace, pool)
+ (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk by (ceph_daemon,
+ image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+ != 0) > (sum by (ceph_daemon) (ceph_rbd_mirror_snapshot_snapshots)*.1)
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.4
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImageTransferBandwidthHigh
+ annotations:
[Diff truncated by flux-local]
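Beyond the PrometheusRule changes, the CephCluster portion of the diff bumps the Ceph image itself from v18.2.2 (Reef) to v19.2.0 (Squid). A minimal sketch of just the fields that hunk touches, with the rest of the CephCluster spec omitted:

```yaml
# Minimal sketch of the CephCluster fields changed in the diff above;
# all other spec fields (mon, storage, cleanupPolicy, ...) are omitted.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  monitoring:
    enabled: true
  cephVersion:
    allowUnsupported: false            # refuse Ceph versions Rook hasn't validated
    image: quay.io/ceph/ceph:v19.2.0   # was quay.io/ceph/ceph:v18.2.2
```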
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.6 → fix(helm): update chart rook-ceph-cluster to v1.14.7 on Jun 21, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 98baea2 to 9864c3f on June 21, 2024 19:20
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.7 → fix(helm): update chart rook-ceph-cluster to v1.14.8 on Jul 5, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 9864c3f to b970b98 on July 5, 2024 23:43
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b970b98 to 0666354 on July 25, 2024 21:49
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.8 → fix(helm): update chart rook-ceph-cluster to v1.14.9 on Jul 25, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0666354 to 2f440e4 on August 21, 2024 02:08
renovate bot changed the title fix(helm): update chart rook-ceph-cluster to v1.14.9 → feat(helm): update chart rook-ceph-cluster to v1.15.0 on Aug 21, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2f440e4 to 3df2974 on September 4, 2024 22:44
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.0 → feat(helm): update chart rook-ceph-cluster to v1.15.1 on Sep 4, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 3df2974 to c2e8ad1 on September 19, 2024 21:09
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.1 → feat(helm): update chart rook-ceph-cluster to v1.15.2 on Sep 19, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from c2e8ad1 to dd72339 on October 3, 2024 21:51
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.2 → feat(helm): update chart rook-ceph-cluster to v1.15.3 on Oct 3, 2024
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.3 → feat(helm): update chart rook-ceph-cluster to v1.15.4 on Oct 18, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from dd72339 to fc52c7e on October 18, 2024 00:12
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.4 → feat(helm): update chart rook-ceph-cluster to v1.15.5 on Nov 6, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from fc52c7e to 2b0bf5a on November 6, 2024 22:59
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2b0bf5a to 2606816 on November 21, 2024 23:28
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.5 → feat(helm): update chart rook-ceph-cluster to v1.15.6 on Nov 21, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2606816 to 78e5b85 on December 17, 2024 23:05
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.15.6 → feat(helm): update chart rook-ceph-cluster to v1.16.0 on Dec 17, 2024
renovate bot changed the title feat(helm): update chart rook-ceph-cluster to v1.16.0 → feat(helm): update chart rook-ceph-cluster to v1.16.1 on Jan 7, 2025
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 78e5b85 to d9cc53a on January 7, 2025 15:21
This PR contains the following updates: rook-ceph-cluster v1.14.2 -> v1.16.1
Release Notes
rook/rook (rook-ceph-cluster)
v1.16.1
Compare Source
Improvements
Rook v1.16.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.16.0
Compare Source
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
Features
- … statusCheck is enabled on the parent CephBlockPool.
- … via the additionalConfig.bucketPolicy field (see #15138); see the sketch after this list.
- … via opsLogSidecar in the gateway settings.
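Of these, the bucket-policy feature is the easiest to picture. Below is a hypothetical sketch on an ObjectBucketClaim: only the additionalConfig.bucketPolicy field name comes from the release notes; the OBC placement, names, and the example policy JSON are assumptions for illustration.

```yaml
# Hypothetical ObjectBucketClaim using additionalConfig.bucketPolicy (#15138).
# Everything except the bucketPolicy key name is assumed for illustration.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-bucket
  namespace: rook-ceph
spec:
  generateBucketName: example
  storageClassName: rook-ceph-bucket
  additionalConfig:
    bucketPolicy: |
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": ["arn:aws:iam:::user/example-user"]},
          "Action": ["s3:GetObject"],
          "Resource": ["arn:aws:s3:::example/*"]
        }]
      }
```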
v1.15.7
Compare Source
Improvements
Rook v1.15.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.6
Compare Source
Improvements
Rook v1.15.6 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.5
Compare Source
Improvements
Rook v1.15.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- … /run/udev in the init container for ceph-volume activate (#14901, @guits)
v1.15.4
Compare Source
Improvements
Rook v1.15.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.3
Compare Source
Improvements
Rook v1.15.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.2
Compare Source
Improvements
Rook v1.15.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.1
Compare Source
Improvements
Rook v1.15.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- … mon.zones spec (#14636, @BenoitKnecht)
v1.15.0
Compare Source
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- … pods named csi-*plugin-holder-* in the Rook operator namespace; see the detailed documentation to disable them. This deprecation process will be required before upgrading to the future Rook v1.16.
- … when spec.hosting configurations are set. Use the new spec.hosting.advertiseEndpoint config to define the required behavior as documented; a sketch follows below.
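For that second breaking change, here is a sketch of what the new hosting config might look like on a CephObjectStore. The field names (dnsName, port, useTls) follow the Rook v1.15 documentation; the values are placeholders, not taken from this PR.

```yaml
# Illustrative CephObjectStore snippet for spec.hosting.advertiseEndpoint;
# dnsName/port/useTls values are placeholders.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  hosting:
    advertiseEndpoint:
      dnsName: rgw.example.com   # endpoint advertised to S3 clients
      port: 443
      useTls: true
```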
Features
- … allowDeviceClassUpdate: true is set in the CephCluster CR.
- … allowOsdCrushWeightUpdate: true is set in the CephCluster CR (see the sketch after this list).
- … (docker.io/rook/ceph) in operator manifests and helm charts.
Experimental Features
- … operator.yaml.
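A sketch of the two new opt-in flags named above; their placement under spec.storage follows the Rook documentation, and the rest of the CephCluster spec is omitted.

```yaml
# Sketch of the v1.15.0 opt-in flags; placement under spec.storage is per Rook docs.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    allowDeviceClassUpdate: true      # permit changing an OSD's device class
    allowOsdCrushWeightUpdate: true   # permit CRUSH weight updates, e.g. after resizing OSDs
```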
v1.14.12
Compare Source
Improvements
Rook v1.14.12 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.11
Compare Source
Improvements
Rook v1.14.11 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.10
Compare Source
Improvements
Rook v1.14.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.9
Compare Source
Improvements
Rook v1.14.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.8
Compare Source
Improvements
Rook v1.14.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.7
Compare Source
What's Changed
monitoring: fix CephPoolGrowthWarning expression (#14346, @matofeder)
monitoring: Set honor labels on the service monitor (#14339, @travisn)
Full Changelog: rook/rook@v1.14.6...v1.14.7
v1.14.6
Compare Source
What's Changed
v1.14.5
Compare Source
Improvements
Rook v1.14.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.4
Compare Source
Improvements
Rook v1.14.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.3
Compare Source
Improvements
Rook v1.14.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.