It's pretty awkward to iteratively configure and manage a DNS-Sync deployment in a live cluster today. The process involves updating ConfigMaps, redeploying the pod, and inspecting its health directly. A Kubernetes operator pattern could fix this and also support multi-tenancy.
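For context, here is roughly what that workflow looks like today; the names, image, and mount path below are placeholders rather than the project's actual manifests:

```yaml
# Minimal sketch of the status quo: config lives in a ConfigMap, is mounted into
# the pod, and any change means editing the ConfigMap and restarting by hand.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-sync-config              # hypothetical name
  namespace: dns-sync
data:
  config.toml: |
    # provider and source configuration goes here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-sync
  namespace: dns-sync
spec:
  replicas: 1
  selector:
    matchLabels: { app: dns-sync }
  template:
    metadata:
      labels: { app: dns-sync }
    spec:
      containers:
        - name: dns-sync
          image: dns-sync:latest     # placeholder image reference
          volumeMounts:
            - name: config
              mountPath: /etc/dns-sync
      volumes:
        - name: config
          configMap:
            name: dns-sync-config
```

Every config tweak means editing the ConfigMap, restarting the Deployment, and then tailing logs or watching for CrashLoopBackOff to find out whether it worked.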
## Problems to be addressed by Operator pattern
- DnsSync faults are best exposed through CrashLoopBackOff
- Granular DnsSync status is not visible outside of the logs, for example:
  - a single provider not being available / invalid credentials
  - a desired FQDN already being managed by someone else
  - a source having an invalid TTL for the provider
- DnsSync only accepts one configuration per process
- Dry-run just prints any pending changes to the pod logs
- Enabling changes is all-or-nothing
  - Could instead allow one-off applies as needed
## Features provided by Operator pattern
- Independent DnsSync resources can be created to manage per-team configs or split-horizon DNS
- DnsSync resources can have extra table columns:
  - Health, such as `Ready`, `Pending`, `Degraded`
  - Strategy, such as `FullSync`, `DryRun`, `Disabled`
- DnsSync resources can refer to same-namespace Secrets to load provider API keys
- An existing DryRun can be approved to apply without also authorizing future syncs, by copying a status field into the spec (see the sketch after this list)
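As a rough illustration of these features, here is what such a CRD and resource could look like. The API group, kind layout, and every field name below are hypothetical; dns-sync has no such API today.

```yaml
# Hypothetical CRD: printer columns surface Health and Strategy in `kubectl get`.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dnssyncs.example.dev            # made-up API group
spec:
  group: example.dev
  scope: Namespaced
  names:
    kind: DnsSync
    plural: dnssyncs
  versions:
    - name: v1alpha1
      served: true
      storage: true
      additionalPrinterColumns:
        - name: Health                  # Ready / Pending / Degraded
          type: string
          jsonPath: .status.health
        - name: Strategy                # FullSync / DryRun / Disabled
          type: string
          jsonPath: .spec.strategy
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
---
# Hypothetical per-team DnsSync resource.
apiVersion: example.dev/v1alpha1
kind: DnsSync
metadata:
  name: team-a
  namespace: team-a
spec:
  strategy: DryRun
  providerSecretRef:
    name: cloudflare-api-token          # same-namespace Secret holding provider API keys
  # One-off approval: copy the plan identifier the controller wrote into .status
  # back into .spec, without switching the strategy to FullSync.
  approvedPlan: "<copied from .status.latestPlan>"
status:                                 # written by the controller; shown only for illustration
  health: Pending
  latestPlan: "<hash of the pending change set>"
```

With a shape like this, `kubectl get dnssyncs -A` would surface per-team health and strategy without digging through pod logs.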
## Problems introduced by Operator pattern
- Pain of changing an existing CRD. Helm doesn't update CRDs, so we'd have to keep working with outdated CRD specs.
- Relatively inflexible kubectl behavior, which has led other projects to build their own CLI (cmctl, etc.)
- Supposed to have leader election in case multiple operator pods are running
- Multiple DnsSync resources observing a cluster source should reuse the same Watcher stream
- RBAC: to let DnsSync resources reference API-key Secrets from their namespace, do we need `get` access to every Secret? (see the RBAC sketch below)
- Lack of Namespace isolation: even if the operator is installed into one specific Namespace, the CRD must be cluster-level
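One way to avoid granting the operator `get` on every Secret is to scope access per tenant namespace. This is only a sketch with made-up names (the ServiceAccount, Secret, and namespaces are placeholders):

```yaml
# Narrow option: each tenant namespace grants the operator access to exactly the
# Secrets its DnsSync resources reference, instead of one cluster-wide grant.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dns-sync-secret-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["cloudflare-api-token"]   # only the referenced Secret(s)
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dns-sync-secret-reader
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dns-sync-secret-reader
subjects:
  - kind: ServiceAccount
    name: dns-sync-operator                   # hypothetical operator ServiceAccount
    namespace: dns-sync
```

The trade-off is that someone (the Helm chart, the tenant team, or the operator itself) has to maintain these bindings per namespace; a single ClusterRole with `get` on `secrets` is simpler to ship but much broader than it needs to be.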
## Alternative Solutions
- For zone isolation:
  - Accept multiple TOML config files mounted similarly to the existing config file
  - Enable targeting a subset of records to specific zones, e.g. annotation filtering (sketched below)
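Purely as an illustration of annotation filtering, not an existing dns-sync feature; the annotation key below is invented:

```yaml
# An Ingress opting its records into a specific zone via a hypothetical annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-dashboard
  namespace: team-a
  annotations:
    dns-sync.example/zone-filter: "corp.example.internal"   # invented annotation key
spec:
  rules:
    - host: dashboard.corp.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80
```

Each dns-sync instance (or each mounted config file) would then only act on records whose annotation matches its configured zones, which covers split-horizon setups without introducing a CRD.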
- For status visibility:
  - Emit warnings as Kubernetes Event resources attached to the source resources (Ingress, etc.)
  - Publish overall status as plain text inside one ConfigMap, like the GKE cluster autoscaler does (sketched below)
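Sketches of both status alternatives follow; the resource names, reason string, and status text format are placeholders, not existing dns-sync behavior:

```yaml
# Warning Event attached to the source Ingress that triggered the problem.
apiVersion: v1
kind: Event
metadata:
  name: internal-dashboard.dns-sync-warning
  namespace: team-a
type: Warning
reason: RecordConflict                          # invented reason
message: "dashboard.corp.example.internal is already managed by another owner"
reportingComponent: dns-sync
involvedObject:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  name: internal-dashboard
  namespace: team-a
---
# Overall status published as plain text in one ConfigMap, in the style of the
# GKE cluster autoscaler's status ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-sync-status                         # hypothetical name
  namespace: dns-sync
data:
  status: |
    lastSync: <timestamp of last sync>
    providers:
      cloudflare: Ready
      route53: Degraded (invalid credentials)
    pendingChanges: 3 (dry-run, not applied)
```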