DBAAS-1193: workaround to the ack-rds-controller upgrade issue #343

Draft
wants to merge 3 commits into base: main
1 change: 0 additions & 1 deletion api/v1beta1/webhook_suite_test.go
@@ -116,7 +116,6 @@ var _ = BeforeSuite(func() {
ClientDisableCacheFor: []client.Object{
&operatorframework.ClusterServiceVersion{},
&corev1.Secret{},
&corev1.ConfigMap{},
},
})
Expect(err).NotTo(HaveOccurred())
@@ -415,14 +415,6 @@ spec:
spec:
clusterPermissions:
- rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
verbs:
- create
- get
- apiGroups:
- ""
resources:
@@ -607,6 +599,7 @@ spec:
verbs:
- delete
- get
- list
- update
- apiGroups:
- operators.coreos.com
4 changes: 0 additions & 4 deletions bundle/metadata/dependencies.yaml
@@ -3,7 +3,3 @@ dependencies:
value:
packageName: rh-service-binding-operator
version: ">=1.0.0"
- type: olm.package
value:
packageName: ack-rds-controller
version: ">=0.0.27 <=0.1.3"
Collaborator

Is this intentionally removed? If yes, how will the RDS controller be installed?

Contributor Author

It will be installed as a dependency of the rds-dbaas-operator.

@tchughesiv (Contributor), Feb 15, 2023

This isn't good enough... in my prior testing it's required in both places. If nothing else, dbaas-operator at the very least is where this dep needs to be. This is because, during a RHODA upgrade, if ack-rds-controller has since released a new version... say 0.2.0, ACK will upgrade without this dep declared. This will then break the rhoda install, as rds would have a dep conflict.
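
For reference, a minimal sketch of the kind of olm.package pin being discussed; it mirrors the ack-rds-controller entry this PR removes from bundle/metadata/dependencies.yaml (the version range shown is the pre-PR value, taken from the diff above):

    dependencies:
      - type: olm.package
        value:
          packageName: rh-service-binding-operator
          version: ">=1.0.0"
      # entry removed by this PR; without it, OLM is free to resolve
      # the newest ack-rds-controller in the catalog during an upgrade
      - type: olm.package
        value:
          packageName: ack-rds-controller
          version: ">=0.0.27 <=0.1.3"

Without such a pin, nothing constrains which ack-rds-controller version OLM installs or upgrades to, which is the dependency-conflict scenario described above.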

Contributor

this should not be removed

Contributor Author

The auto-upgrade of the ack-rds-controller is broken. Since the upgrade of the ack-rds-controller is not maintained, there is no need to allow it to upgrade, which causes various issues.
The changes in this PR delete any ack-rds-controller as soon as the dbaas-operator is installed in the namespace, so no upgrade of the ack-rds-controller will take place during a RHODA upgrade.
When the rds-dbaas-operator is re-installed, the proper version of the ack-rds-controller will be installed as a dependency.

Contributor

that doesn't help in clusters where ack is installed in another ns

Contributor

this is the benefit of using olm deps

Contributor Author

If the ack-rds-controller is installed in another namespace, rhoda will not work. The secret and configmap of the ack-rds-controller must be set by the rhoda operator in the rhoda installation namespace in order to create a provider account; rhoda then scales up the deployment of the ack-rds-controller in the rhoda installation namespace.

Contributor

But wouldn't anyone who's installed ack in another namespace have configured it already? Why would we care if it's in the same namespace?

Contributor Author

The configuration information of the ack-rds-controller should be provided via rhoda provider account creation. Only one provider account can be created for RDS in a cluster, and the credentials of that provider account are used to set up the ack-rds-controller.
If users configure and install the operator in another namespace without using rhoda, then it is supposed to work independently of rhoda.
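
For context, a sketch of the configuration objects being discussed, based on the object names and keys visible in the code this PR removes; the real values are derived from the provider account credentials, and the placeholders here are assumptions:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ack-rds-user-secrets
      namespace: <rhoda-install-namespace>
    stringData:
      AWS_ACCESS_KEY_ID: <provider account access key>
      AWS_SECRET_ACCESS_KEY: <provider account secret key>
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ack-rds-user-config
      namespace: <rhoda-install-namespace>
    data:
      AWS_REGION: <provider account region>
      ACK_RESOURCE_TAGS: rhoda

The ack-rds-controller deployment reads its AWS credentials and settings from these objects, which is why the comment above argues they must be managed by rhoda in its own installation namespace.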

9 changes: 1 addition & 8 deletions config/rbac/role.yaml
@@ -6,14 +6,6 @@ metadata:
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
verbs:
- create
- get
- apiGroups:
- ""
resources:
@@ -198,6 +190,7 @@ rules:
verbs:
- delete
- get
- list
- update
- apiGroups:
- operators.coreos.com
104 changes: 32 additions & 72 deletions controllers/dbaasplatform_controller.go
@@ -37,8 +37,6 @@ import (
"github.com/operator-framework/api/pkg/operators/v1alpha1"

admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
apimeta "k8s.io/apimachinery/pkg/api/meta"
@@ -88,7 +86,7 @@ type DBaaSPlatformReconciler struct {
//+kubebuilder:rbac:groups=dbaas.redhat.com,resources=*/finalizers,verbs=update
//+kubebuilder:rbac:groups=operators.coreos.com,resources=catalogsources;operatorgroups,verbs=get;list;create;update;watch
//+kubebuilder:rbac:groups=operators.coreos.com,resources=subscriptions,verbs=get;list;create;update;watch;delete
//+kubebuilder:rbac:groups=operators.coreos.com,resources=clusterserviceversions,verbs=get;update;delete
//+kubebuilder:rbac:groups=operators.coreos.com,resources=clusterserviceversions,verbs=get;update;delete;list
//+kubebuilder:rbac:groups=operators.coreos.com,resources=clusterserviceversions/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments;daemonsets;statefulsets,verbs=get;list;create;update;watch;delete
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;create;update;watch;delete
@@ -100,7 +98,6 @@ type DBaaSPlatformReconciler struct {
//+kubebuilder:rbac:groups=config.openshift.io,resources=consoles,verbs=get;list;watch
//+kubebuilder:rbac:groups=apiextensions.k8s.io,resources=customresourcedefinitions,verbs=get;list;watch
//+kubebuilder:rbac:groups=admissionregistration.k8s.io,resources=validatingwebhookconfigurations,verbs=get;list;watch;delete
//+kubebuilder:rbac:groups="",resources=secrets;configmaps,verbs=get;create

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
@@ -133,11 +130,6 @@ func (r *DBaaSPlatformReconciler) Reconcile(ctx context.Context, req ctrl.Reques
logger.Error(err, "Error related to conversion webhook setup")
}

// temporary fix for ack-rds-controller v0.1.3 upgrade issue
if err = r.fixRDSControllerUpgrade(ctx); err != nil {
logger.Error(err, "Error related to ack-rds-controller v0.1.3 upgrade")
}

if cr.DeletionTimestamp != nil {
event = metrics.LabelEventValueDelete
} else {
@@ -442,84 +434,52 @@ func (r *DBaaSPlatformReconciler) fixConversionWebhooks(ctx context.Context) err
return nil
}

const ackRDSVersion = "1.0.0"

// Temporary solution to the rds-controller upgrade issue, will revert in the next release
func (r *DBaaSPlatformReconciler) prepareRDSController(ctx context.Context, cli k8sclient.Client) error {
secret := &v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-user-secrets", //#nosec G101
Namespace: r.DBaaSReconciler.InstallNamespace,
},
csvName := fmt.Sprintf("ack-rds-controller.v%s", ackRDSVersion)
subscriptionList := &v1alpha1.SubscriptionList{}
if err := cli.List(ctx, subscriptionList, k8sclient.InNamespace(r.InstallNamespace)); err != nil {
return err
}
if err := cli.Get(ctx, k8sclient.ObjectKeyFromObject(secret), secret); err != nil {
if apierrors.IsNotFound(err) {
secret.Data = map[string][]byte{
"AWS_ACCESS_KEY_ID": []byte("dummy"),
"AWS_SECRET_ACCESS_KEY": []byte("dummy"), //#nosec G101
for i := range subscriptionList.Items {
subscription := subscriptionList.Items[i]
if subscription.Spec != nil && subscription.Spec.Package == "ack-rds-controller" {
if subscription.Status.CurrentCSV == csvName ||
subscription.Status.InstalledCSV == csvName {
continue
}
if err := cli.Create(ctx, secret); err != nil {
if err := cli.Delete(ctx, &subscription); err != nil {
return err
}
} else {
return err
}
}

cm := &v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-user-config",
Namespace: r.DBaaSReconciler.InstallNamespace,
},
csvList := &v1alpha1.ClusterServiceVersionList{}
if err := cli.List(ctx, csvList, k8sclient.InNamespace(r.InstallNamespace)); err != nil {
return err
}
if err := cli.Get(ctx, k8sclient.ObjectKeyFromObject(cm), cm); err != nil {
if apierrors.IsNotFound(err) {
cm.Data = map[string]string{
"AWS_REGION": "dummy",
"AWS_ENDPOINT_URL": "",
"ACK_ENABLE_DEVELOPMENT_LOGGING": "false",
"ACK_WATCH_NAMESPACE": "",
"ACK_LOG_LEVEL": "info",
"ACK_RESOURCE_TAGS": "rhoda",
for i := range csvList.Items {
csv := csvList.Items[i]
instanceCRD := false
clusterCRD := false
for _, crd := range csv.Spec.CustomResourceDefinitions.Owned {
if crd.Name == "dbinstances.rds.services.k8s.aws" && crd.Kind == "DBInstance" {
instanceCRD = true
} else if crd.Name == "dbclusters.rds.services.k8s.aws" && crd.Kind == "DBCluster" {
clusterCRD = true
}
}
if instanceCRD && clusterCRD {
if csv.Spec.Version.String() == ackRDSVersion {
continue
}
if err := cli.Create(ctx, cm); err != nil {
if err := cli.Delete(ctx, &csv); err != nil {
Contributor

What happens when the dbaas-manager pod gets restarted? Would this cause the ack sub to be deleted?

Contributor Author

Thanks. The ack-rds-controller will be deleted in this case.
Changed the code to also delete the rds-dbaas-operator when starting the dbaas-operator. The rds-dbaas-operator and ack-rds-controller will then be re-installed when the dbaas-operator restarts.

Contributor

I'm not sure that's a solution though. We can't have ack and rds being uninstalled/reinstalled every time the pod restarts... which could be often.

Contributor Author

I could check the version of the ack-rds-controller and only delete it when the version is not right.
Otherwise, I am not sure there is a good solution to the ack-rds-controller upgrade issues.

return err
}
} else {
return err
}
}

return nil
}

// Temporary solution to the rds-controller v0.1.3 upgrade issue
func (r *DBaaSPlatformReconciler) fixRDSControllerUpgrade(ctx context.Context) error {
csv := &v1alpha1.ClusterServiceVersion{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-controller.v0.1.3",
Namespace: r.DBaaSReconciler.InstallNamespace,
},
}
if err := r.Client.Get(ctx, k8sclient.ObjectKeyFromObject(csv), csv); err != nil {
if apierrors.IsNotFound(err) {
return nil
}
return err
}

if csv.Status.Phase == v1alpha1.CSVPhaseFailed && csv.Status.Reason == v1alpha1.CSVReasonComponentFailed &&
csv.Status.Message == "install strategy failed: Deployment.apps \"ack-rds-controller\" is invalid: spec.selector: "+
"Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app.kubernetes.io/name\":\"ack-rds-controller\"}, "+
"MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable" {
ackDeployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-controller",
Namespace: r.DBaaSReconciler.InstallNamespace,
},
}
if err := r.Client.Delete(ctx, ackDeployment); err != nil {
return err
}
log.FromContext(ctx).Info("Applied fix to the failed RDS controller v0.1.3 installation")
}
return nil
}
150 changes: 0 additions & 150 deletions controllers/dbaasplatform_controller_test.go
@@ -4,15 +4,10 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"

dbaasv1beta1 "github.com/RHEcosystemAppEng/dbaas-operator/api/v1beta1"
"github.com/operator-framework/api/pkg/operators/v1alpha1"
)

var _ = Describe("DBaaSPlatform controller", func() {
@@ -48,149 +43,4 @@ var _ = Describe("DBaaSPlatform controller", func() {
Expect(cr.Status.Conditions[0].Type).To(Equal(dbaasv1beta1.DBaaSPlatformReadyType))
})
})

Describe("install dummy secret and configmap for rds-controller upgrade", func() {
It("should find the dummy secret and configmap after installation", func() {
Eventually(func() bool {
secret := &v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-user-secrets", //#nosec G101
Namespace: testNamespace,
},
}
err := dRec.Get(ctx, client.ObjectKeyFromObject(secret), secret)
if err != nil {
return false
}
return true
}, timeout).Should(BeTrue())

Eventually(func() bool {
cm := &v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-user-config",
Namespace: testNamespace,
},
}
err := dRec.Get(ctx, client.ObjectKeyFromObject(cm), cm)
if err != nil {
return false
}
return true
}, timeout).Should(BeTrue())
})
})

Describe("delete deployment for rds-controller v0.1.3 upgrade", func() {
csv := &v1alpha1.ClusterServiceVersion{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-controller.v0.1.3",
Namespace: testNamespace,
},
Spec: v1alpha1.ClusterServiceVersionSpec{
DisplayName: "AWS Controllers for Kubernetes - Amazon RDS",
InstallStrategy: v1alpha1.NamedInstallStrategy{
StrategyName: "deployment",
StrategySpec: v1alpha1.StrategyDetailsDeployment{
DeploymentSpecs: []v1alpha1.StrategyDeploymentSpec{
{
Name: "ack-rds-controller",
Spec: appsv1.DeploymentSpec{
Replicas: pointer.Int32(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app.kubernetes.io/name": "ack-rds-controller",
},
},
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app.kubernetes.io/name": "ack-rds-controller",
},
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "controller",
Image: "quay.io/ecosystem-appeng/busybox",
ImagePullPolicy: v1.PullIfNotPresent,
Command: []string{"sh", "-c", "echo The app is running! && sleep 3600"},
},
},
},
},
},
},
},
},
},
},
}
BeforeEach(assertResourceCreation(csv))
AfterEach(assertResourceDeletion(csv))

rdsDeployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "ack-rds-controller",
Namespace: testNamespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: pointer.Int32(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"name": "ack-rds-controller",
},
},
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"name": "ack-rds-controller",
},
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "ack-rds-controller",
Image: "quay.io/ecosystem-appeng/busybox",
ImagePullPolicy: v1.PullIfNotPresent,
Command: []string{"sh", "-c", "echo The app is running! && sleep 3600"},
},
},
},
},
},
}
BeforeEach(assertResourceCreation(rdsDeployment))
AfterEach(assertResourceDeletionIfNotExists(rdsDeployment))

It("should delete the deployment for the csv error", func() {
By("making the csv in failed status")
Eventually(func() bool {
if err := dRec.Get(ctx, client.ObjectKeyFromObject(csv), csv); err != nil {
return false
}

csv.Status.Phase = v1alpha1.CSVPhaseFailed
csv.Status.Reason = v1alpha1.CSVReasonComponentFailed
csv.Status.Message = "install strategy failed: Deployment.apps \"ack-rds-controller\" is invalid: spec.selector: " +
"Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app.kubernetes.io/name\":\"ack-rds-controller\"}, " +
"MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable"

if err := dRec.Status().Update(ctx, csv); err != nil {
return false
}
return true
}, timeout).Should(BeTrue())

By("checking if the deployment is deleted")
Eventually(func() bool {
if err := dRec.Get(ctx, client.ObjectKeyFromObject(rdsDeployment), rdsDeployment); err != nil {
if errors.IsNotFound(err) {
return true
}
}
return false
}, timeout).Should(BeTrue())
})
})
})