added exception when cluster.fqdn missing (#85)
* added exception when cluster.fqdn missing

* [RELEASE] - Release version 1.11.8

* added option to continue with current context

* added exception error msg

* Revert "[RELEASE] - Release version 1.11.8"

This reverts commit 29a5c57.
jewee authored Oct 18, 2019
1 parent 03abc80 commit 4de48ad
Showing 1 changed file with 22 additions and 0 deletions.
22 changes: 22 additions & 0 deletions src/ops/cli/helmfile.py
@@ -11,7 +11,10 @@

import logging
import os
import six
import sys

from kubernetes import config
from ops.cli.parser import SubParserConfig
from ops.hierarchical.composition_config_generator import CompositionConfigGenerator

@@ -87,6 +90,25 @@ def setup_kube_config(self, data):
            file_location = self.generate_eks_kube_config(
                cluster_name, aws_profile, region)
            os.environ['KUBECONFIG'] = file_location
        else:
            logger.info('cluster.fqdn key missing in cluster definition'
                        '\n unable to generate EKS kubeconfig\n '
                        'looking for system kubernetes context')

            try:
                contexts, active_context = config.list_kube_config_contexts()
            except Exception as ex:
                logger.error('could not find system kubeconfig context')
                logger.error(ex)
                sys.exit()

            logger.info('current default context is:\n %s '
                        '\n do you want to proceed with this context?',
                        active_context)
            answer = six.moves.input("Only 'yes' will be accepted to approve.\n Enter a value:")
            if answer != "yes":
                sys.exit()

@amuraru commented on Oct 19, 2019 (Contributor):

@jewee I missed this PR. I don't think we should ever fall back to the current context; prompting the user is overly complex, in my opinion. Let's just fail if the context cannot be created from the config.

The other question I have is: how can cluster.fqdn be missing at all? Is this actually a config value the user can override? It shouldn't be, and we should generate this fqdn automatically from the other fields.

@jewee commented on Oct 21, 2019 (Author, Contributor):

My initial commit was exactly that: a simple exception when the expected key was missing (9724f25). After talking with @costimuraru we decided to leave open the possibility of running helmfile with the default context, in case ops is used outside our EKS flow, and to add this confirmation gateway. If @costimuraru agrees, we can take the simpler but stricter route and just exit with an error when the key is missing.

For the second question: cluster.fqdn is missing from the generated helmfile because of filtering added by @danielcoman, where only the helm map is finally rendered in the helmfile composition. We can keep this filtering, since the only key we need is cluster.fqdn and it actually exists (as pointed out by @danielcoman) under the helm.global.fqdn key. I will change ops to verify this key instead of cluster.fqdn.

@amuraru commented on Oct 21, 2019 (Contributor):

Thanks @jewee for the context.

For 1: I see, it makes sense to run `ops /path/to/cluster/ helm ...` on a non-EKS cluster, I agree, but I would still require the context of the target cluster to be well defined and mandatory for this.

For 2: yes, please file an issue and let's fix this asap.


    def generate_eks_kube_config(self, cluster_name, aws_profile, region):
        file_location = self.get_tmp_file()
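The confirmation gateway added in this diff can be factored into a small, testable function. This is a minimal sketch, not the actual patch: the real code inlines the prompt in `setup_kube_config` and uses `six.moves.input`; the `confirm_context` helper and its injectable `ask` parameter are hypothetical, introduced here so the prompt can be stubbed instead of blocking on stdin.

```python
import logging

logger = logging.getLogger(__name__)


def confirm_context(active_context, ask=input):
    """Ask the user to approve falling back to the current kube context.

    Returns True only when the user types exactly 'yes', mirroring the
    strict confirmation gateway in the patch.
    """
    logger.info('current default context is:\n %s '
                '\n do you want to proceed with this context?',
                active_context)
    answer = ask("Only 'yes' will be accepted to approve.\n Enter a value:")
    return answer == "yes"
```

Passing the prompt function as a parameter keeps the interactive dependency at the edge, so a test can supply `ask=lambda _: 'yes'` while production code uses the default.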

0 comments on commit 4de48ad

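The review thread settles on verifying `helm.global.fqdn` rather than `cluster.fqdn`, and on failing fast when the key cannot be found. A minimal sketch of that lookup, assuming hypothetical `get_nested` and `require_fqdn` helpers (not part of the actual patch):

```python
def get_nested(data, path):
    """Walk a dot-separated key path through nested dicts; None if absent."""
    node = data
    for key in path.split('.'):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node


def require_fqdn(data):
    """Prefer helm.global.fqdn, fall back to cluster.fqdn, else fail fast
    with an explicit error, as suggested in the review thread."""
    fqdn = get_nested(data, 'helm.global.fqdn') or get_nested(data, 'cluster.fqdn')
    if fqdn is None:
        raise KeyError('helm.global.fqdn (or cluster.fqdn) missing in cluster definition')
    return fqdn
```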