do the remaining helmvm -> embedded-cluster renamings (#153)
laverya authored Oct 26, 2023
1 parent 355e095 commit 19f4027
Showing 25 changed files with 131 additions and 128 deletions.
27 changes: 15 additions & 12 deletions README.md
@@ -1,8 +1,11 @@
# HelmVM platform
# Embedded Cluster platform

This repository houses a cluster installation prototype that utilizes the k0s and k0sctl platforms. It showcases an alternative approach to deploying clusters and serves as a starting point for further exploration and advancement. In HelmVM, all components and functionalities are consolidated into a single binary, this binary facilitates a streamlined cluster installation process, removing the need for external dependencies (rpms, debs, etc). Remote hosts are managed using SSH.
This repository houses a cluster installation prototype that utilizes the k0s and k0sctl platforms.
It showcases an alternative approach to deploying clusters and serves as a starting point for further exploration and advancement.
In Embedded Cluster, all components and functionalities are consolidated into a single binary; this binary facilitates a streamlined cluster installation process, removing the need for external dependencies (rpms, debs, etc.).
Remote hosts are managed using SSH.

HelmVM includes by default the Kots Admin Console and the OpenEBS Storage provisioner, you can very easily embed your own Helm Chart to the binary.
Embedded Cluster includes by default the Kots Admin Console and the OpenEBS Storage provisioner; you can easily embed your own Helm Chart into the binary.

## Building and running

@@ -17,7 +20,7 @@ You can also build binaries for other architectures with the following targets:

## Single node deployment

To create a single node deployment you can upload the HelmVM binary to a Linux x86_64 machine and run:
To create a single node deployment you can upload the Embedded Cluster binary to a Linux x86_64 machine and run:

```
$ ./embedded-cluster install
@@ -35,7 +38,7 @@ In this case, it's not necessary to execute this command exclusively on a Linux

## Deploying Individual Nodes

HelmVM also facilitates deploying individual nodes through the use of tokens, deviating from the centralized approach.
Embedded Cluster also facilitates deploying individual nodes through the use of tokens, deviating from the centralized approach.
To follow this path, you need to exclude yourself from the centralized management facilitated via SSH.

### Installing a Multi-Node Setup using Token-Based Deployment
@@ -73,18 +76,18 @@ Copy the command provided and run it on the server you wish to join to the cluster:
server-1# embedded-cluster node join --role "controller" "<token redacted>"
```

For this to function, you must ensure that the HelmVM binary is present on all nodes within the cluster.
For this to function, you must ensure that the Embedded Cluster binary is present on all nodes within the cluster.
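As a sketch (hypothetical host name; assumes root SSH access and the same destination path used by the e2e setup), staging the binary on an additional node before joining it might look like:

```
# copy the binary to the node that will join the cluster
$ scp ./embedded-cluster root@server-2:/usr/local/bin/embedded-cluster
# then run the join command printed by the token step on that node
$ ssh root@server-2 'embedded-cluster node join --role "controller" "<token redacted>"'
```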


### Upgrading clusters

If your installation employs centralized management, simply download the newer version of HelmVM and execute:
If your installation employs centralized management, simply download the newer version of Embedded Cluster and execute:

```
$ embedded-cluster apply
```

For installations without centralized management, download HelmVM, upload it to each server in your cluster, and execute the following command as **root** on each server:
For installations without centralized management, download Embedded Cluster, upload it to each server in your cluster, and execute the following command as **root** on each server:

```
# embedded-cluster node upgrade
@@ -116,14 +119,14 @@ ubuntu@ip-172-16-10-242:~/.embedded-cluster/etc$

## Embedding your own Helm Chart

HelmVM allows you to embed your own Helm Charts so they are installed by default when the cluster is installed or updated. For sake of documenting this let's create a hypothetical scenario: you have a software called `rocks` that is packaged as a Helm Chart and is ready to be installed in any Kubernetes Cluster.
Embedded Cluster allows you to embed your own Helm Charts so they are installed by default when the cluster is installed or updated. For the sake of documenting this, let's create a hypothetical scenario: you have software called `rocks` that is packaged as a Helm Chart and is ready to be installed in any Kubernetes cluster.

Your Helm Chart is in a file called `rocks-1.0.0.tgz` and you already have a copy of HelmVM binary in your $PATH. To embed your Chart you can run:
Your Helm Chart is in a file called `rocks-1.0.0.tgz` and you already have a copy of the Embedded Cluster binary in your $PATH. To embed your Chart you can run:

```
$ embedded-cluster embed --chart rocks-1.0.0.tgz --output rocks
```
This command will create a binary called `rocks` in the current directory, this command is a copy of HelmVM binary with your Helm Chart embedded into it. You can then use the `rocks` binary to install a cluster that automatically deploys your `rocks-1.0.0.tgz` Helm Chart.
This command will create a binary called `rocks` in the current directory; it is a copy of the Embedded Cluster binary with your Helm Chart embedded into it. You can then use the `rocks` binary to install a cluster that automatically deploys your `rocks-1.0.0.tgz` Helm Chart.
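For example, reusing the single-node flow shown earlier (a sketch; assumes a Linux x86_64 host), the embedded binary is run the same way as the stock one:

```
$ ./rocks install
```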

If you want to provide a customised `values.yaml` during the Helm Chart installation you can also embed it into the binary. You can do that with the following command:

@@ -148,7 +151,7 @@ $ embedded-cluster embed \

## Miscellaneous

HelmVM stores its data under `$HOME/.embedded-cluster` directory, you may want to create a backup of the directory, specially the `$HOME/.embedded-cluster/etc` directory. Inside the `$HOME/.embedded-cluster/etc` directory you will find the `k0sctl.yaml` and the `kubeconfig` files, the first is used when installing or upgrading a cluster and the latter is used when accessing the cluster with `kubectl` (a copy of `kubectl` is also kept under `$HOME/.embedded-cluster/bin` directory and you may want to include it into your PATH).
Embedded Cluster stores its data under the `$HOME/.embedded-cluster` directory. You may want to create a backup of this directory, especially the `$HOME/.embedded-cluster/etc` directory. Inside `$HOME/.embedded-cluster/etc` you will find the `k0sctl.yaml` and `kubeconfig` files; the first is used when installing or upgrading a cluster and the latter is used when accessing the cluster with `kubectl` (a copy of `kubectl` is also kept under the `$HOME/.embedded-cluster/bin` directory, and you may want to include it in your PATH).

If you want to use an already existing `k0sctl.yaml` configuration during the `install` command you can do so by using the `--config` flag.
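
As an illustrative sketch (paths taken from this section; the exact `--config` argument form is an assumption), a backup and a config-reusing install might look like:

```
# back up the state directory, especially the etc subdirectory
$ cp -r "$HOME/.embedded-cluster/etc" ./embedded-cluster-etc-backup

# use the bundled kubectl with the generated kubeconfig
$ export KUBECONFIG="$HOME/.embedded-cluster/etc/kubeconfig"
$ export PATH="$PATH:$HOME/.embedded-cluster/bin"
$ kubectl get nodes

# reuse an existing k0sctl.yaml during installation
$ ./embedded-cluster install --config ./k0sctl.yaml
```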

4 changes: 2 additions & 2 deletions cmd/embedded-cluster/install.go
@@ -301,7 +301,7 @@ func runK0sctlApply(ctx context.Context) error {
message = fmt.Sprintf("Phase: %s", message)
return message
}
bin := defaults.PathToHelmVMBinary("k0sctl")
bin := defaults.PathToEmbeddedClusterBinary("k0sctl")
loading := pb.Start(pb.WithMask(mask))
defer func() {
loading.Closef("Finished applying cluster configuration")
@@ -317,7 +317,7 @@ func runK0sctlApply(ctx context.Context) error {
// under a file called "kubeconfig" inside defaults.ConfigSubDir(). XXX File
// is overwritten, no questions asked.
func runK0sctlKubeconfig(ctx context.Context) error {
bin := defaults.PathToHelmVMBinary("k0sctl")
bin := defaults.PathToEmbeddedClusterBinary("k0sctl")
cfgpath := defaults.PathToConfig("k0sctl.yaml")
if _, err := os.Stat(cfgpath); err != nil {
return fmt.Errorf("cluster configuration not found")
6 changes: 3 additions & 3 deletions cmd/embedded-cluster/node.go
@@ -32,7 +32,7 @@ var nodeStopCommand = &cli.Command{
}
kcfg := defaults.PathToConfig("kubeconfig")
os.Setenv("KUBECONFIG", kcfg)
bin := defaults.PathToHelmVMBinary("kubectl")
bin := defaults.PathToEmbeddedClusterBinary("kubectl")
cmd := exec.Command(bin, "drain", "--ignore-daemonsets", node)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
@@ -51,7 +51,7 @@ var nodeStartCommand = &cli.Command{
}
kcfg := defaults.PathToConfig("kubeconfig")
os.Setenv("KUBECONFIG", kcfg)
bin := defaults.PathToHelmVMBinary("kubectl")
bin := defaults.PathToEmbeddedClusterBinary("kubectl")
cmd := exec.Command(bin, "uncordon", node)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
@@ -66,7 +66,7 @@ var nodeListCommand = &cli.Command{
Action: func(c *cli.Context) error {
kcfg := defaults.PathToConfig("kubeconfig")
os.Setenv("KUBECONFIG", kcfg)
bin := defaults.PathToHelmVMBinary("kubectl")
bin := defaults.PathToEmbeddedClusterBinary("kubectl")
cmd := exec.Command(bin, "get", "nodes", "-o", "wide")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
2 changes: 1 addition & 1 deletion cmd/embedded-cluster/shell.go
@@ -72,7 +72,7 @@ var shellCommand = &cli.Command{
config := fmt.Sprintf("export KUBECONFIG=%q\n", kcpath)
_, _ = shellpty.WriteString(config)
_, _ = io.CopyN(io.Discard, shellpty, int64(len(config)+1))
bindir := defaults.HelmVMBinsSubDir()
bindir := defaults.EmbeddedClusterBinsSubDir()
config = fmt.Sprintf("export PATH=\"$PATH:%s\"\n", bindir)
_, _ = shellpty.WriteString(config)
_, _ = io.CopyN(io.Discard, shellpty, int64(len(config)+1))
4 changes: 2 additions & 2 deletions cmd/embedded-cluster/upgrade.go
@@ -18,7 +18,7 @@ import (
"github.com/replicatedhq/embedded-cluster/pkg/prompts"
)

func stopHelmVM() error {
func stopEmbeddedCluster() error {
cmd := exec.Command("k0s", "stop")
stdout := bytes.NewBuffer(nil)
stderr := bytes.NewBuffer(nil)
@@ -121,7 +121,7 @@ var upgradeCommand = &cli.Command{
return err
}
logrus.Infof("Stopping %s", defaults.BinaryName())
if err := stopHelmVM(); err != nil {
if err := stopEmbeddedCluster(); err != nil {
err := fmt.Errorf("unable to stop: %w", err)
metrics.ReportNodeUpgradeFailed(c.Context, err)
return err
20 changes: 10 additions & 10 deletions e2e/cluster/cluster.go
@@ -43,14 +43,14 @@ func init() {
// Input are the options passed in to the cluster creation plus some data
// for internal consumption only.
type Input struct {
Nodes int
SSHPublicKey string
SSHPrivateKey string
HelmVMPath string
Image string
network string
T *testing.T
id string
Nodes int
SSHPublicKey string
SSHPrivateKey string
EmbeddedClusterPath string
Image string
network string
T *testing.T
id string
}

// File holds information about a file that must be uploaded to a node.
@@ -192,7 +192,7 @@ func CopyFilesToNode(in *Input, node string) {
Mode: 0600,
},
{
SourcePath: in.HelmVMPath,
SourcePath: in.EmbeddedClusterPath,
DestPath: "/usr/local/bin/embedded-cluster",
Mode: 0755,
},
@@ -407,7 +407,7 @@ func CreateProfile(in *Input) {
request := api.ProfilesPost{
Name: fmt.Sprintf("profile-%s", in.id),
ProfilePut: api.ProfilePut{
Description: fmt.Sprintf("HelmVM test cluster (%s)", in.id),
Description: fmt.Sprintf("Embedded Cluster test cluster (%s)", in.id),
Config: map[string]string{
"raw.lxc": profileConfig,
},
12 changes: 6 additions & 6 deletions e2e/embed_test.go
@@ -9,12 +9,12 @@ import (
func AndInstall(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh in node 0")
84 changes: 42 additions & 42 deletions e2e/install_test.go
@@ -9,12 +9,12 @@ import (
func TestSingleNodeInstallation(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh on node 0")
@@ -35,12 +35,12 @@ func TestSingleNodeInstallation(t *testing.T) {
func TestSingleNodeInstallationRockyLinux8(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "rockylinux/8",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "rockylinux/8",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh on node 0")
@@ -62,12 +62,12 @@ func TestSingleNodeInstallationRockyLinux8(t *testing.T) {
func TestSingleNodeInstallationDebian12(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "debian/12",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "debian/12",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh on node 0")
@@ -89,12 +89,12 @@ func TestSingleNodeInstallationDebian12(t *testing.T) {
func TestSingleNodeInstallationCentos8Stream(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "centos/8-Stream",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "centos/8-Stream",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh on node 0")
@@ -117,12 +117,12 @@ func TestMultiNodeInteractiveInstallation(t *testing.T) {
t.Parallel()
t.Log("creating cluster")
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 3,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 3,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
for i := range tc.Nodes {
@@ -155,12 +155,12 @@ func TestMultiNodeInteractiveInstallation(t *testing.T) {
func TestInstallWithDisabledAddons(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "ubuntu/jammy",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh in node 0")
@@ -181,12 +181,12 @@ func TestInstallWithDisabledAddons(t *testing.T) {
func TestHostPreflight(t *testing.T) {
t.Parallel()
tc := cluster.NewTestCluster(&cluster.Input{
T: t,
Nodes: 1,
Image: "centos/8-Stream",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
HelmVMPath: "../output/bin/embedded-cluster",
T: t,
Nodes: 1,
Image: "centos/8-Stream",
SSHPublicKey: "../output/tmp/id_rsa.pub",
SSHPrivateKey: "../output/tmp/id_rsa",
EmbeddedClusterPath: "../output/bin/embedded-cluster",
})
defer tc.Destroy()
t.Log("installing ssh and binutils on node 0")
2 changes: 1 addition & 1 deletion e2e/scripts/addons-only.sh
@@ -105,7 +105,7 @@ main() {
fi
}

export HELMVM_METRICS_BASEURL="https://staging.replicated.app"
export EMBEDDED_CLUSTER_METRICS_BASEURL="https://staging.replicated.app"
export KUBECONFIG=/root/.config/.embedded-cluster/etc/kubeconfig
export PATH=$PATH:/root/.config/.embedded-cluster/bin
main
2 changes: 1 addition & 1 deletion e2e/scripts/embed-and-install.sh
@@ -138,7 +138,7 @@ main() {
fi
}

export HELMVM_METRICS_BASEURL="https://staging.replicated.app"
export EMBEDDED_CLUSTER_METRICS_BASEURL="https://staging.replicated.app"
export KUBECONFIG=/root/.config/.embedded-cluster/etc/kubeconfig
export PATH=$PATH:/root/.config/.embedded-cluster/bin
main
2 changes: 1 addition & 1 deletion e2e/scripts/embedded-preflight.sh
@@ -184,7 +184,7 @@ main() {
fi
}

export HELMVM_METRICS_BASEURL="https://staging.replicated.app"
export EMBEDDED_CLUSTER_METRICS_BASEURL="https://staging.replicated.app"
export KUBECONFIG=/root/.config/.embedded-cluster/etc/kubeconfig
export PATH=$PATH:/root/.config/.embedded-cluster/bin
main
2 changes: 1 addition & 1 deletion e2e/scripts/install-with-disabled-addons.sh
@@ -89,7 +89,7 @@ main() {
fi
}

export HELMVM_METRICS_BASEURL="https://staging.replicated.app"
export EMBEDDED_CLUSTER_METRICS_BASEURL="https://staging.replicated.app"
export KUBECONFIG=/root/.config/.embedded-cluster/etc/kubeconfig
export PATH=$PATH:/root/.config/.embedded-cluster/bin
main
4 changes: 2 additions & 2 deletions e2e/scripts/interactive-multi-node-install.exp
@@ -9,8 +9,8 @@ proc configure_node {address} {
expect "Type one of the options above:" { send "/root/.ssh/id_rsa\r" }
}

set env(HELMVM_METRICS_BASEURL) "https://staging.replicated.app"
set env(HELMVM_PLAIN_PROMPTS) "true"
set env(EMBEDDED_CLUSTER_METRICS_BASEURL) "https://staging.replicated.app"
set env(EMBEDDED_CLUSTER_PLAIN_PROMPTS) "true"
spawn embedded-cluster install --multi-node
configure_node "10.0.0.2"
expect -re "Add another node?.*:" { send "y\r" }