Commit: Update docs to reflect new directory structure

Signed-off-by: Jacob Weinstock <[email protected]>
jacobweinstock committed Oct 25, 2022
1 parent ab0e8db commit 07a0c3e
Showing 6 changed files with 23 additions and 23 deletions.
28 changes: 14 additions & 14 deletions docs/CONTRIBUTING.md
@@ -23,10 +23,10 @@ Its goal is to be **_"The easiest way to setup the Tinkerbell Stack"_**.
There are two major areas of responsibility.

1. Standing up the Tinkerbell application stack
- Handled by Docker Compose ([deploy/compose/docker-compose.yml](../deploy/compose/docker-compose.yml))
- Handled by Docker Compose ([deploy/stack/compose/docker-compose.yml](../deploy/stack/compose/docker-compose.yml))
2. Standing up the infrastructure to support the Tinkerbell application stack
- [Vagrant](../deploy/vagrant/Vagrantfile)
- [Terraform](../deploy/terraform/main.tf)
- [Vagrant](../deploy/infrastructure/vagrant/Vagrantfile)
- [Terraform](../deploy/infrastructure/terraform/main.tf)

## Architecture

@@ -61,29 +61,29 @@ The sandbox architecture can be broken down into 3 distinct groups.

3. Single Run Services
- Tink Record Creation
- This [script](../deploy/compose/create-tink-records/create.sh) creates Tink records from templated files ([hardware](../deploy/compose/create-tink-records/manifests/hardware), [template](../deploy/compose/create-tink-records/manifests/template), and workflow).
- This [script](../deploy/stack/compose/create-tink-records/create.sh) creates Tink records from templated files ([hardware](../deploy/stack/compose/create-tink-records/manifests/hardware), [template](../deploy/stack/compose/create-tink-records/manifests/template), and workflow).
- Tink DB Migrations
- Built-in functionality of the Tink Server binary that creates DB schemas, tables, etc.
- TLS Setup
- This [script](../deploy/compose/generate-tls-certs/generate.sh) creates the self-signed TLS certificates for the Tink Server and the Container Registry (the same certificates are shared by both).
Valid domain names are defined in the [csr.json](../deploy/compose/generate-tls-certs/csr.json) file. By default, the value of `TINKERBELL_HOST_IP` in the [.env](../deploy/compose/.env) file is added as a valid domain name by the [generate.sh](../deploy/compose/generate-tls-certs/generate.sh) script.
- This [script](../deploy/stack/compose/generate-tls-certs/generate.sh) creates the self-signed TLS certificates for the Tink Server and the Container Registry (the same certificates are shared by both).
Valid domain names are defined in the [csr.json](../deploy/stack/compose/generate-tls-certs/csr.json) file. By default, the value of `TINKERBELL_HOST_IP` in the [.env](../deploy/stack/compose/.env) file is added as a valid domain name by the [generate.sh](../deploy/stack/compose/generate-tls-certs/generate.sh) script.
- Registry Auth
- This container (named `registry-auth` in the [docker-compose.yml](../deploy/compose/docker-compose.yml)) creates the username/password pair used to log in to the container registry.
- This container (named `registry-auth` in the [docker-compose.yml](../deploy/stack/compose/docker-compose.yml)) creates the username/password pair used to log in to the container registry.
Defaults to admin/Admin1234.
These can be customized by setting `TINKERBELL_REGISTRY_USERNAME` and `TINKERBELL_REGISTRY_PASSWORD` in the [.env](../deploy/compose/.env) file.
These can be customized by setting `TINKERBELL_REGISTRY_USERNAME` and `TINKERBELL_REGISTRY_PASSWORD` in the [.env](../deploy/stack/compose/.env) file.
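As a hedged illustration only (not taken from the repository), the relevant `.env` entries might look like the fragment below; the IP address and credentials shown are example values.
```bash
# Illustrative .env fragment (example values only).
# TINKERBELL_HOST_IP is also added to the TLS certificate as a valid name;
# the registry credentials default to admin/Admin1234.
TINKERBELL_HOST_IP=192.168.56.4
TINKERBELL_REGISTRY_USERNAME=admin
TINKERBELL_REGISTRY_PASSWORD=Admin1234
```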
- Registry Image Population
- This [script](../deploy/compose/sync-images-to-local-registry/upload.sh) uploads images to the local/internal container registry, including the tink-worker image.
Any image needed in a workflow must be added to the [registry_images.txt](../deploy/compose/sync-images-to-local-registry/registry_images.txt) file.
The [registry_images.txt](../deploy/compose/sync-images-to-local-registry/registry_images.txt) file must not contain a trailing newline, and each line must have the form `<image_name><space><image_name>`, for example:
- This [script](../deploy/stack/compose/sync-images-to-local-registry/upload.sh) uploads images to the local/internal container registry, including the tink-worker image.
Any image needed in a workflow must be added to the [registry_images.txt](../deploy/stack/compose/sync-images-to-local-registry/registry_images.txt) file.
The [registry_images.txt](../deploy/stack/compose/sync-images-to-local-registry/registry_images.txt) file must not contain a trailing newline, and each line must have the form `<image_name><space><image_name>`, for example:
```bash
quay.io/tinkerbell/tink-worker:latest tink-worker:latest
```
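For orientation only, a minimal sketch of what such an upload step could look like is shown below. This is not the actual upload.sh; the use of `$TINKERBELL_HOST_IP` as the registry address and a prior `docker login` are assumptions.
```bash
#!/usr/bin/env bash
# Hedged sketch only -- not the real upload.sh.
# Assumes the local registry is reachable at $TINKERBELL_HOST_IP and that
# `docker login` has already been performed with the registry credentials.
set -euo pipefail

# The `|| [ -n "$src" ]` guard still processes the last line even though
# registry_images.txt has no trailing newline.
while read -r src dst || [ -n "$src" ]; do
  docker pull "$src"
  docker tag "$src" "${TINKERBELL_HOST_IP}/${dst}"
  docker push "${TINKERBELL_HOST_IP}/${dst}"
done < registry_images.txt
```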
- Hook Setup
- This [script](../deploy/compose/fetch-osie/fetch.sh) downloads Hook, extracts it, and places it in the path ([deploy/compose/state/misc/osie/current](../deploy/compose/state/misc/osie/current)) that the compose service `osie-bootloader` uses for serving files.
- This [script](../deploy/stack/compose/fetch-osie/fetch.sh) downloads Hook, extracts it, and places it in the path ([deploy/stack/compose/state/misc/osie/current](../deploy/stack/compose/state/misc/osie/current)) that the compose service `osie-bootloader` uses for serving files.
Note that currently only an x86_64 Hook is published, so only x86_64 machines can be provisioned with the sandbox when using Hook.
- Ubuntu Image Setup
- This [script](../deploy/compose/fetch-and-convert-ubuntu-img/fetch.sh) handles downloading the Ubuntu focal cloud `.img` file and [converting it to a raw image](https://docs.tinkerbell.org/deploying-operating-systems/examples-ubuntu/).
- This [script](../deploy/stack/compose/fetch-and-convert-ubuntu-img/fetch.sh) handles downloading the Ubuntu focal cloud `.img` file and [converting it to a raw image](https://docs.tinkerbell.org/deploying-operating-systems/examples-ubuntu/).
This will be used with workflow action [`quay.io/tinkerbell-actions/image2disk:v1.0.0`](https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk).
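As a rough sketch of that download-and-convert step (the cloud-image URL and filenames below follow standard Ubuntu cloud-image conventions and are not taken from fetch.sh):
```bash
# Illustrative only -- not the actual fetch.sh.
# Downloads the Ubuntu Focal cloud image and converts it to a raw image,
# which is what the image2disk action expects to stream to disk.
wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
qemu-img convert -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw.img
```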

## Prerequisites
4 changes: 2 additions & 2 deletions docs/quickstarts/COMPOSE.md
@@ -34,12 +34,12 @@ You will need to bring your own machines to provision.
export TINKERBELL_CLIENT_MAC=08:00:27:9E:F5:3A
```

> Modify the [hardware.yaml](../../deploy/compose/manifests/hardware.yaml), as needed, for your machine.
> Modify the [hardware.yaml](../../deploy/stack/compose/manifests/hardware.yaml), as needed, for your machine.
4. Start the provisioner

```bash
cd deploy/compose
cd deploy/stack/compose
docker compose up -d
# This process will take about 5-10 minutes depending on your internet connection.
# Hook (OSIE) is about 400MB in size and the Ubuntu Focal image is about 500MB
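# (Hedged editorial addition, not part of the original quickstart.)
# Standard Docker Compose commands can be used to watch the stack come up:
docker compose ps
docker compose logs -f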
2 changes: 1 addition & 1 deletion docs/quickstarts/TERRAFORMEM.md
@@ -19,7 +19,7 @@ This option will also show you how to create a machine to provision.
2. Set your Equinix Metal project id and access token

```bash
cd deploy/terraform
cd deploy/infrastructure/terraform
cat << EOF > terraform.tfvars
metal_api_token = "awegaga4gs4g"
project_id = "235-23452-245-345"
4 changes: 2 additions & 2 deletions docs/quickstarts/VAGRANTLVIRT.md
@@ -36,7 +36,7 @@ This option will also show you how to create a machine to provision.
==> provisioner: Creating image (snapshot of base box volume).
==> provisioner: Creating domain with the following settings...
==> provisioner: -- Name: vagrant_provisioner
==> provisioner: -- Description: Source: /home/tink/repos/tinkerbell/sandbox/deploy/vagrant/Vagrantfile
==> provisioner: -- Description: Source: /home/tink/repos/tinkerbell/sandbox/deploy/infrastructure/vagrant/Vagrantfile
==> provisioner: -- Domain type: kvm
==> provisioner: -- Cpus: 2
==> provisioner: -- Feature: acpi
@@ -80,7 +80,7 @@ This option will also show you how to create a machine to provision.
provisioner: Removing insecure key from the guest if it's present...
provisioner: Key inserted! Disconnecting and reconnecting using new SSH key...
==> provisioner: Machine booted and ready!
==> provisioner: Rsyncing folder: /home/tink/repos/tinkerbell/sandbox/deploy/compose/ => /sandbox/compose
==> provisioner: Rsyncing folder: /home/tink/repos/tinkerbell/sandbox/deploy/stack/compose/ => /sandbox/compose
==> provisioner: Running provisioner: shell...
provisioner: Running: /tmp/vagrant-shell20221004-689177-1x7ep6c.sh
provisioner: + main 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/compose
4 changes: 2 additions & 2 deletions docs/quickstarts/VAGRANTVBOX.md
@@ -20,7 +20,7 @@ This option will also show you how to create a machine to provision.
2. Start the provisioner

```bash
cd deploy/vagrant
cd deploy/infrastructure/vagrant
vagrant up
# This process will take about 5-10 minutes depending on your internet connection.
# OSIE is about 2GB in size and the Ubuntu Focal image is about 500MB
@@ -57,7 +57,7 @@ This option will also show you how to create a machine to provision.
==> provisioner: Machine booted and ready!
==> provisioner: Checking for guest additions in VM...
==> provisioner: Mounting shared folders...
provisioner: /sandbox/compose => /private/tmp/sandbox/deploy/compose
provisioner: /sandbox/compose => /private/tmp/sandbox/deploy/stack/compose
==> provisioner: Running provisioner: shell...
provisioner: Running: /var/folders/xt/8w5g0fv54tj4njvjhk_0_25r0000gr/T/vagrant-shell20221004-97370-3zoxlv.sh
provisioner: + main 192.168.56.4 192.168.56.43 08:00:27:9e:f5:3a /sandbox/compose
4 changes: 2 additions & 2 deletions test/vagrant/vagrant_test.go
@@ -27,7 +27,7 @@ func TestVagrantSetupGuide(t *testing.T) {
machine, err := vagrant.Up(ctx,
vagrant.WithLogger(t.Logf),
vagrant.WithMachineName("provisioner"),
vagrant.WithWorkdir("../../deploy/vagrant"),
vagrant.WithWorkdir("../../deploy/infrastructure/vagrant"),
)
if err != nil {
t.Fatal(err)
@@ -118,7 +118,7 @@ func TestVagrantSetupGuide(t *testing.T) {
worker, err := vagrant.Up(ctx,
vagrant.WithLogger(t.Logf),
vagrant.WithMachineName("worker"),
vagrant.WithWorkdir("../../deploy/vagrant"),
vagrant.WithWorkdir("../../deploy/infrastructure/vagrant"),
vagrant.RunAsync(),
)
if err != nil {