1-org

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture that is described in the Google Cloud security foundations guide. The following table lists the parts of the guide.

0-bootstrap Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD Pipeline for foundations code in subsequent stages.
1-org (this file) Sets up top-level shared folders, monitoring and networking projects, and organization-level logging, and sets baseline security settings through organizational policy.
2-environments Sets up development, nonproduction, and production environments within the Google Cloud organization that you've created.
3-networks-dual-svpc Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
3-networks-hub-and-spoke Sets up base and restricted shared VPCs with all the default configuration found on step 3-networks-dual-svpc, but here the architecture will be based on the hub-and-spoke network model. It also sets up the global DNS hub.
4-projects Sets up a folder structure, projects, and application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra Deploys a Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects.

For an overview of the architecture and the parts, see the terraform-example-foundation README.

Purpose

The purpose of this step is to set up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policies.

Prerequisites

  1. Run 0-bootstrap.
  2. To enable Security Command Center notifications, choose a Security Command Center tier and create and grant permissions for the Security Command Center service account as described in Setting up Security Command Center.

Troubleshooting

See troubleshooting if you run into issues during this step.

Usage

Disclaimer: This step enables Data Access logs for all services in your organization. Enabling Data Access logs might result in your project being charged for the additional logs usage. For details on costs you might incur, go to Pricing. You can choose not to enable the Data Access logs by setting the variable data_access_logs_enabled to false.
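
For example, a minimal sketch of opting out is a single line in envs/shared/terraform.tfvars, using the variable name from this step and leaving your other values unchanged:

    # envs/shared/terraform.tfvars
    # Opt out of Data Access audit logs; this step otherwise enables them for all services.
    data_access_logs_enabled = false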

Consider the following:

  • This module creates a sink to export all logs to a Cloud Logging bucket. It also creates sinks to export a subset of security-related logs to BigQuery and Pub/Sub, which results in additional charges for those copies of logs. For the log bucket destination, logs retained for the default retention period (30 days) don't incur a storage cost. You can change the filters and sinks by modifying the configuration in envs/shared/log_sinks.tf.

  • This module implements but does not enable bucket policy retention for organization logs. If needed, enable a retention policy by configuring the log_export_storage_retention_policy variable.

  • This module implements but does not enable object versioning for organization logs. If needed, enable object versioning by setting the log_export_storage_versioning variable to true.

  • Bucket policy retention and object versioning are mutually exclusive.

  • To use the hub-and-spoke architecture described in the Networking section of the Google Cloud security foundations guide, set the enable_hub_and_spoke variable to true (see the tfvars sketch after this list).

  • If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is required for Linux, but causes problems for macOS.

  • This module manages contacts for notifications using Essential Contacts. Essential Contacts are assigned at the parent level (organization or folder) that you configure and are inherited by all child resources. You can also assign Essential Contacts directly to projects using the project-factory essential_contacts submodule. Billing notifications are sent to the mandatory group_billing_admins group. Legal and suspension notifications are sent to the mandatory group_org_admins group. If you provide all other groups, notifications are configured as described in the following table.

Group                 | Notification Category         | Fallback Group
gcp_network_viewer    | Technical                     | Org Admins
gcp_platform_viewer   | Product updates and technical | Org Admins
gcp_scc_admin         | Product updates and security  | Org Admins
gcp_security_reviewer | Security and technical        | Org Admins
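
The log-export and networking options in the list above are ordinary Terraform input variables for this step. As a non-authoritative sketch, assuming the variable names used in this README, they could be set in envs/shared/terraform.tfvars like this:

    # envs/shared/terraform.tfvars (illustrative sketch; values are examples only)
    # Enable object versioning on the organization log export bucket.
    log_export_storage_versioning = true
    # Use the hub-and-spoke network model in the later 3-networks steps.
    enable_hub_and_spoke = true
    # Note: a bucket retention policy (log_export_storage_retention_policy) is
    # mutually exclusive with versioning; see the shared folder README.md for
    # that variable's exact structure before setting it.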

This module creates and applies tags to the common, network, and bootstrap folders. These tags are also applied to the environment folders created in step 2-environments. You can create your own tags by editing the local.tags map in tags.tf and following the commented template (a hypothetical example follows the table below). The following table describes the tags that are applied to resources:

Resource                  | Type   | Step           | Tag Key     | Tag Value
bootstrap                 | folder | 1-org          | environment | bootstrap
common                    | folder | 1-org          | environment | production
network                   | folder | 1-org          | environment | production
environment development   | folder | 2-environments | environment | development
environment nonproduction | folder | 2-environments | environment | nonproduction
environment production    | folder | 2-environments | environment | production
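
If you add your own tag, the entry goes into the local.tags map in tags.tf. The snippet below is only a hypothetical illustration of the idea; the attribute names are placeholders, so follow the commented template in tags.tf for the structure the module actually expects:

    # tags.tf (hypothetical entry; mirror the commented template in the file)
    locals {
      tags = {
        # ...existing entries, such as the environment tag applied to the bootstrap folder...

        # Hypothetical custom tag; the attribute names below are placeholders.
        cost_center = {
          tag_key_short_name   = "cost-center"
          tag_value_short_name = "cc-1234"
        }
      }
    }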

Deploying with Cloud Build

  1. Clone the gcp-org repo based on the Terraform output from the 0-bootstrap step. Clone the repo at the same level as the terraform-example-foundation folder. If required, run terraform output cloudbuild_project_id in the 0-bootstrap folder to get the Cloud Build project ID.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="terraform-example-foundation/0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    gcloud source repos clone gcp-org --project=${CLOUD_BUILD_PROJECT_ID}

    Note: The message warning: You appear to have cloned an empty repository. is normal and can be ignored.

  2. Navigate into the repo, change to a nonproduction branch, and copy the contents of foundation to the new repo. All subsequent steps assume you are running them from the gcp-org directory. If you run them from another directory, adjust your copy paths accordingly.

    cd gcp-org
    git checkout -b plan
    
    cp -RT ../terraform-example-foundation/1-org/ .
    cp ../terraform-example-foundation/build/cloudbuild-tf-* .
    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename ./envs/shared/terraform.example.tfvars to ./envs/shared/terraform.tfvars

    mv ./envs/shared/terraform.example.tfvars ./envs/shared/terraform.tfvars
  4. Check if a Security Command Center notification with the default name, scc-notify, already exists. If it exists, choose a different value for the scc_notification_name variable in the ./envs/shared/terraform.tfvars file.

    export ORGANIZATION_ID=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -json common_config | jq '.org_id' --raw-output)
    gcloud scc notifications describe "scc-notify" --organization=${ORGANIZATION_ID}
  5. Check if your organization already has an Access Context Manager policy.

    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
  6. Update the envs/shared/terraform.tfvars file with values from your environment and the 0-bootstrap step. If the previous step showed a numeric value, uncomment the line create_access_context_manager_access_policy = false. See the shared folder README.md for additional information on the values in the terraform.tfvars file.

    export backend_bucket=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./envs/shared/terraform.tfvars
    
    if [ ! -z "${ACCESS_CONTEXT_MANAGER_ID}" ]; then sed -i'' -e "s=//create_access_context_manager_access_policy=create_access_context_manager_access_policy=" ./envs/shared/terraform.tfvars; fi
  7. Commit changes.

    git add .
    git commit -m 'Initialize org repo'
  8. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply. Review the plan output in your Cloud Build project. https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git push --set-upstream origin plan
  9. Merge changes to the production branch. Because the production branch is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project. https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b production
    git push origin production
  10. Proceed to the 2-environments step.

Troubleshooting: If you received a PERMISSION_DENIED error while running the gcloud access-context-manager or the gcloud scc notifications commands, you can append the following to run the command as the Terraform service account:

--impersonate-service-account=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -raw organization_step_terraform_service_account_email)

Deploying with Jenkins

See 0-bootstrap README-Jenkins.md.

Deploying with GitHub Actions

See 0-bootstrap README-GitHub.md.

Running Terraform locally

  1. The following instructions assume that you are at the same level as the terraform-example-foundation folder. Change into the 1-org folder, copy the Terraform wrapper script, and ensure it can be executed.

    cd terraform-example-foundation/1-org
    cp ../build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  2. Rename ./envs/shared/terraform.example.tfvars to ./envs/shared/terraform.tfvars.

    mv ./envs/shared/terraform.example.tfvars ./envs/shared/terraform.tfvars
  3. Check if a Security Command Center notification with the default name, scc-notify, already exists. If it exists, choose a different value for the scc_notification_name variable in the ./envs/shared/terraform.tfvars file.

    export ORGANIZATION_ID=$(terraform -chdir="../0-bootstrap/" output -json common_config | jq '.org_id' --raw-output)
    gcloud scc notifications describe "scc-notify" --organization=${ORGANIZATION_ID}
  4. Check if your organization already has an Access Context Manager policy.

    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
  5. Update the envs/shared/terraform.tfvars file with values from your environment and the 0-bootstrap step. If the previous step showed a numeric value, uncomment the line create_access_context_manager_access_policy = false. See the shared folder README.md for additional information on the values in the terraform.tfvars file.

    export backend_bucket=$(terraform -chdir="../0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./envs/shared/terraform.tfvars
    
    if [ ! -z "${ACCESS_CONTEXT_MANAGER_ID}" ]; then sed -i'' -e "s=//create_access_context_manager_access_policy=create_access_context_manager_access_policy=" ./envs/shared/terraform.tfvars; fi

You can now deploy your environment (production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 1-org step, and only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, follow the instructions to install the terraform-tools component.

  1. Use terraform output to get the Cloud Build project ID and the organization step Terraform service account from the 0-bootstrap output. The GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable is set to the Terraform service account so that Terraform runs with impersonation enabled.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../0-bootstrap/" output -raw organization_step_terraform_service_account_email)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  2. Run init and plan and review the output.

    ./tf-wrapper.sh init production
    ./tf-wrapper.sh plan production
  3. Run validate and resolve any violations.

    ./tf-wrapper.sh validate production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  4. Run apply production.

    ./tf-wrapper.sh apply production

If you receive errors or make changes to the Terraform config or terraform.tfvars, re-run ./tf-wrapper.sh plan production before you run ./tf-wrapper.sh apply production.

Before executing the next stages, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT