Workspaces

Note

Workspaces can be created as code through the Scalr Terraform Provider.

Introduction

A workspace is where all objects related to Terraform-managed resources are stored and managed. This is where you can find state, update the Terraform version, manage variables, see run history, and much more. A workspace can be as simple as a single resource or as complex as a monolithic configuration; it all depends on your use case and the directories the workspace is linked to. A workspace can be integrated with your existing workflow, whether that is a GitOps approach through a VCS provider, the native Terraform CLI, or modules deployed directly from the UI. All workspace types follow the same pipeline:

Plan, which allows users to view the planned creation, update, or destruction of resources through the standard console output or the detailed plan view. The detailed plan view alerts users to destructive changes and makes it easier to search the plan when many resources are affected. It is also the place to check who approved an apply and any comments associated with the approval.

_images/visual_dry_runs.png

Cost Estimate, which shows the estimated cost of the resources being created. This information can also be used for writing a policy to check cost.

_images/cost_estimation.png

Policy Check, which checks the Terraform plan JSON output against Open Policy Agent policies. This step can be used to enforce your company standards.

_images/policy_check_run.png

Apply, which will actually create, update, or destroy the resources based on the Terraform configuration file.

_images/run_apply.png

VCS Integrated Workspace

Scalr workspaces can be linked to Terraform configurations held in VCS repositories in order to automate pull request (PR) checks and deployments.

When a workspace is linked to a specific branch of the repository, a webhook is created in the VCS provider that POSTs to Scalr for every PR and every commit/merge that affects that branch. Scalr then automatically performs a “dry run” (terraform plan) for every PR and a run (terraform apply) for every commit/merge.

There are two basic steps to get started with a VCS integrated workspace:

  1. Create the workspace

  2. Set the variables (if necessary)

Create Workspace

To enable a VCS based workspace, create a workspace linked to a specific repo and branch:

_images/new_ws_vcs.png

Workspaces can be updated with the following settings:

  • Branch

    • The branch of the repository that the workspace should point to.

  • Configuration Version Root

    • Set this field if the repository contains multiple Terraform configurations in sub-directories. Terraform will only run, and change resources, when a change is made within the specified sub-directory.

    • By default, runs are only triggered by changes inside the configuration version root. To override this behavior, use trigger prefixes (see below).

  • Terraform Work Directory

    • This is where Terraform actually runs.

    • This directory must be a subdirectory of the top level of the repository, or of the configuration version root if one is specified.

  • Triggers

    • Use trigger prefixes if there are folders other than the working directory whose changes should also trigger a run (see the example after this list).

    • Trigger prefixes require the working directory to be set, and the working directory is always itself a trigger prefix.
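For example, here is a minimal sketch of these settings expressed as code through the Scalr Terraform Provider. The repository identifier, branch, and directory names are hypothetical; check the provider documentation for the exact attribute schema:

resource "scalr_workspace" "production" {
  name              = "network-production"
  environment_id    = "<environment-id>"
  vcs_provider_id   = "<vcs-provider-id>"

  # Terraform actually runs inside this directory.
  working_directory = "environments/production"

  vcs_repo {
    identifier       = "my-org/infrastructure"  # hypothetical repository
    branch           = "main"
    # Changes under modules/network also trigger runs, in addition to
    # changes inside the working directory itself.
    trigger_prefixes = ["modules/network"]
  }
}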

At this point, a run can be triggered. If your configuration requires variables to be set, you will be prompted for them.

Set Terraform Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, these must be given values in the Scalr workspace via the UI. Scalr will automatically create the variables.

_images/ws_vars_values.png

If the configuration contains any *.auto.tfvars files, these provide default variable values that Terraform automatically uses.

If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values take precedence. For map variables, the values in *.auto.tfvars are merged with the values of the same-named variable in the workspace.
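As an illustration (the variable names here are hypothetical), suppose the configuration ships a defaults.auto.tfvars file alongside the code:

# defaults.auto.tfvars
instance_type = "t3.micro"
tags = {
  team = "platform"
}

If the workspace defines instance_type = "t3.large" and tags = { env = "prod" }, the run uses instance_type = "t3.large" (the workspace value wins), while tags resolves to the merged map { team = "platform", env = "prod" }.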

Set Shell Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials and other provider parameters, these must be set in Scalr.

Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.

It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

_images/ms_var_4.png
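For example, here is a hedged sketch of the pattern just described, using the terraform_remote_state data source together with the Scalr provider's scalr_variable resource. The workspace name, output name, and IDs are hypothetical; check the provider documentation for the exact schema:

data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    hostname     = "<account>.scalr.io"
    organization = "<environment-id>"
    workspaces = {
      name = "network-production"
    }
  }
}

# Publish a workspace output as an environment-level shell variable so
# that every workspace in the environment can read it.
resource "scalr_variable" "vpc_id" {
  key            = "VPC_ID"
  value          = data.terraform_remote_state.network.outputs.vpc_id
  category       = "shell"
  environment_id = "<environment-id>"
}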

See Variables for full details.

Dry Runs

Scalr will automatically kick off dry runs when a PR is opened against a branch that is linked to a Scalr workspace. Scalr will report the checks back into your VCS provider.

_images/vcs_dry_1.png

Runs

All runs for a given workspace will be displayed in the runs tab. For VCS-driven runs, a commit hash is provided; clicking it shows exactly what code changes preceded each deployment, across the entire history of the workspace.

_images/vcs_runs.png

Video

More of a visual learner? Check out this feature on YouTube.


CLI Driven Workspace

If you already have a workflow that utilizes the native Terraform CLI, or you just want the flexibility of using it, you can continue to do so: the CLI is fully supported in Scalr. Scalr will execute the runs in a container in the Scalr backend, but the logs and output will still be sent back to your console.

To utilize Scalr as a remote backend, there are a few simple steps:

  1. Obtain an API token from Scalr

  2. Add a backend configuration to a Terraform configuration

  3. Create the workspace in the UI or through terraform init

API Token

  1. In the environment, create a token by clicking on your profile:

    _images/api_token.png
  2. Add the token to the CLI Configuration file:

Windows: terraform.rc in the %APPDATA% directory.

Unix/Linux/macOS: ~/.terraformrc

credentials "<account>.scalr.io" {
  token = "<user-token>"
}
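Alternatively, if the host supports Terraform's login protocol, Terraform 0.12.21 and later can create the credentials entry for you; terraform login prompts for the token and stores it in the CLI credentials file (credentials.tfrc.json):

terraform login <account>.scalr.io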

Backend Terraform Configuration

  1. To enable the backend configuration, you’ll need the Organization ID, which can be found on the main environment dashboard:

    _images/org_id.png
  2. Next, add the terraform block to the Terraform configuration. The hostname is the URL of your account; enter the Organization ID and a workspace name of your choice:

terraform {
  backend "remote" {
    hostname = "<account>.scalr.io"
    organization = "<id of the environment>"  // e.g. org-t4737mmm538mabc
    workspaces {
      name = "<workspace-name>"
    }
  }
}
  3. Lastly, run terraform init, which will create the workspace. From this point forward, you can use the CLI as you normally would, and commands will be executed remotely in Scalr.

If there is an existing state file on the local system, or state was previously stored in another remote backend, the terraform init command will automatically migrate the state to Scalr. See Migrating to Scalr for more details.
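With the backend block above in place, a typical first session looks like this:

terraform init     # creates the workspace (and migrates any existing state)
terraform plan     # executes remotely in Scalr; output streams back to the console
terraform apply    # executes remotely and updates the state stored in Scalr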

Warning

Version Mismatch. If the workspace is pre-created manually in Scalr and the Terraform version of the workspace does not match the version of the CLI, the following error will be displayed:
Error reading local state: state snapshot was created by Terraform vx.x.x, which is newer than current vx.x.x;
If you see this error, ensure the Terraform version of the CLI matches the Terraform version of the workspace.

Set Terraform Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, these must be given values in the Scalr workspace via the UI. Scalr will automatically create the variables.

_images/ws_vars_values.png

If the configuration contains any *.auto.tfvars files, these provide default variable values that Terraform automatically uses.

If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values take precedence. For map variables, the values in *.auto.tfvars are merged with the values of the same-named variable in the workspace.

Set Shell Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials and other provider parameters, these must be set in Scalr.

Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.

It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

_images/ms_var_4.png

See Variables for full details.

Dry Runs

You have the option of running terraform plan, which will execute remotely but still return the results to your local client. The plan is helpful for understanding what to expect when you execute an apply.

_images/cli_plan.png

Runs

All runs for a given workspace will be displayed in the runs tab. CLI runs are labeled as CLI-driven, along with the username of the person who created the run. This serves as a good way to audit the changes that have occurred within a workspace.

_images/cli_runs.png

Supported CLI Commands

The Scalr remote backend supports the following Terraform CLI commands. Scalr only supports the CLI for versions >= 0.12.0:

  • apply

  • console

  • destroy

  • fmt

  • get

  • graph

  • import

  • init

  • output

  • plan

  • providers

  • show

  • state

  • taint

  • untaint

  • validate

  • version

  • workspace

Video

More of a visual learner? Check out this feature on YouTube.


Module Registry Workspace

Scalr workspaces can also be created by selecting a module from the private module registry. This is a good option for users who prefer a UI, and for Terraform resources that need to be deployed consistently using versioned modules made available by the account administrators. A good example is the deployment of a Kubernetes cluster that might have strict standards.

There are three basic steps to get started with a module registry driven workspace:

  1. Ensure a deployable module is available

  2. Create the workspace

  3. Set the variables (if necessary)

Create Workspace

For a module-driven workspace, the user is required to select the module and module version; all other fields are optional. If variables are required, the user will be prompted to fill them in once the workspace is created.

_images/new_module_ws.png

Once the workspace is deployed, the module will remain the source. Users can update to a new version of the module if required by updating the version in the workspace settings.

Set Terraform Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, these must be given values in the Scalr workspace via the UI. Scalr will automatically create the variables.

_images/ws_vars_values.png

If the configuration contains any *.auto.tfvars files, these provide default variable values that Terraform automatically uses.

If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values take precedence. For map variables, the values in *.auto.tfvars are merged with the values of the same-named variable in the workspace.

Set Shell Variables

Note

Variables can be created as code through the Scalr Terraform Provider.

If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials and other provider parameters, these must be set in Scalr.

Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.

It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

_images/ms_var_4.png

See Variables for full details.

Runs

With module-sourced workspaces, runs are executed only by manually triggering them through the UI, API, or Scalr provider. Updates to the module will not trigger a new run for existing workspaces linked to that module version.

_images/module_runs.png

Workspace Settings

Custom Hooks

Custom hooks are used to customize the core Terraform workflow. It is a common requirement to run a command, script, or API call before or after the Terraform plan and/or apply events. For example, many customers run lint tests before the plan to ensure the Terraform code is formatted correctly, or install software before the apply if it is needed for the Terraform code to execute correctly.

If a command is being used in the hook, nothing else is needed except typing the command into the text box; shell variables can be referenced if required.

If a script is being used, ensure that the script is uploaded as part of the configuration files with the Terraform code. Optionally, the script can also be downloaded and executed (e.g. wget -qO- https://script.sh | sh).
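For example, a simple pre-plan lint hook could be as small as the following (a sketch, assuming the terraform binary is on the PATH of the run container):

# Fail the run if any Terraform file is not canonically formatted.
terraform fmt -check -diff -recursive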

Custom hooks are added as part of the workspace creation or after a workspace is created by going into the workspace settings:

_images/pre_post_hooks.png

The output of the hooks can be seen directly in the console output for the plan and apply:

_images/pre_post_results.png

Built-In Variables

The following shell variables are built into the runtime environment for use as needed:

  • SCALR_RUN_ID - The ID of the current run.

  • SCALR_HOSTNAME - The Scalr hostname.

  • SCALR_TERRAFORM_OPERATION - The current Terraform operation (plan or apply).

  • SCALR_TERRAFORM_EXIT_CODE - The exit code (0 or 1) of the previous operation (plan or apply); only available in after hooks.

See the full documentation for variables here: Variables

Hook Examples

Pulling Plan Details

In cases where the Terraform plan needs to be exported and used externally, it can be pulled by running the command below before the plan or apply, or after the apply:

terraform show -json /opt/data/terraform.tfplan.bin
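For example, an after-plan hook could convert the binary plan to JSON and ship it to an external system (the endpoint URL here is hypothetical):

terraform show -json /opt/data/terraform.tfplan.bin > plan.json
curl -s -X POST -H "Content-Type: application/json" \
  --data @plan.json "https://example.com/plan-archive"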

Triggering a Downstream Workspace

The following is a basic script, meant for example purposes only, that will execute a run in another workspace if the current run's apply is successful:

Note

SCALR_TOKEN does not have enough permissions to execute a run; you should replace the default token with your own token.

#!/bin/sh

# See: https://docs.scalr.com/en/latest/api/preview/runs.html#create-a-run
trigger_run() {
  curl -X POST --silent "https://${SCALR_HOSTNAME}/api/iacp/v3/runs" \
    -H "Content-Type: application/vnd.api+json" \
    -H "Authorization: Bearer ${SCALR_TOKEN}" \
    -H "Prefer: profile=preview" \
    --data-binary @- << EOF
{
  "data": {
    "type": "runs",
    "attributes": {
      "message": "Triggered by ${SCALR_RUN_ID}"
    },
    "relationships": {
      "workspace": {
        "data": {
          "id": "${DOWNSTREAM_WORKSPACE_ID}",
          "type": "workspaces"
        }
      }
    }
  }
}
EOF
}

if [ "${SCALR_TERRAFORM_OPERATION}" = "apply" ] && [ "${DOWNSTREAM_WORKSPACE_ID}" != "" ]; then
  if [ "${SCALR_TERRAFORM_EXIT_CODE}" = "0" ]; then
      echo "Terraform Apply was successful. Triggering downstream Run in ${DOWNSTREAM_WORKSPACE_ID}..."
      trigger_run
  else
      echo "Terraform Apply failed."
  fi
fi
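To use this script, upload it with the configuration files (or download it in the hook) and set DOWNSTREAM_WORKSPACE_ID, along with a suitably scoped token, as shell variables on the workspace.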

API Call to Slack

Here is an example of sending the run ID to Slack after an apply. The token variable is set in the shell variables, and SCALR_RUN_ID is built into the runtime environment.

curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"The run ${SCALR_RUN_ID} has completed\"}" https://hooks.slack.com/services/${token}

Container Image Info

Regardless of the workspace type, Terraform runs occur within a Docker container running on the Scalr infrastructure. The container is based on standard Alpine Linux and already has the tools below installed. If you need to execute runs outside of the Scalr infrastructure, you can do so through Self Hosted Agent Pools.

The following tools are pre-installed on the image:

  • AWS CLI - Used to interact with AWS.

  • Azure CLI - Used to interact with Azure. See setup instructions below.

  • Google CLI - Used to interact with Google Cloud. See setup instructions below.

AWS CLI:

Nothing is needed, as the AWS CLI can read the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY shell variables that Scalr passes.

Azure CLI:

az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
az account set --subscription=$ARM_SUBSCRIPTION_ID

Google CLI:

echo $GOOGLE_CREDENTIALS > key.json
gcloud auth activate-service-account --key-file=key.json > /dev/null 2>&1
rm -f key.json
gcloud config set project $GOOGLE_PROJECT > /dev/null 2>&1