Workspaces¶
Note
Workspaces can be created as code through the Scalr Terraform Provider.
Introduction¶
A workspace is where all objects related to Terraform managed resources are stored and managed. This is where you can find state, update the Terraform version, manage variables, see run history and much more. Workspaces can be as simple as a single resource or can be a complex monolithic configuration, it all depends on your use case and the directories that the workspace is linked to. A workspace can be integrated with your existing workflow whether it is a GitOps approach through a VCS provider, using the native Terraform CLI, or deploying modules directly in the UI. All workspace types follow the same pipeline:
Plan, which allows users to view the planned creation, update, or destruction of resources through the standard console output or the detailed plan view. The detailed plan view will alert users of destructive changes and makes it easier to search the plan when many resources are affected. It is also the section to check who approved an apply and the comments associated with the approval.

Cost Estimate, which will show you the estimated cost for the resources that are being created. This information can also be used when writing a policy to check cost.

Policy Check, which is used to check the Terraform plan JSON output against Open Policy Agent policies. This step can be used to enforce your company standards.

Apply, which will actually create, update, or destroy the resources based on the Terraform configuration file.

VCS Integrated Workspace¶
Scalr workspaces can be linked to Terraform configurations held in VCS repositories in order to automate pull request (PR) checks and deployments.
By linking a workspace to a specific branch of the repository, a webhook is created in the VCS provider that will POST to Scalr for every PR or commit/merge that affects that branch. Scalr will then automatically perform a “dry run” (terraform plan) for every PR and a run (terraform apply) for every commit/merge.
There are two basic steps to get started with a VCS integrated workspace:
Create the workspace
Set the variables (if necessary)
Create Workspace¶
To enable a VCS based workspace, create a workspace linked to a specific repo and branch:

Workspaces can be updated with the following settings:
Branch
The branch of the repository that the workspace should point to.
Terraform Directory
This is where Terraform actually runs.
This directory must be a subdirectory of the top level of the repository, or of the repository subdirectory if one is specified.
Triggers
Use trigger prefixes if there are folders, other than the working directory, whose changes should also trigger a run.
Trigger prefixes require the working directory to be set, and the working directory is always a trigger prefix.
At this point, a run can be triggered, but if your configuration requires variables to be set, there will be a prompt for that.
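As noted above, these settings can also be defined as code through the Scalr Terraform Provider. Below is a minimal sketch of a VCS-linked workspace; the IDs are placeholders, the repository and directory names are hypothetical, and the exact attribute names should be confirmed against the provider documentation for your version:

resource "scalr_workspace" "network" {
  name              = "network-prod"           # hypothetical workspace name
  environment_id    = "env-xxxxxxxxxxx"        # placeholder environment ID
  vcs_provider_id   = "vcs-xxxxxxxxxxx"        # placeholder VCS provider ID
  working_directory = "network/prod"           # where Terraform actually runs

  vcs_repo {
    identifier       = "my-org/infrastructure"              # hypothetical repository
    branch           = "main"
    trigger_prefixes = ["network/prod", "modules/network"]  # extra folders that trigger runs
  }
}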
Set Terraform Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, then these must be assigned values in the Scalr workspace via the UI. Scalr will automatically create the variables.

If the local workspace contains any *.auto.tfvars files, these will provide default variable values that Terraform will automatically use.
If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values will be used. For map variables, the values in *.auto.tfvars are merged with the values in the same-named variable in the workspace.
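To illustrate the precedence rules above, here is a small sketch with hypothetical variable names, showing a declaration alongside an *.auto.tfvars file:

# variables.tf — declarations that Scalr detects in the configuration
variable "instance_type" {
  type = string            # no default; the value comes from the workspace or a *.auto.tfvars file
}

variable "tags" {
  type    = map(string)
  default = {}
}

# dev.auto.tfvars — loaded automatically by Terraform
# If the workspace also defines instance_type, the workspace value wins;
# for the tags map, the workspace value is merged with the entries below.
instance_type = "t3.micro"
tags = {
  team = "platform"
}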
Set Shell Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials, to set values for Terraform input variables with TF_VAR_{variable_name}={value}, or to pass Terraform CLI arguments (-parallelism, -var-file, etc.), these must be set as shell variables in Scalr.
Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.
It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

See Variables for full details.
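Per the note above, shell variables can also be managed as code through the Scalr Terraform Provider. The sketch below assumes a scalr_variable resource with a "shell" category and a sensitive flag; the arguments and IDs shown are assumptions to be checked against the provider documentation:

# Workspace-scoped shell variable that feeds a Terraform input variable.
resource "scalr_variable" "region" {
  key          = "TF_VAR_region"            # consumed by the configuration as var.region
  value        = "us-east-1"
  category     = "shell"                    # shell (environment) variable, not a Terraform variable
  workspace_id = scalr_workspace.network.id # workspace from the VCS sketch above
}

# Environment-scoped credential inherited by all workspaces in the environment.
resource "scalr_variable" "aws_secret_access_key" {
  key            = "AWS_SECRET_ACCESS_KEY"
  value          = var.aws_secret_access_key
  category       = "shell"
  sensitive      = true                     # assumed attribute; hides the value in the UI
  environment_id = "env-xxxxxxxxxxx"        # placeholder environment ID
}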
CLI Driven Workspace¶
If you have an existing workflow that utilizes the native Terraform CLI, or you just want the flexibility to use it, you can continue to use the Terraform CLI as it is fully supported in Scalr. Scalr will execute the runs in a container in the Scalr backend, but the logs and output will still be sent back to your console.
To utilize Scalr as a remote backend there are a few simple steps:
Obtain an API token by running terraform login <account-name>.scalr.io
Create a workspace
Execute terraform init
Create API Token¶
Run terraform login <account-name>.scalr.io, ensuring you update <account-name> to your actual account name:
terraform login <account-name>.scalr.io
Terraform will request an API token for <account-name>.scalr.io using your browser.
If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:
/Users/name/.terraform.d/credentials.tfrc.json
Do you want to proceed?
Only 'yes' will be accepted to confirm.
Enter a value: yes
This will redirect you to the Scalr UI to create the API token. Copy the token and paste it in the command prompt:
---------------------------------------------------------------------------------
Terraform must now open a web browser to the tokens page for docs.scalr.io.
If a browser does not open this automatically, open the following URL to proceed:
https://<account-name>.scalr.io/app/settings/tokens?source=terraform-login
---------------------------------------------------------------------------------
Generate a token using your browser, and copy-paste it into this prompt.
Terraform will store the token in plain text in the following file
for use by subsequent commands:
/Users/name/.terraform.d/credentials.tfrc.json
Token for <account-name>.scalr.io:
Enter a value:
Retrieved token for user [email protected]
---------------------------------------------------------------------------------
Success! Terraform has obtained and saved an API token.
The new API token will be used for any future Terraform command that must make
authenticated requests to <account-name>.scalr.io.
Create Workspace¶
Click on “New Workspace”, give the workspace a name, select “CLI” as the workspace type, and save.
On the workspace dashboard, click on “base backend configuration” and copy the boilerplate code into your Terraform configuration. The snippet is pre-filled with the hostname, organization ID, and workspace name:

terraform {
  backend "remote" {
    hostname     = "<account>.scalr.io"
    organization = "<org-id>"

    workspaces {
      name = "<workspace-name>"
    }
  }
}
Note: Workspace creation can be done from the CLI instead of the UI by using the same code snippet as above. If the workspace name is not already in use, Scalr will automatically create the new workspace after running terraform init.
Lastly, from the command line, run terraform init. From this point forward, you can use the Terraform CLI as you normally would and commands will be executed remotely in Scalr.
terraform init
Initializing the backend...
Initializing provider plugins...
- Using previously-installed registry.scalr.io/scalr/scalr v1.0.0-rc26
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
If there is an existing state file on the local system, or state that was previously stored in another remote backend, then the terraform init command will automatically migrate the state to Scalr. See Migrating to Scalr for more details.
Warning
Version Mismatch. If the workspace is pre-created manually in Scalr and the Terraform version of the workspace does not match the version of the CLI, then the following error will be displayed:
Error reading local state: state snapshot was created by Terraform vx.x.x, which is newer than current vx.x.x.
If you see this error, please ensure the Terraform version of the CLI matches the Terraform version of the workspace.
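One way to avoid the mismatch is to pin the expected version in the configuration itself, so an incompatible CLI fails fast with a clear message (the version shown is a placeholder; use your workspace's Terraform version):

terraform {
  # Require the local CLI to match the Terraform version configured on the Scalr workspace.
  required_version = "= 1.5.7"
}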
Set Terraform Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, then these must be assigned values in the Scalr workspace via the UI. Scalr will automatically create the variables.

If the local workspace contains any *.auto.tfvars files, these will provide default variable values that Terraform will automatically use.
If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values will be used. For map variables, the values in *.auto.tfvars are merged with the values in the same-named variable in the workspace.
Set Shell Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials, to set values for Terraform input variables with TF_VAR_{variable_name}={value}, or to pass Terraform CLI arguments (-parallelism, -var-file, etc.), these must be set as shell variables in Scalr.
Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.
It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

See Variables for full details.
Supported CLI Commands¶
The Scalr remote backend supports the standard Terraform CLI commands. Note that Scalr only supports the CLI for Terraform versions >= 0.12.0.
Module Registry Workspace¶
Scalr workspaces can also be created by selecting a module from the private module registry. This is a good option for users who prefer a UI, and for Terraform resources that need to be consistently deployed using versioned modules made available by the account administrators. A good example of this is the deployment of a Kubernetes cluster that might have strict standards.
There are two basic steps to get started with a module registry driven workspace:
Create the workspace
Set the variables (if necessary)
Create Workspace¶
For a module driven workspace, the user is required to select the module and module version; all other fields are optional. If variables are required, the user will be prompted to fill in the variables once the workspace is created.

Once the workspace is deployed, the module will remain the source. Users can update to a new version of the module if required by updating the version in the workspace settings.
Set Terraform Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
Terraform variables can be set in the Terraform code or within the Scalr UI. If the configuration contains variables that do not have assigned values, then these must be assigned values in the Scalr workspace via the UI. Scalr will automatically create the variables.

If the local workspace contains any *.auto.tfvars files, these will provide default variable values that Terraform will automatically use.
If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values will be used. For map variables, the values in *.auto.tfvars are merged with the values in the same-named variable in the workspace.
Set Shell Variables¶
Note
Variables can be created as code through the Scalr Terraform Provider.
If the Terraform configuration utilizes shell variables (export var=value), e.g. for credentials, to set values for Terraform input variables with TF_VAR_{variable_name}={value}, or to pass Terraform CLI arguments (-parallelism, -var-file, etc.), these must be set as shell variables in Scalr.
Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.
It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.

See Variables for full details.
Workspace Settings¶
Custom Hooks¶
Note
Custom hooks are only available on the pro tier. See Scalr pricing here.
Custom hooks are used to customize the core Terraform workflow. It is a common requirement to have to run a command, script, or API call before or after the Terraform plan and/or apply events. For example, many customers run lint tests before the plan to ensure the Terraform code is formatted correctly or install software before the apply if it is needed for the Terraform code to execute correctly.
If a command is being used in the hook, nothing else is needed except typing the command into the text box; shell variables can be referenced if required.
If a script is being used, ensure that the script is uploaded as part of the configuration files with the Terraform code. Optionally, the script can also be downloaded and executed inline (e.g. wget -O - https://script.sh | sh).
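Hooks can also be declared as code when the workspace itself is managed through the Scalr Terraform Provider. A minimal sketch, assuming the workspace resource accepts a hooks block with per-stage commands (argument names may differ; check the provider documentation):

resource "scalr_workspace" "with_hooks" {
  name           = "network-prod"          # hypothetical workspace
  environment_id = "env-xxxxxxxxxxx"       # placeholder environment ID

  hooks {
    pre_plan   = "terraform fmt -check"    # lint check before the plan
    post_apply = "./notify.sh"             # script shipped with the configuration files
  }
}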
Custom hooks are added as part of the workspace creation or after a workspace is created by going into the workspace settings:

The output of the hooks can be seen directly in the console output for the plan and apply:

Built-In Variables¶
The following shell variables are built into the runtime environment for use as needed:
SCALR_RUN_ID - The ID of the current run.
SCALR_HOSTNAME - The Scalr hostname.
SCALR_TERRAFORM_OPERATION - The current Terraform operation (plan or apply).
SCALR_TERRAFORM_EXIT_CODE - The exit code (0 or 1) of the previous operation (plan or apply); only available in after hooks.
See the full documentation for variables here: Variables
Hook Examples¶
Pulling Plan Details
In the case where the Terraform plan might need to be exported and used externally, it can be pulled by using the command below before the plan or apply, or after apply:
terraform show -json /opt/data/terraform.tfplan.bin
Triggering a Downstream Workspace
The following is a basic script meant for example purposes only; it will execute a run in another workspace if the current run's apply is successful:
Note
The built-in SCALR_TOKEN does not have enough permissions to execute a run; you should replace the default token with your own token.
#!/bin/sh
# See: https://docs.scalr.com/en/latest/api/preview/runs.html#create-a-run

trigger_run() {
  curl -X POST --silent "https://${SCALR_HOSTNAME}/api/iacp/v3/runs" \
    -H "Content-Type: application/vnd.api+json" \
    -H "Authorization: Bearer ${SCALR_TOKEN}" \
    -H "Prefer: profile=preview" \
    --data-binary @- << EOF
{
  "data": {
    "type": "runs",
    "attributes": {
      "message": "Triggered by ${SCALR_RUN_ID}"
    },
    "relationships": {
      "workspace": {
        "data": {
          "id": "${DOWNSTREAM_WORKSPACE_ID}",
          "type": "workspaces"
        }
      }
    }
  }
}
EOF
}

if [ "${SCALR_TERRAFORM_OPERATION}" = "apply" ] && [ "${DOWNSTREAM_WORKSPACE_ID}" != "" ]; then
  if [ "${SCALR_TERRAFORM_EXIT_CODE}" = "0" ]; then
    echo "Terraform Apply was successful. Triggering downstream Run in ${DOWNSTREAM_WORKSPACE_ID}..."
    trigger_run
  else
    echo "Terraform Apply failed."
  fi
fi
API Call to Slack
Here is an example of sending the run ID to Slack after an apply. The token variable is set in the shell variables and the SCALR_RUN_ID is built into the runtime environment.
curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"The run ${SCALR_RUN_ID} has completed\"}" https://hooks.slack.com/services/${token}
Container Image Info¶
Regardless of the workspace type, Terraform runs occur within a Docker container that is running on the Scalr infrastructure. The container is based on standard Alpine Linux and has the tools below installed already. If you need to execute runs outside of the Scalr infrastructure, you can do this through Self Hosted Agent Pools.
The following tools are already installed on the image:
Name | Description
---|---
AWS CLI | Used to interact with AWS.
Azure CLI | Used to interact with Azure. See setup instructions below.
Google CLI | Used to interact with Google Cloud. See setup instructions below.
pip3 | Pip is the package installer for Python.
AWS CLI:
Nothing is needed as the AWS CLI can read the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY shell variables that Scalr passes.
Azure CLI:
az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
az account set --subscription=$ARM_SUBSCRIPTION_ID
Google CLI:
echo $GOOGLE_CREDENTIALS > key.json
gcloud auth activate-service-account --key-file=key.json > /dev/null 2>&1
rm -f key.json
gcloud config set project $GOOGLE_PROJECT > /dev/null 2>&1
Run Triggers¶
Run triggers are a way to chain workspaces together. The use case for this is that you might have one or more upstream workspaces that need to automatically kick off a downstream workspace based on a successful run in the upstream workspace. To set a trigger, go to the downstream workspace and set the upstream workspace(s). Now, whenever the upstream workspace has a successful run, the downstream workspace will automatically start a run.

If more than one (up to 20) workspace is added as the upstream, a successful run in any upstream workspace will trigger the downstream workspace run. For example, if two upstream workspaces finish at the exact same time, then the downstream workspace will have two runs queued.
The permissions required for a user to set the triggers are:
Downstream workspace requires workspaces:update
Upstream workspace requires workspaces:read
If the downstream workspace has auto-apply enabled, then the apply will automatically occur once the trigger happens. If not, it will wait for an approval.
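Run triggers can also be managed as code. The sketch below assumes the Scalr provider exposes a run-trigger resource that links one upstream workspace to one downstream workspace; the resource and argument names are assumptions and should be verified against the provider documentation:

resource "scalr_run_trigger" "network_to_app" {
  downstream_id = scalr_workspace.app.id      # workspace that runs after the upstream succeeds (hypothetical)
  upstream_id   = scalr_workspace.network.id  # workspace whose successful runs fire the trigger (hypothetical)
}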
Run Scheduler¶
The run scheduler is a way to automatically trigger recurring runs based on a defined schedule. The schedule can be set to execute a run every day, on specific day(s), or using cron. If you want to schedule a one-time run, please see Schedule a Run in the Runs section below. The schedule can be created for a plan/apply that updates or creates resources, or for a destructive run, equivalent to terraform destroy. The approval of runs will depend on your workspace settings: if auto-approval is set, the run will automatically apply; if not, it will wait for manual confirmation before applying. All run schedules are assigned in the UTC timezone, so please convert from your time zone to ensure the runs are scheduled properly.

The most common use case for the run scheduler is to create and destroy development workspaces on a specific schedule to avoid unwanted costs.
Run Timeout¶
By default, the run timeout for a workspace is 120 minutes. You can update this timeout to anything between 10 and 720 minutes via the Scalr API or provider. See run-operation-timeout in the provider documentation. This setting will be added to the UI soon.
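For example, via the provider (a sketch: the documented run-operation-timeout attribute is assumed here to be set on the workspace resource, written with underscores in HCL):

resource "scalr_workspace" "long_running" {
  name                  = "long-running"      # hypothetical workspace
  environment_id        = "env-xxxxxxxxxxx"   # placeholder environment ID
  run_operation_timeout = 240                 # minutes; must be between 10 and 720
}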
.terraformignore¶
To optimize the speed at which Scalr clones the repository where your Terraform config resides, or to simply ignore files, Scalr will accept a .terraformignore file. Any files listed in the .terraformignore will be excluded during the clone operation.
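A small example of a .terraformignore, assuming the usual .gitignore-style pattern syntax:

# Paths excluded when Scalr clones the repository
.git/
docs/
*.md
test/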
Runs¶
A run is the result of executing the Terraform deployment in a workspace. There are two types of runs: runs that include an apply, and runs that exclude an apply, referred to as dry runs. All runs for a given workspace will be displayed in the runs tab. For VCS driven runs, a commit hash is provided, which can be clicked to see what changes were made to the code prior to the deployment, for the entire history of the workspace. CLI runs will be noted as CLI driven, with the username of the person who created the run. This serves as a good way to audit the changes that have gone on within a workspace:

Scalr will automatically kick off dry runs when a PR is opened against a branch that is linked to a Scalr workspace or when a user runs terraform plan through the CLI. If it is a VCS driven run, Scalr will report the checks back into your VCS provider.

Note
VCS driven dry runs are optional and can be enabled or disabled in the workspace settings.
Runs Queue¶
Note
Run concurrency can be increased on the paid tiers. See Scalr pricing here.
The runs queue page serves as a central dashboard for all runs across all workspaces within an environment. From this page, runs can be canceled in bulk or approved/discarded as needed. A use case for the bulk cancellation is to reprioritize runs (e.g. you have an emergency change going in that cannot wait on prior runs to finish).

The permissions to view the runs page can be controlled through environments:read-runs-queue in the IAM roles.
Target Resources¶
The target option gives users the ability to focus the Terraform run on a specific resource or set of resources:

After target is checked, select one or more resources that the run will impact once executed.
Schedule a Run¶
This is different from the Run Scheduler feature within the workspace settings, as this schedule is specific to a single run and is set by clicking queue run:

The schedule is set based on your browser timezone. Here are some considerations when using it:
The schedule does not change the run queue order for the workspace, runs are still queued in the order they were created.
The time set for plan/apply should be thought of as the earliest possible time the run can execute. The run could execute later depending on existing runs in the workspace that are in queue or running.
To avoid misuse of this feature or possible drift, only one scheduled run can be set per workspace.