
Workspace Management


A Scalr workspace is the object in which you manage your Terraform deployments, whether they originate from the template registry, API, CLI, or a VCS provider. Within a workspace you can do the following:

  • View the history and details of runs

  • View resources

  • Manage variables

  • Queue new runs

  • Destroy infrastructure

  • Edit existing workspace configuration

Each workspace stores the Terraform state and acts as the equivalent of the working directory you would use if you were running Terraform locally.


All Terraform runs occur in a container on the Scalr server. The container runs Debian, with Terraform v0.13.0-rc1 as the default Terraform version.

Creating a Workspace

Workspaces only need to be created manually if you wish to use DevOps automation.

  • A workspace will automatically be created for you if you make a request via the template registry.

  • Workspaces named (name = "ws-name") in the backend configuration for CLI runs are created automatically when terraform init is run.

  • When the workspace prefix is used (prefix = "ws-prefix-") in the backend configuration for CLI runs a workspace is created using terraform workspace new <ws_name>.
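The two CLI-driven cases above correspond to a remote backend configuration along the lines of the sketch below. The hostname, organization ID, and workspace names are placeholders, and name and prefix are mutually exclusive:

```hcl
terraform {
  backend "remote" {
    hostname     = "<scalr-hostname>"
    organization = "<org_id>"

    workspaces {
      # Fixed name: the workspace "ws-name" is created on terraform init.
      name = "ws-name"

      # Or a prefix: then `terraform workspace new staging` creates
      # a Scalr workspace named "ws-prefix-staging".
      # prefix = "ws-prefix-"
    }
  }
}
```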

Workspaces for DevOps automation are created through the workspace page by clicking New Workspace:


Then link it to your VCS provider:



Terraform working directory provides an alternative directory where the terraform command will run. It defaults to the root directory, or to the subdirectory if one is specified.


If you are creating a workspace and integrating it with your existing CI/CD pipeline, you may prefer to store the variables directly in Scalr rather than in the Terraform template, to make the template more dynamic.

When you create a workspace that is linked to a VCS repository, IaCP will scan the template for any input variables that don't have default values, as in this example:

variable "cluster_name" {
  type = string
}

variable "region" {
  description = "The AWS Region to deploy in"
  type        = string
}

variable "instance_type" {
  description = "Instance type for the cluster nodes"
  default     = "t3.medium"
  type        = string
}

variable "key_name" {
  description = "The name of the public SSH key to be deployed to the servers. This must exist in AWS already"
  type        = string
}

variable "number_of_azs" {
  description = "Number of availability_zones to deploy to, and therefore minimum number of desired worker nodes"
  default     = 2
  type        = string
}

variable "minimum_nodes" {
  description = "Minimum number of worker nodes, must be <= number_of_azs"
  default     = 1
  type        = string
}

IaCP will provide a warning on the workspace dashboard.


Go to the workspace tab to set values. The variables with no value will have been added automatically.


Add the values as required.


A blank value means an empty string ("").

Find out more about variables here.


Run History

Runs occur when you queue a new plan or execute changes on existing workspaces. Each time a run is executed it creates a history entry, which can be found by clicking on the name of the workspace and then the “runs” tab.


To find out more information about each run, click on the “run id”. Each tab within the run can be clicked on to see more detail about the Terraform plan, cost estimate, policy, and apply:

Plan - Displays the Terraform plan that will be executed:


Cost Estimate - Displays the cost that will be incurred or the proposed cost difference if it is a change to existing state:


Policy Check - Displays the results of the OPA policy checks:


Apply - Displays the results and output of the apply:


Clicking on the “commit” ID will redirect you to the VCS provider to show the last change that occurred in the Terraform template.

Canceling a Run

A run can be canceled using the Cancel button on the run page.


The cancel action can be used to interrupt a run that is currently planning or applying. Performing a cancel is roughly equivalent to hitting ctrl+c during a Terraform plan or apply on the CLI. The running Terraform process is sent an INT signal, which instructs Terraform to end its work and wrap up in the safest way possible.

This endpoint queues the request to perform a cancel; the cancel might not happen immediately. After canceling, the run is completed and later runs can proceed.


Automated Deployment

Scalr workspaces can be bound to a VCS-based repo branch [+ directory] so that the template being executed by Scalr is always from the latest commit. This binding is defined when a workspace is created and cannot be changed.

Under Workspaces, click New Workspace:


The effect of this binding is that every commit or merge to the linked repo and branch will trigger a full run. This run may deploy or re-deploy resources depending on the changes in the commit and the current state of the resources.

This automated run is achieved as follows.

When the workspace is created, Scalr creates a webhook in the VCS that fires back to Scalr whenever there is a commit in that repo. There is only one webhook per repo; Scalr processes each webhook request and determines whether the commit applies to a branch that is linked to a workspace. If so, a full run, i.e. terraform apply, is triggered. If the plan, cost estimation, and policy checks are successful, the run will need to be approved in the UI, unless the “auto apply” toggle is set.

VCS Dry Runs

Linking a VCS branch to a workspace can also cause automated testing of a pull request to take place. These automated tests are known as “dry runs” and in the context of Scalr they include the plan, cost estimation and policy phases of a run. They never include the apply phase. The purpose of these tests is to provide some validation that the pull request can be merged into the target branch. These tests are triggered by the webhook and occur under the following circumstances.

  • When a pull request is made to a branch that is linked to a workspace

  • When a commit is made to a branch that is the source of an open pull request to a branch that is linked to a workspace.


For example:

  • I have a branch called “Dev” linked to a workspace

  • I create a new branch “New_feature” and commit a change

  • I make a pull request from New_feature to Dev. A speculative run will be triggered.

  • I make another commit to “New_feature” while the pull request is still open. Another speculative run will be triggered.

The dry runs can be observed in the VCS:



The “details” link is a link to the run details inside Scalr. The run will be associated with the workspace that is linked to the target of the pull request but is only accessible from the VCS link. Dry runs do not appear on the run list for a workspace.

Sharing Workspace State


It is common practice to reference outputs from other workspaces so that a Terraform template can make use of resources that have been deployed elsewhere. This is known as “remote state” and accessing remote state is done using the terraform_remote_state data source as shown in this example.

data "terraform_remote_state" "state-1" {
  backend = "remote"

  config = {
    hostname     = "<host>"
    organization = "<org_id>"
    workspaces = {
      name = "<workspace name>"
    }
  }
}
When you include a terraform_remote_state block in your template you can then access any outputs in that remote state.


You can only access the outputs of a remote state. You cannot access the resources in the remote state directly.

In Scalr you can access the remote state of another workspace in exactly the same way, by including terraform_remote_state data sources in your template that reference any other workspace in any other Scalr environment that you have access to.


This example shows how to access the remote state for workspace ‘webapp-vpc’ in the ‘marketing’ environment in the hosted Scalr system. For self-hosted systems, use the URL of your Scalr installation.

  1. Switch to ‘marketing’ environment and get the organization id from the environment switcher on the UI

  2. Add a terraform_remote_state data source to your template as follows:

data "terraform_remote_state" "vpc-1" {
  backend = "remote"

  config = {
    hostname     = ""
    organization = "org-xxxxxxxxxxx"
    workspaces = {
      name = "webapp-vpc"
    }
  }
}

You can now reference the outputs from the remote state with constructs like
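For instance, assuming the ‘webapp-vpc’ workspace exposes an output named vpc_id (a hypothetical output name), the remote value can be consumed like this:

```hcl
# Consume the remote workspace's "vpc_id" output (hypothetical name).
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = data.terraform_remote_state.vpc-1.outputs.vpc_id
}
```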


See remote_state for more details on using remote state.