What Is Terraform?
Terraform is an infrastructure-as-code (IaC) tool used to automate infrastructure tasks; provisioning cloud resources is one of its main use cases. It is an open-source provisioning tool written in Go and created by HashiCorp.
Terraform lets you describe your complete infrastructure as code. Even if your servers come from different providers such as AWS or Azure, Terraform helps you build and manage these resources in parallel across multiple providers.
The benefits of infrastructure as code include speed, simplicity, error reduction, team collaboration, disaster recovery, and enhanced security.

We can deploy resources in different ways; here we target a VS Code Dev Container backed by the Docker daemon.
Steps to follow to set up a VS Code Dev Container with Docker Engine:
Step 1: Install Docker Desktop [from here](https://www.docker.com).
Step 2: Install VS Code [from here](https://code.visualstudio.com).
Step 3: Download the wsl_update_x64 package and install it on the machine where Docker and VS Code are installed.
Step 4: To install WSL, follow https://learn.microsoft.com/en-us/windows/wsl/install-manual.
Step 5: In VS Code, install the *Dev Containers* and *Remote Development* extensions from Microsoft.

Step 6: Open the infrastructure source-code folder cloned from the Plans-Terraform repository, or write the infrastructure code along with devcontainer.json in VS Code.

Create a devcontainer.json, which describes how VS Code should start the container and what to do after it connects.
To create devcontainer.json, first create a folder called .devcontainer; in this folder, create devcontainer.json and environment-specific files such as devcontainer.dev.env.
In devcontainer.json, define the following configuration details:

{
  "image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-18",
  "name": "My-Project",
  "features": {
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {},
    "ghcr.io/devcontainers/features/git:1": {},
    "ghcr.io/devcontainers/features/sshd:1": {}
  },
  "runArgs": ["--init", "--env-file", ".devcontainer/devcontainer.dev.env"],
  "remoteEnv": {
    // Sets environment variables required for the Terraform remote backend
    "TF_B_SUBSCRIPTION_ID": "${containerEnv:TF_B_SUBSCRIPTION_ID}",
    "TF_B_RESOURCE_GROUP": "${containerEnv:TF_B_RESOURCE_GROUP}",
    "TF_B_LOCATION": "${containerEnv:TF_B_LOCATION}",
    "TF_B_STORAGE_ACCOUNT": "${containerEnv:TF_B_STORAGE_ACCOUNT}",
    "TF_B_CONFIG_FILE_PATH": "${containerEnv:TF_B_CONFIG_FILE_PATH}",
    "TF_PLAN_VARIABLES_FILE_PATH": "${containerEnv:TF_PLAN_VARIABLES_FILE_PATH}",
    "TF_PR_SUBSCRIPTION_ID": "${containerEnv:TF_PR_SUBSCRIPTION_ID}"
  },
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  },
  "postCreateCommand": "sudo chmod +x ./utils/*; sudo cp ./utils/* /usr/local/bin",
  "forwardPorts": [3000]
}

3.1) We can use an image as the starting point for devcontainer.json. An image is like a mini disk drive with an operating system and various tools pre-installed. You can pull images from a container registry, which is a collection of repositories that store images.

3.2) In the "features" block, we define the tools required to perform the Terraform tasks.

3.3) In "runArgs", we pass the arguments used to start the container, including the environment file.

3.4) In "remoteEnv", we set the variables needed while the container is running.

3.5) In "customizations", we install VS Code extensions.

3.6) "postCreateCommand" runs once the container is created, and "forwardPorts" ([3000]) forwards the listed ports.

4. Create a dev.env file. In it, define the variables for the subscription where the backend storage account is located, the resource group where the backend storage account should be created, the region where the resource group should be created, the name of the storage account, and the environment-specific configuration.

# the id of the subscription where the backend storage account is located
# Data Core Management
TF_B_SUBSCRIPTION_ID=**********************

# the name of the resource group where the storage account for the backend should be created
TF_B_RESOURCE_GROUP=MY-RG

# the region where the resource group for the backend should be created
TF_B_LOCATION=eastus

# the name of the storage account for the backend
TF_B_STORAGE_ACCOUNT=mytfsa2

# environment specific configuration
TF_B_CONFIG_FILE_PATH=./backend/my.dev.tfconfig
TF_PLAN_VARIABLES_FILE_PATH=./env/my.dev.tfvars
TF_PR_SUBSCRIPTION_ID=**********************

5. Create another devcontainer file to target the test environment, and do the same for the other environments.
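As an illustration (hypothetical test values, not taken from the repository), a .devcontainer/devcontainer.test.env mirroring the dev file might look like:

```
# .devcontainer/devcontainer.test.env (hypothetical test values)
TF_B_SUBSCRIPTION_ID=<test-subscription-id>
TF_B_RESOURCE_GROUP=MY-RG-TEST
TF_B_LOCATION=eastus
TF_B_STORAGE_ACCOUNT=mytfsatest
TF_B_CONFIG_FILE_PATH=./backend/my.test.tfconfig
TF_PLAN_VARIABLES_FILE_PATH=./env/my.test.tfvars
TF_PR_SUBSCRIPTION_ID=<test-subscription-id>
```

The matching devcontainer file then points its "runArgs" --env-file at this file instead of devcontainer.dev.env.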

6. Create a Backend folder and create .tfconfig files (backend/my.dev.tfconfig) per environment, specifying the backend details in each file.

subscription_id      = "*************"
resource_group_name  = "MY-RG"
storage_account_name = "mytfsa2"
container_name       = "terraform"
key                  = "dev.terraform.tfstate"

7. Create an env folder and create .tfvars files (env/my.dev.tfvars) for the respective environments, specifying the environment details.

location        = "eastus"
subscription_id = "***************"

tags = {
  "my:environment" = "dev"
  "my:department"  = "**"
  "my:owner"       = "*****@mail.com"
  "my:deployment"  = "terraform"
}
environment = "dev"

In this file we can also add connection strings, URLs, and other values needed by a specific environment when they are not common to all environments.
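For example (illustrative variable name, not from the repository), an environment-specific URL could be added to the tfvars file and declared in the root module:

```hcl
# env/my.dev.tfvars (hypothetical addition)
app_base_url = "https://my-dev-app.azurewebsites.net"

# matching declaration in the root module's variables.tf
variable "app_base_url" {
  type        = string
  description = "Environment-specific base URL for the application"
}
```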

8. Create a modules folder and create child modules for the resources. For example, to create an App Service we also need a service plan resource to host the web app, so we create two child-module folders: one for the service plan and one for the web app.
9. In each module folder, create main.tf, variables.tf, and output.tf configuration files for the resource being created.
10. In this article we deploy an Azure App Service, so we can see how to deploy it with Terraform from a local machine.
11. We created the App Service plan folder under modules and wrote the main.tf, variables.tf, and output.tf files in it. main.tf contains the resource definition, variables.tf contains the input variables (reusable values passed to main.tf), and output.tf contains the output values of the resource.
main.tf

resource "azurerm_service_plan" "App_plan" {
  name                = var.name
  resource_group_name = var.resource_group_name
  location            = var.location

  zone_balancing_enabled = var.zone_balancing_enabled
  os_type                = var.os_type
  sku_name               = var.sku_name
  tags                   = var.tags
}

 

variables.tf

variable "name" {
  type        = string
  description = "App Service plan name"
}

variable "resource_group_name" {
  type        = string
  description = "App Service plan resource group name"
}

variable "location" {
  type        = string
  description = "App Service plan location"
}

variable "per_site_scale_enabled" {
  type        = bool
  default     = false
  description = "Should per-site scaling be enabled"
}

variable "zone_balancing_enabled" {
  type        = bool
  default     = false
  description = "Should zone balancing be enabled"
}

variable "os_type" {
  type        = string
  description = "App Service plan OS type"
}

variable "sku_name" {
  type        = string
  description = "App Service plan SKU name; select based on requirements"
}

variable "tags" {
  type        = map(string)
  description = "my tags"
  default     = {}
}
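The module's output.tf (mentioned in step 9) should expose the plan's id under the name the root module consumes later (app_service_planId); a minimal sketch:

```hcl
# modules/AppServiceplan/output.tf
output "app_service_planId" {
  description = "Id of the created App Service plan"
  value       = azurerm_service_plan.App_plan.id
}
```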

12. Next, create a WindowsAppServices folder under modules and write the main.tf, variables.tf, and output.tf files in that folder.

main.tf

resource "azurerm_windows_web_app" "demo_webapp" {
  name                = var.name
  resource_group_name = var.resource_group_name
  location            = var.location
  service_plan_id     = var.service_plan_id

  https_only = false

  site_config {
    vnet_route_all_enabled = false
    always_on              = false

    application_stack {
      current_stack  = "dotnet"
      dotnet_version = var.dotnet_version
    }
  }

  app_settings = {
    "ASPNETCORE_DETAILEDERRORS" = "true"
  }
  tags = merge(var.tags,
    {
      "myproject:resource" = var.name
  })
}

variables.tf

variable "name" {
  type        = string
  description = "App Service name"
}

variable "resource_group_name" {
  type        = string
  description = "App Service resource group"
}

variable "location" {
  type        = string
  description = "App Service location"
}

variable "service_plan_id" {
  type        = string
  description = "Id of the service plan hosting the app service"
}

variable "dotnet_version" {
  type        = string
  description = "The version of the .NET stack to use"
  default     = "v6.0"
}

variable "tags" {
  type        = map(string)
  description = "my tags"
  default     = {}
}
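For this module, output.tf could expose values of the created web app; for example, azurerm_windows_web_app exports a default_hostname attribute:

```hcl
# modules/windowsAppServices/demoappservice/output.tf
output "default_hostname" {
  description = "Default hostname of the deployed web app"
  value       = azurerm_windows_web_app.demo_webapp.default_hostname
}
```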

13. Add a locals.tf file. In this file we define the resource names, which can then be assigned and used throughout our code.
locals.tf

locals {
  appservice_plan_name = join("-", compact([var.resource_prefix, var.location, var.environment, var.resource_slug, "demo", "asp"]))
  appservice_name      = join("-", compact([var.resource_prefix, var.location, var.environment, var.resource_slug, var.resource_prefix, "as2"]))
  resource_group_name  = "MY-RG"
}
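To see what these expressions evaluate to: compact() drops empty strings and join("-", ...) hyphenates the remaining parts. With hypothetical values resource_prefix = "my", location = "eastus", environment = "dev", and resource_slug = "web", terraform console shows:

```
> join("-", compact(["my", "eastus", "dev", "web", "demo", "asp"]))
"my-eastus-dev-web-demo-asp"
```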

14. Create a main.tf file in the root directory (the root module). In this file we can define resource blocks for the resources we want to deploy, or define module blocks that call the child modules and pass in the values.
main.tf

data "azurerm_client_config" "current" {}

data "azurerm_resource_group" "k8_rg" {
  name = local.resource_group_name
}

module "app_service_plan" {
  source              = "./modules/AppServiceplan"
  name                = local.appservice_plan_name
  resource_group_name = data.azurerm_resource_group.k8_rg.name
  location            = var.location
  os_type             = "Windows"
  sku_name            = "F1"
  tags                = var.tags
}

module "demo_webapp" {
  source              = "./modules/windowsAppServices/demoappservice"
  name                = local.appservice_name
  resource_group_name = data.azurerm_resource_group.k8_rg.name
  location            = var.location
  service_plan_id     = module.app_service_plan.app_service_planId
  tags                = var.tags
}

15. Create a providers.tf file. We write the provider block for the respective cloud provider in this file; since we are working on Azure, it lets us interact with the Azure provider.
providers.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.12.0"
    }
  }
  backend "azurerm" {}
}

provider "azurerm" {
  features {}
}

16. Create a utils folder and create .sh files to execute the Terraform commands. We write scripts to log in, access the storage account, set the subscription, and point at the config file paths.

configure.sh

#!/bin/bash
az login >> /dev/null
echo "Setting backend subscription id to $TF_B_SUBSCRIPTION_ID"
az account set --subscription $TF_B_SUBSCRIPTION_ID
az account show
echo "Setting state storage account access key"
ACCOUNT_KEY=$(az storage account keys list --resource-group $TF_B_RESOURCE_GROUP --account-name $TF_B_STORAGE_ACCOUNT --query '[0].value' -o tsv)
export ARM_ACCESS_KEY=$ACCOUNT_KEY
echo "ARM_ACCESS_KEY set"

init.sh

#!/bin/bash
echo "Initializing local state"
echo "Setting backend subscription id to $TF_B_SUBSCRIPTION_ID"
az account set --subscription $TF_B_SUBSCRIPTION_ID
az account show
terraform init --backend-config=$TF_B_CONFIG_FILE_PATH --migrate-state
echo "Terraform ready to provision."

plan.sh

#!/bin/bash
echo "Setting provisioning subscription id to $TF_PR_SUBSCRIPTION_ID"
az account set --subscription $TF_PR_SUBSCRIPTION_ID
az account show
terraform plan --var-file=$TF_PLAN_VARIABLES_FILE_PATH --out tfplan

apply.sh

#!/bin/bash
echo "Setting provisioning subscription id to $TF_PR_SUBSCRIPTION_ID"
az account set --subscription $TF_PR_SUBSCRIPTION_ID
az account show
terraform apply tfplan

Step 7: Deploy. Within the container shell, use the following scripts to initialize Terraform, create plans, and apply them.

1) Run . ./utils/configure.sh. This logs you into the Azure CLI and gets/sets an access key for the Azure storage account that holds the Terraform state.
2) Run . ./utils/init.sh to sync your local environment with the remote state.
3) Run . ./utils/plan.sh to generate a Terraform execution plan. This previews which resources will be added, updated, or removed.
4) Run . ./utils/apply.sh to apply the changes.
5) After the Terraform commands complete, verify the output in the console and check that the changes are reflected in the Azure portal.