Infrastructure Modernization

How to Use Terraform for Effective GCP Resource Management

December 27, 2024

Terraform is an Infrastructure-as-Code (IaC) tool developed by HashiCorp that allows you to define, provision, and manage cloud resources programmatically. Using declarative configuration files, you can describe the desired state of your infrastructure, and Terraform will ensure the infrastructure matches that state. It enables consistent, repeatable, and automated deployment of cloud resources.

Benefits of Using Terraform:

  1. Multi-Cloud Support: Works seamlessly across various cloud platforms.
  2. Declarative Language: Describe "what" you want to achieve, and Terraform determines "how" to achieve it.
  3. Version Control: Configuration files can be version-controlled using Git, enabling collaboration.
  4. Repeatability: Easily replicate infrastructure for different environments (e.g., dev, test, production).
  5. Infrastructure Drift Detection: Detects and reconciles changes made outside of Terraform.
  6. Cost-Effective: Automates resource provisioning, minimizing human error and saving time.

Prerequisites:

  • Active Google Cloud Project
  • Enable required APIs in the GCP project
  • Service Account and JSON key
  • Terraform Cloud (HCP Terraform) account with a workspace
  • GitHub repository connected to the workspace for CI/CD

In this blog, we will walk through creating Terraform configuration files and using them to manage GCP resources.

1. Create Provider File (providers.tf)

In this file, define Google Cloud credentials:

provider "google" {
  project     = var.project
  region      = var.region
  zone        = var.zone
  credentials = var.GOOGLE_CREDENTIALS
}

Make sure to define GOOGLE_CREDENTIALS (and any other workspace variables) in your Terraform Cloud workspace, marking the credentials variable as sensitive.
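When running through a Terraform Cloud workspace, the configuration can also declare that backend and pin the provider version explicitly. A minimal sketch — the organization and workspace names below are placeholders, not values from this setup:

```hcl
terraform {
  cloud {
    organization = "example-org" # placeholder organization name
    workspaces {
      name = "gcp-staging" # placeholder workspace name
    }
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0" # pin a provider major version for reproducible runs
    }
  }
}
```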

2. Create Main File (main.tf)

In this file, define resources needed for your project:

# Create a VPC
resource "google_compute_network" "network" {
  name                    = var.vpc_name
  auto_create_subnetworks = false
}

# Create a Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = var.subnet_name
  ip_cidr_range = var.subnet_cidr
  region        = var.region
  network       = google_compute_network.network.id
}

# Create a GKE Autopilot Cluster
# Note: Autopilot manages nodes automatically, so node_config and
# initial_node_count are not allowed; the node service account is set
# via cluster_autoscaling.auto_provisioning_defaults instead.
resource "google_container_cluster" "primary" {
  name                = var.gke_name
  location            = var.region
  enable_autopilot    = true
  deletion_protection = false
  project             = var.project
  network             = google_compute_network.network.id
  subnetwork          = google_compute_subnetwork.subnet.id

  cluster_autoscaling {
    auto_provisioning_defaults {
      service_account = google_service_account.gke_sa.email
    }
  }

  release_channel {
    channel = "REGULAR"
  }
}

# Service Account Creation Details
resource "google_service_account" "gke_sa" {
  account_id   = var.gke_service_account
  display_name = var.gke_service_account_display_name
}

# IAM binding to assign the Kubernetes Engine Admin role to the service account
resource "google_project_iam_member" "gke_sa_k8s_admin" {
  project = var.project
  role    = "roles/container.admin"
  member  = "serviceAccount:${google_service_account.gke_sa.email}"
}

# Create Google Cloud Storage bucket
# Note: lifecycle_rule is a block, not an argument, so the rules from the
# variable are expanded with a dynamic block.
resource "google_storage_bucket" "bucket" {
  name          = var.bucket_name
  location      = var.region
  force_destroy = true

  dynamic "lifecycle_rule" {
    for_each = var.gcs_lifecycle_rule
    content {
      action {
        type = lifecycle_rule.value.action.type
      }
      condition {
        age = lifecycle_rule.value.condition.age
      }
    }
  }
}

# Create Pub/Sub Topic
resource "google_pubsub_topic" "topic" {
  name = var.pubsub_topic_name
}
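The variables file below also defines artifact_registry_name and artifact_registry_format, but no matching resource appears in main.tf. A sketch of the corresponding repository resource, assuming it belongs in the same region:

```hcl
# Create Artifact Registry repository (uses the artifact_registry_* variables)
resource "google_artifact_registry_repository" "repo" {
  location      = var.region
  repository_id = var.artifact_registry_name
  format        = var.artifact_registry_format
}
```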

# Create VM Instance
resource "google_compute_instance" "vm_instance" {
  name         = var.vm_name
  machine_type = var.vm_machine_type
  zone         = var.zone
  labels       = var.vm_labels
  tags         = var.vm_network_tags

  boot_disk {
    initialize_params {
      # Use the full image family path so the provider can resolve it
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
      size  = 100 # disk size in GB
    }
  }

  network_interface {
    network    = google_compute_network.network.id
    subnetwork = google_compute_subnetwork.subnet.id

    access_config {
      // Include this empty block to attach an external IP to the instance
    }
  }
}
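Optionally, an outputs file (outputs.tf — not part of the original setup) can surface useful attributes after each run, such as the cluster endpoint and the VM's external IP:

```hcl
# outputs.tf — convenience outputs shown after apply
output "gke_cluster_endpoint" {
  description = "Endpoint of the GKE cluster"
  value       = google_container_cluster.primary.endpoint
}

output "vm_external_ip" {
  description = "External IP of the VM instance"
  value       = google_compute_instance.vm_instance.network_interface[0].access_config[0].nat_ip
}
```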

3. Create Variables File (variables.tf)

In this file, define the variables used:

variable "project" {
  description = "The GCP project ID"
  type        = string
  default     = "test-staging"
}

variable "GOOGLE_CREDENTIALS" {
  description = "The credentials for the Google Service Account"
  type        = string
  sensitive   = true
}

variable "region" {
  description = "The GCP region"
  type        = string
  default     = "us-east4"
}

variable "zone" {
  description = "The GCP zone"
  type        = string
  default     = "us-east4-a"
}

variable "vpc_name" {
  description = "The name of the VPC"
  type        = string
  default     = "test-staging"
}

variable "subnet_name" {
  description = "The name of the subnet"
  type        = string
  default     = "staging-subnet1"
}

variable "subnet_cidr" {
  description = "The CIDR range of the subnet"
  type        = string
  default     = "10.121.10.0/24"
}

variable "vm_name" {
  description = "The name of the VM instance"
  type        = string
  default     = "test-controller"
}

variable "vm_machine_type" {
  description = "The machine type of the VM instance"
  type        = string
  default     = "n2d-standard-8"
}

variable "vm_labels" {
  description = "Labels to apply to the VM instance"
  type        = map(string)
  default = {
    "environment" = "dev",
    "team"        = "development"
  }
}

variable "vm_network_tags" {
  description = "Network tags to apply to the VM instance"
  type        = list(string)
  default     = ["dev"]
}

# Artifact Registry name
variable "artifact_registry_name" {
  description = "Name of the Artifact Registry repository"
  type        = string
  default     = "test-repository"
}

# Artifact Registry format (e.g., DOCKER, MAVEN, NPM)
variable "artifact_registry_format" {
  description = "The format of the Artifact Registry repository"
  type        = string
  default     = "DOCKER"
}

variable "gke_name" {
  description = "The name of the GKE cluster"
  type        = string
  default     = "test-staging-cluster"
}

variable "gke_service_account" {
  description = "The account ID of the Google service account used by GKE"
  type        = string
  default     = "test-gke-ksa"
}

variable "gke_service_account_display_name" {
  description = "Display name for the service account"
  type        = string
  default     = "test-gke-ksa"
}

variable "bucket_name" {
  description = "The name of the Google Cloud Storage bucket"
  type        = string
  default     = "test-bucket"
}

variable "gcs_lifecycle_rule" {
  description = "Lifecycle rules for the GCS bucket"
  type        = list(any)
  default = [
    {
      action    = { type = "Delete" }
      condition = { age = 90 }
    }
  ]
}

variable "pubsub_topic_name" {
  description = "Name of the Pub/Sub topic"
  type        = string
  default     = "test-pubsub-staging"
}
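Since every variable above ships with a default, per-environment overrides can go in a terraform.tfvars file (or be set as Terraform Cloud workspace variables). The values below are placeholders:

```hcl
# terraform.tfvars — example per-environment overrides (placeholder values)
project     = "my-gcp-project"
region      = "us-east4"
zone        = "us-east4-a"
bucket_name = "my-globally-unique-bucket" # GCS bucket names must be globally unique
```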

Whenever changes are pushed to the connected GitHub repository, Terraform Cloud triggers a run; you can monitor and validate each plan and apply in the workspace's Runs section.

Once the run completes, verify in the GCP console that the resources were created.

Conclusion

Terraform simplifies infrastructure management by providing:

  • A consistent way to define and provision resources.
  • Automation that eliminates manual errors.
  • Scalability and portability across different environments.

By using Terraform, you can seamlessly manage complex GCP infrastructures while reducing operational overhead and enhancing reliability.
