Terraform is an Infrastructure-as-Code (IaC) tool developed by HashiCorp that allows you to define, provision, and manage cloud resources programmatically. Using declarative configuration files, you can describe the desired state of your infrastructure, and Terraform will ensure the infrastructure matches that state. It enables consistent, repeatable, and automated deployment of cloud resources.
In this blog, we will learn how to structure Terraform configuration files and use them to provision and manage resources in GCP.
In the provider configuration file (typically provider.tf), define the Google Cloud provider and its credentials:
provider "google" {
project = var.project
region = var.region
zone = var.zone
credentials = var.GOOGLE_CREDENTIALS
}
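Although not shown in the original setup, it is common to pin the Terraform and provider versions so runs stay reproducible. A minimal sketch (the version constraints below are illustrative, not taken from this project):

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    google = {
      # Pin the Google provider to a known-good major version
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}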
Make sure to define GOOGLE_CREDENTIALS (and any other sensitive values) as workspace variables in Terraform Cloud.
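If you run Terraform locally rather than through Terraform Cloud, one common alternative (the key path below is illustrative) is to read the service account key straight from disk:

provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
  # Local alternative: load the service account key from a file
  # (path is illustrative - point it at your own key)
  credentials = file("~/keys/terraform-sa.json")
}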
In the resources file (typically main.tf), define the resources needed for your project:
# Create a VPC
resource "google_compute_network" "network" {
  name                    = var.vpc_name
  auto_create_subnetworks = false
}

# Create a Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = var.subnet_name
  ip_cidr_range = var.subnet_cidr
  region        = var.region
  network       = google_compute_network.network.id
}
# Create a GKE Autopilot Cluster
resource "google_container_cluster" "primary" {
  name                = var.gke_name
  location            = var.region
  enable_autopilot    = true
  deletion_protection = false
  project             = var.project
  network             = google_compute_network.network.id
  subnetwork          = google_compute_subnetwork.subnet.id

  # Autopilot manages its own nodes, so node_config (and initial_node_count)
  # are not accepted here; the node service account is supplied through the
  # auto-provisioning defaults instead.
  cluster_autoscaling {
    auto_provisioning_defaults {
      service_account = google_service_account.gke_sa.email
    }
  }

  release_channel {
    channel = "REGULAR"
  }
}
# Service Account Creation Details
resource "google_service_account" "gke_sa" {
  account_id   = var.gke_service_account
  display_name = var.gke_service_account_display_name
}

# IAM binding to assign the Kubernetes Engine Admin role to the service account
resource "google_project_iam_member" "gke_sa_k8s_admin" {
  project = var.project
  role    = "roles/container.admin"
  member  = "serviceAccount:${google_service_account.gke_sa.email}"
}
# Create Google Cloud Storage bucket
resource "google_storage_bucket" "bucket" {
  name          = var.bucket_name
  location      = var.region
  force_destroy = true
  # lifecycle_rule is a block, not an argument, so expand the variable with a dynamic block
  dynamic "lifecycle_rule" {
    for_each = var.gcs_lifecycle_rule
    content {
      action { type = lifecycle_rule.value.action.type }
      condition { age = lifecycle_rule.value.condition.age }
    }
  }
}
# Create Pub/Sub Topic
resource "google_pubsub_topic" "topic" {
  name = var.pubsub_topic_name
}
# Create VM Instance
resource "google_compute_instance" "vm_instance" {
  name         = var.vm_name
  machine_type = var.vm_machine_type
  zone         = var.zone
  labels       = var.vm_labels
  tags         = var.vm_network_tags

  boot_disk {
    initialize_params {
      # Qualify the image family with its project so it resolves correctly
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
      size  = 100
    }
  }

  network_interface {
    network    = google_compute_network.network.id
    subnetwork = google_compute_subnetwork.subnet.id
    access_config {
      // Include this empty block to attach an external IP to the instance
    }
  }
}
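After an apply, exported attributes make it easy to grab connection details. A minimal sketch of an outputs file (the file name and choice of outputs are my own additions, not part of the original setup):

output "gke_cluster_name" {
  description = "Name of the GKE Autopilot cluster"
  value       = google_container_cluster.primary.name
}

output "vm_external_ip" {
  description = "External IP attached to the VM instance"
  value       = google_compute_instance.vm_instance.network_interface[0].access_config[0].nat_ip
}

output "bucket_url" {
  description = "gs:// URL of the storage bucket"
  value       = google_storage_bucket.bucket.url
}

Running terraform output after an apply prints these values without digging through the console.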
In the variables file (typically variables.tf), define the variables used:
variable "project" {
description = "The GCP project ID"
type = string
default = "test-staging"
}
variable "GOOGLE_CREDENTIALS" {
description = "The credentials for the Google Service Account"
type = string
sensitive = true
}
variable "region" {
description = "The GCP region"
type = string
default = "us-east4"
}
variable "zone" {
description = "The GCP zone"
type = string
default = "us-east4-a"
}
variable "vpc_name" {
description = "The name of the VPC"
type = string
default = "test-staging"
}
variable "subnet_name" {
description = "The name of the subnet"
type = string
default = "staging-subnet1"
}
variable "subnet_cidr" {
description = "The CIDR range of the subnet"
type = string
default = "10.121.10.0/9"
}
variable "vm_name" {
description = "The name of the VM instance"
type = string
default = "test-controller"
}
variable "vm_machine_type" {
description = "The machine type of the VM instance"
type = string
default = "n2d-standard-8"
}
variable "vm_labels" {
description = "Labels to apply to the VM instance"
type = map(string)
default = {
"environment" = "dev",
"team" = "development"
}
}
variable "vm_network_tags" {
description = "Network tags to apply to the VM instance"
type = list(string)
default = ["dev"]
}
# Artifact Registry name
variable "artifact_registry_name" {
description = "Name of the Artifact Registry repository"
type = string
default = "test-repository"
}
# Artifact Registry format (e.g., DOCKER, MAVEN, NPM)
variable "artifact_registry_format" {
description = "The format of the Artifact Registry repository"
type = string
default = "DOCKER"
}
variable "gke_name" {
description = "The name of the GKE cluster"
type = string
default = "test-staging-cluster"
}
variable "gke_service_account" {
description = "The name of the Kubernetes Service Account"
type = string
default = "test-gke-ksa"
}
variable "gke_service_account_display_name" {
description = "Display name for the service account"
type = string
default = "test-gke-ksa"
}
variable "bucket_name" {
description = "The name of the Google Cloud Storage bucket"
type = string
default = "test-bucket"
}
variable "gcs_lifecycle_rule" {
description = "Lifecycle rules for GCS bucket"
type = list(any)
default = [
{
action = { type = "Delete" }
condition = { age = 90 }
}
]
}
variable "pubsub_topic_name" {
description = "Name of the Pub/Sub topic"
type = string
default = "test-pubsub-staging"
}
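To override these defaults per environment, values can be supplied in a terraform.tfvars file, which Terraform loads automatically. A short sketch (all values below are illustrative):

project     = "my-gcp-project-id"
region      = "us-east4"
zone        = "us-east4-a"
vpc_name    = "staging-vpc"
bucket_name = "my-globally-unique-bucket"

Note that GCS bucket names must be globally unique, so the test-bucket default will almost certainly need to be overridden.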
Whenever the connected GitHub repository is updated, Terraform Cloud triggers a new run; you can review and validate the plan in the workspace's Runs section.
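If you prefer the CLI-driven workflow over the VCS integration, a cloud block connects the configuration to a Terraform Cloud workspace; a minimal sketch (organization and workspace names below are illustrative):

terraform {
  cloud {
    # Replace with your own Terraform Cloud organization and workspace
    organization = "example-org"
    workspaces {
      name = "gcp-staging"
    }
  }
}

With this block in place, terraform plan and terraform apply executed locally run remotely in that workspace.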
Once the run completes, verify in the GCP console that the resources were created.
Terraform simplifies infrastructure management by providing a declarative, consistent, and repeatable workflow. By using it, you can seamlessly manage complex GCP infrastructures while reducing operational overhead and enhancing reliability.