Infrastructure as Code Pipeline: Architecture with Terraform, GitHub Actions, and OCI Always Free Resources
Learn how to build a production-ready Infrastructure as Code pipeline using Terraform, GitHub Actions, and Oracle Cloud Infrastructure (OCI) Always Free resources. This comprehensive guide covers everything from setup to deployment.
Introduction
Infrastructure as Code (IaC) has revolutionized how we manage cloud infrastructure. By treating infrastructure configuration as code, we gain version control, automated deployments, consistency, and reproducibility. In this article, I’ll walk you through building a complete IaC pipeline using Terraform, GitHub Actions, and Oracle Cloud Infrastructure (OCI) Always Free resources.
Architecture Overview
Our production-ready architecture will include:
- Terraform for infrastructure provisioning and management
- GitHub Actions for CI/CD automation
- OCI Always Free resources for cost-effective deployment
- Multi-environment support (dev, staging, prod)
- Security best practices implementation
- Monitoring and logging setup
Target Infrastructure Components
Prerequisites
Before we begin, ensure you have:
- OCI Account with Always Free tier enabled
- GitHub Account with repository access
- Terraform installed locally (for development)
- OCI CLI configured
- Basic understanding of Terraform and GitHub Actions
Project Structure
Let’s start by creating a well-organized project structure:
infrastructure-as-code-pipeline/
├── terraform/
│ ├── environments/
│ │ ├── dev/
│ │ ├── staging/
│ │ └── prod/
│ ├── modules/
│ │ ├── networking/
│ │ ├── compute/
│ │ ├── storage/
│ │ └── monitoring/
│ ├── variables.tf
│ ├── outputs.tf
│ └── versions.tf
├── .github/
│ └── workflows/
│ ├── terraform-plan.yml
│ ├── terraform-apply.yml
│ └── terraform-destroy.yml
├── scripts/
│ ├── setup-oci.sh
│ └── validate-terraform.sh
├── docs/
│ ├── architecture.md
│ └── deployment-guide.md
├── README.md
└── .gitignore
Step 1: OCI Always Free Resources Setup
Understanding OCI Always Free Limits
OCI Always Free tier provides generous resources perfect for learning and small production workloads:
- 2 AMD-based Compute VMs (1/8 OCPU, 1 GB memory each)
- Arm-based Ampere A1 Compute: 4 OCPUs and 24 GB of memory in total, divisible across up to 4 VMs
- 200 GB of combined block volume storage
- 10 GB Object Storage
- 10 TB monthly outbound data transfer
- Load Balancer (10 Mbps)
- Autonomous Database (2 databases, 20 GB each)
OCI Authentication Setup
First, let’s set up OCI authentication for our pipeline:
# Install OCI CLI
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
# Configure OCI CLI
oci setup config
Create a configuration file for our Terraform provider:
# terraform/providers.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    oci = {
      source  = "oracle/oci"
      version = "~> 5.0"
    }
  }
}

provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}
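The provider block above expects several input variables to be declared. A minimal `variables.tf` sketch (the names match the references in the provider block; the default region is just an example):

```hcl
# terraform/variables.tf
variable "tenancy_ocid" {
  description = "OCID of the OCI tenancy"
  type        = string
}

variable "user_ocid" {
  description = "OCID of the API user"
  type        = string
}

variable "fingerprint" {
  description = "Fingerprint of the API signing key"
  type        = string
}

variable "private_key_path" {
  description = "Path to the API signing key (PEM)"
  type        = string
}

variable "region" {
  description = "OCI region identifier, e.g. eu-frankfurt-1"
  type        = string
  default     = "eu-frankfurt-1"
}
```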
Step 2: Terraform Modules Design
Networking Module
Create a reusable networking module that follows OCI best practices:
# terraform/modules/networking/main.tf
resource "oci_core_vcn" "main" {
  compartment_id = var.compartment_id
  cidr_blocks    = var.vcn_cidr_blocks
  display_name   = "${var.environment}-vcn"
  dns_label      = var.dns_label
}

resource "oci_core_subnet" "public" {
  compartment_id    = var.compartment_id
  vcn_id            = oci_core_vcn.main.id
  cidr_block        = var.public_subnet_cidr
  display_name      = "${var.environment}-public-subnet"
  dns_label         = "public"
  security_list_ids = [oci_core_security_list.public.id]
  route_table_id    = oci_core_route_table.public.id
}

resource "oci_core_subnet" "private" {
  compartment_id    = var.compartment_id
  vcn_id            = oci_core_vcn.main.id
  cidr_block        = var.private_subnet_cidr
  display_name      = "${var.environment}-private-subnet"
  dns_label         = "private"
  security_list_ids = [oci_core_security_list.private.id]
  route_table_id    = oci_core_route_table.private.id
}

resource "oci_core_security_list" "public" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-public-security-list"

  ingress_security_rules {
    protocol = "6" # TCP
    source   = "0.0.0.0/0"

    tcp_options {
      min = 80
      max = 80
    }
  }

  ingress_security_rules {
    protocol = "6" # TCP
    source   = "0.0.0.0/0"

    tcp_options {
      min = 443
      max = 443
    }
  }

  # SSH only from within the VCN
  ingress_security_rules {
    protocol = "6" # TCP
    source   = var.vcn_cidr_blocks[0]

    tcp_options {
      min = 22
      max = 22
    }
  }

  egress_security_rules {
    protocol    = "all"
    destination = "0.0.0.0/0"
  }
}

resource "oci_core_security_list" "private" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-private-security-list"

  ingress_security_rules {
    protocol = "all"
    source   = var.vcn_cidr_blocks[0]
  }

  egress_security_rules {
    protocol    = "all"
    destination = "0.0.0.0/0"
  }
}
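The subnets above reference route tables, and the environment configurations later in this article consume a `public_subnet_id` output, neither of which appears in the listing. A minimal sketch of those missing pieces, assuming an internet gateway for the public subnet and a NAT gateway (which OCI does not charge for) for the private one:

```hcl
# terraform/modules/networking/gateways.tf (sketch)
resource "oci_core_internet_gateway" "main" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-igw"
}

resource "oci_core_nat_gateway" "main" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-nat"
}

resource "oci_core_route_table" "public" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-public-rt"

  route_rules {
    destination       = "0.0.0.0/0"
    network_entity_id = oci_core_internet_gateway.main.id
  }
}

resource "oci_core_route_table" "private" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-private-rt"

  route_rules {
    destination       = "0.0.0.0/0"
    network_entity_id = oci_core_nat_gateway.main.id
  }
}

# terraform/modules/networking/outputs.tf (sketch)
output "public_subnet_id" {
  value = oci_core_subnet.public.id
}

output "private_subnet_id" {
  value = oci_core_subnet.private.id
}
```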
Compute Module
Create a compute module that leverages OCI Always Free resources:
# terraform/modules/compute/main.tf
data "oci_identity_availability_domains" "ads" {
  compartment_id = var.compartment_id
}

data "oci_core_images" "ubuntu" {
  compartment_id           = var.compartment_id
  operating_system         = "Canonical Ubuntu"
  operating_system_version = "22.04"
  shape                    = var.instance_shape
  state                    = "AVAILABLE"
  sort_by                  = "TIMECREATED"
  sort_order               = "DESC"
}

resource "oci_core_instance" "app" {
  count               = var.instance_count
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  compartment_id      = var.compartment_id
  display_name        = "${var.environment}-app-${count.index + 1}"
  shape               = var.instance_shape

  # Flex shapes require an explicit size; 1 OCPU / 6 GB keeps two
  # instances comfortably within the Always Free A1 allowance.
  shape_config {
    ocpus         = 1
    memory_in_gbs = 6
  }

  create_vnic_details {
    subnet_id        = var.subnet_id
    assign_public_ip = var.assign_public_ip
  }

  source_details {
    source_type = "image"
    source_id   = data.oci_core_images.ubuntu.images[0].id
  }

  metadata = {
    ssh_authorized_keys = var.ssh_public_key
    user_data = base64encode(templatefile("${path.module}/user_data.sh", {
      environment = var.environment
      app_version = var.app_version
    }))
  }

  freeform_tags = {
    Environment = var.environment
    Project     = var.project_name
    ManagedBy   = "Terraform"
  }
}

# Load Balancer for high availability
resource "oci_load_balancer" "main" {
  compartment_id = var.compartment_id
  display_name   = "${var.environment}-load-balancer"
  shape          = "flexible"
  subnet_ids     = [var.subnet_id]

  shape_details {
    minimum_bandwidth_in_mbps = 10
    maximum_bandwidth_in_mbps = 10
  }
}

resource "oci_load_balancer_backend_set" "main" {
  load_balancer_id = oci_load_balancer.main.id
  name             = "app-backend-set"
  policy           = "ROUND_ROBIN"

  health_checker {
    protocol          = "HTTP"
    port              = 80
    url_path          = "/health"
    interval_ms       = 10000
    timeout_in_millis = 3000
    retries           = 3
  }
}

resource "oci_load_balancer_backend" "main" {
  count            = var.instance_count
  load_balancer_id = oci_load_balancer.main.id
  backendset_name  = oci_load_balancer_backend_set.main.name
  # Backends are reached over the VCN, so register the private address
  ip_address = oci_core_instance.app[count.index].private_ip
  port       = 80
  backup     = false
  drain      = false
  offline    = false
  weight     = 1
}

resource "oci_load_balancer_listener" "main" {
  load_balancer_id         = oci_load_balancer.main.id
  name                     = "app-listener"
  default_backend_set_name = oci_load_balancer_backend_set.main.name
  port                     = 80
  protocol                 = "HTTP"
}
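Downstream tooling (the test script later in the article curls the load balancer, for example) needs the addresses this module creates. A hedged sketch of an `outputs.tf` for the compute module; the `ip_address_details` attribute is what the current OCI provider exposes for flexible load balancers, but verify against the provider version you pin:

```hcl
# terraform/modules/compute/outputs.tf (sketch)
output "instance_public_ips" {
  value = oci_core_instance.app[*].public_ip
}

output "load_balancer_ip" {
  # Flexible load balancers expose a list of IP address objects
  value = oci_load_balancer.main.ip_address_details[0].ip_address
}
```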
Storage Module
Implement object storage for static assets and backups:
# terraform/modules/storage/main.tf
data "oci_objectstorage_namespace" "ns" {
  compartment_id = var.compartment_id
}

resource "oci_objectstorage_bucket" "app_assets" {
  compartment_id = var.compartment_id
  namespace      = data.oci_objectstorage_namespace.ns.namespace
  name           = "${var.environment}-app-assets-${random_string.bucket_suffix.result}"
  access_type    = "NoPublicAccess"
  versioning     = "Enabled"
}

resource "oci_objectstorage_bucket" "backups" {
  compartment_id = var.compartment_id
  namespace      = data.oci_objectstorage_namespace.ns.namespace
  name           = "${var.environment}-backups-${random_string.bucket_suffix.result}"
  access_type    = "NoPublicAccess"
  # OCI does not allow retention rules on versioned buckets, so this
  # bucket relies on the retention rule rather than versioning.
  versioning = "Disabled"

  retention_rules {
    display_name = "backup-retention"

    duration {
      time_amount = 30
      time_unit   = "DAYS"
    }
  }
}

resource "random_string" "bucket_suffix" {
  length  = 8
  special = false
  upper   = false
}

# Pre-authenticated request for secure access
resource "oci_objectstorage_preauthrequest" "app_assets_par" {
  namespace = data.oci_objectstorage_namespace.ns.namespace
  bucket    = oci_objectstorage_bucket.app_assets.name
  name      = "${var.environment}-assets-par"
  # "AnyObjectRead" grants read access to every object in the bucket;
  # "ObjectRead" would require naming a single object.
  access_type = "AnyObjectRead"
  # Note: timestamp() changes on every run, so this PAR is re-issued on each apply
  time_expires = timeadd(timestamp(), "24h")
}
Step 3: Environment Configuration
Development Environment
# terraform/environments/dev/main.tf
module "networking" {
  source = "../../modules/networking"

  compartment_id      = var.compartment_id
  environment         = "dev"
  vcn_cidr_blocks     = ["10.0.0.0/16"]
  public_subnet_cidr  = "10.0.1.0/24"
  private_subnet_cidr = "10.0.2.0/24"
  dns_label           = "dev"
}

module "compute" {
  source = "../../modules/compute"

  compartment_id   = var.compartment_id
  environment      = "dev"
  instance_count   = 1
  instance_shape   = "VM.Standard.A1.Flex" # ARM-based Always Free
  subnet_id        = module.networking.public_subnet_id
  assign_public_ip = true
  ssh_public_key   = var.ssh_public_key
  app_version      = var.app_version
  project_name     = var.project_name
}

module "storage" {
  source = "../../modules/storage"

  compartment_id = var.compartment_id
  environment    = "dev"
}
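One thing the environment configurations deliberately leave open is state storage. The GitHub Actions workflows below run Terraform on ephemeral runners, so without a remote backend every run would start from empty state. OCI Object Storage exposes an S3-compatible endpoint that works with Terraform's `s3` backend; a hedged sketch (the bucket must be created up front, `<namespace>` is your tenancy's Object Storage namespace, and some attribute names have shifted across recent Terraform releases, so check your version's backend docs):

```hcl
# terraform/environments/dev/backend.tf (sketch)
terraform {
  backend "s3" {
    bucket   = "terraform-state" # pre-created Object Storage bucket
    key      = "dev/terraform.tfstate"
    region   = "eu-frankfurt-1"
    endpoint = "https://<namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com"

    # The S3 backend's AWS-specific checks do not apply to OCI's endpoint
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
```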
Production Environment
# terraform/environments/prod/main.tf
module "networking" {
  source = "../../modules/networking"

  compartment_id      = var.compartment_id
  environment         = "prod"
  vcn_cidr_blocks     = ["10.1.0.0/16"]
  public_subnet_cidr  = "10.1.1.0/24"
  private_subnet_cidr = "10.1.2.0/24"
  dns_label           = "prod"
}

module "compute" {
  source = "../../modules/compute"

  compartment_id   = var.compartment_id
  environment      = "prod"
  instance_count   = 2 # High availability
  instance_shape   = "VM.Standard.A1.Flex"
  subnet_id        = module.networking.public_subnet_id
  assign_public_ip = true
  ssh_public_key   = var.ssh_public_key
  app_version      = var.app_version
  project_name     = var.project_name
}

module "storage" {
  source = "../../modules/storage"

  compartment_id = var.compartment_id
  environment    = "prod"
}
Step 4: GitHub Actions CI/CD Pipeline
Terraform Plan Workflow
# .github/workflows/terraform-plan.yml
name: "Terraform Plan"

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
    paths:
      - "terraform/**"

env:
  TF_VERSION: "1.6.0"

jobs:
  plan:
    name: "Plan"
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [dev, staging, prod]
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Setup OCI CLI
        uses: oracle-actions/setup-oci-cli@v1
        with:
          auth-type: "api_key"
          tenancy: ${{ secrets.OCI_TENANCY_OCID }}
          user-id: ${{ secrets.OCI_USER_OCID }}
          fingerprint: ${{ secrets.OCI_FINGERPRINT }}
          key-content: ${{ secrets.OCI_PRIVATE_KEY }}

      - name: Terraform Init
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform init

      - name: Terraform Format Check
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform fmt -check

      - name: Terraform Validate
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform validate

      - name: Terraform Plan
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform plan -out=tfplan
        env:
          TF_VAR_compartment_id: ${{ secrets.OCI_COMPARTMENT_ID }}
          TF_VAR_ssh_public_key: ${{ secrets.SSH_PUBLIC_KEY }}

      - name: Upload Plan Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: terraform-plan-${{ matrix.environment }}
          path: terraform/environments/${{ matrix.environment }}/tfplan
Terraform Apply Workflow
# .github/workflows/terraform-apply.yml
name: "Terraform Apply"

on:
  workflow_run:
    workflows: ["Terraform Plan"]
    types:
      - completed
    branches: [main]

env:
  TF_VERSION: "1.6.0"

jobs:
  apply:
    name: "Apply"
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    strategy:
      matrix:
        environment: [dev, staging, prod]
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Setup OCI CLI
        uses: oracle-actions/setup-oci-cli@v1
        with:
          auth-type: "api_key"
          tenancy: ${{ secrets.OCI_TENANCY_OCID }}
          user-id: ${{ secrets.OCI_USER_OCID }}
          fingerprint: ${{ secrets.OCI_FINGERPRINT }}
          key-content: ${{ secrets.OCI_PRIVATE_KEY }}

      - name: Download Plan Artifacts
        uses: actions/download-artifact@v4
        with:
          name: terraform-plan-${{ matrix.environment }}
          path: terraform/environments/${{ matrix.environment }}
          # The artifact lives on the triggering "Terraform Plan" run, so a
          # cross-workflow download needs the run id and a token
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Terraform Init
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform init

      - name: Terraform Apply
        working-directory: terraform/environments/${{ matrix.environment }}
        run: terraform apply tfplan
        env:
          TF_VAR_compartment_id: ${{ secrets.OCI_COMPARTMENT_ID }}
          TF_VAR_ssh_public_key: ${{ secrets.SSH_PUBLIC_KEY }}

      - name: Notify Deployment
        if: success()
        run: |
          echo "✅ Infrastructure deployed successfully to ${{ matrix.environment }}"
          # Add your notification logic here (Slack, Teams, etc.)
Step 5: Security Best Practices
IAM Policies
Implement least-privilege access:
# terraform/modules/security/main.tf
resource "oci_identity_policy" "terraform_policy" {
  name           = "terraform-policy"
  description    = "Policy for Terraform automation"
  compartment_id = var.compartment_id

  statements = [
    "allow group terraform-group to manage all-resources in compartment id ${var.compartment_id}",
    "allow group terraform-group to read audit-logs in compartment id ${var.compartment_id}",
    "allow group terraform-group to manage object-family in compartment id ${var.compartment_id}"
  ]
}

resource "oci_identity_dynamic_group" "compute_instances" {
  compartment_id = var.compartment_id
  description    = "Dynamic group for compute instances"
  # Match every instance in the compartment rather than hard-coding OCIDs
  matching_rule = "All {instance.compartment.id = '${var.compartment_id}'}"
  name          = "compute-instances-dg"
}

resource "oci_identity_policy" "compute_policy" {
  name           = "compute-policy"
  description    = "Policy for compute instances"
  compartment_id = var.compartment_id

  statements = [
    "allow dynamic-group compute-instances-dg to read object-family in compartment id ${var.compartment_id}",
    "allow dynamic-group compute-instances-dg to manage instance-family in compartment id ${var.compartment_id}"
  ]
}
Network Security
Implement proper network segmentation:
# Additional security list rules
resource "oci_core_security_list" "private" {
  # ... existing configuration ...

  ingress_security_rules {
    protocol = "6" # TCP
    source   = var.public_subnet_cidr

    tcp_options {
      min = 3306 # MySQL
      max = 3306
    }
  }

  ingress_security_rules {
    protocol = "6" # TCP
    source   = var.public_subnet_cidr

    tcp_options {
      min = 6379 # Redis
      max = 6379
    }
  }
}
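The best-practices summary at the end of this article also recommends network security groups (NSGs), which the listings above never show. Unlike security lists, NSGs attach to individual VNICs, so rules follow the workload rather than the subnet. A hedged sketch of how one could be added to the networking module (names are illustrative):

```hcl
# Network security group scoped to the app VNICs (sketch)
resource "oci_core_network_security_group" "app" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.main.id
  display_name   = "${var.environment}-app-nsg"
}

resource "oci_core_network_security_group_security_rule" "app_http" {
  network_security_group_id = oci_core_network_security_group.app.id
  direction                 = "INGRESS"
  protocol                  = "6" # TCP
  source                    = "0.0.0.0/0"
  source_type               = "CIDR_BLOCK"

  tcp_options {
    destination_port_range {
      min = 80
      max = 80
    }
  }
}
```

To take effect, the NSG id would also be listed in the instance's `create_vnic_details` via `nsg_ids`.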
Step 6: Monitoring and Observability
Cloud Monitoring Setup
# terraform/modules/monitoring/main.tf
resource "oci_monitoring_alarm" "cpu_utilization" {
  compartment_id        = var.compartment_id
  display_name          = "${var.environment}-cpu-utilization-alarm"
  is_enabled            = true
  metric_compartment_id = var.compartment_id
  namespace             = "oci_computeagent"
  query                 = "CpuUtilization[1m].mean() > 80"
  severity              = "WARNING"
  destinations          = [oci_ons_notification_topic.main.topic_id]
}

resource "oci_monitoring_alarm" "memory_utilization" {
  compartment_id        = var.compartment_id
  display_name          = "${var.environment}-memory-utilization-alarm"
  is_enabled            = true
  metric_compartment_id = var.compartment_id
  namespace             = "oci_computeagent"
  query                 = "MemoryUtilization[1m].mean() > 85"
  severity              = "WARNING"
  destinations          = [oci_ons_notification_topic.main.topic_id]
}

resource "oci_ons_notification_topic" "main" {
  compartment_id = var.compartment_id
  name           = "${var.environment}-notifications"
}

resource "oci_ons_subscription" "email" {
  compartment_id = var.compartment_id
  endpoint       = var.notification_email
  protocol       = "EMAIL"
  topic_id       = oci_ons_notification_topic.main.id
}
Step 7: Application Deployment
User Data Script
Create a user data script for automated application deployment:
#!/bin/bash
# terraform/modules/compute/user_data.sh

# Update system
apt-get update
apt-get upgrade -y

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
usermod -aG docker ubuntu

# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/download/v2.20.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Create application directory
mkdir -p /opt/app
cd /opt/app

# Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  app:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
EOF

# Create nginx configuration
cat > nginx.conf << 'EOF'
events {
    worker_connections 1024;
}

http {
    upstream app_backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name _;

        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }

        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
EOF

# Start application
docker-compose up -d

# Install the OCI CLI for instance-side tooling
curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh | bash
Step 8: Cost Optimization
Always Free Resource Monitoring
Create a cost monitoring script:
#!/bin/bash
# scripts/monitor-costs.sh

# Check Always Free resource usage
echo "=== OCI Always Free Resource Usage ==="

# Check compute instances (hyphenated JMESPath keys must be quoted)
echo "Compute Instances:"
oci compute instance list --compartment-id "$OCI_COMPARTMENT_ID" \
  --query 'data[?shape==`VM.Standard.A1.Flex`].{Name:"display-name",Shape:shape,State:"lifecycle-state"}' \
  --output table

# Check storage usage
echo "Storage Usage:"
oci os bucket list --compartment-id "$OCI_COMPARTMENT_ID" \
  --query 'data[].{Name:name,Size:"approximate-size",Count:"approximate-count"}' \
  --output table

# Check load balancer
echo "Load Balancer:"
oci lb load-balancer list --compartment-id "$OCI_COMPARTMENT_ID" \
  --query 'data[].{Name:"display-name",Shape:"shape-name",State:"lifecycle-state"}' \
  --output table
Step 9: Testing and Validation
Infrastructure Testing
Create comprehensive tests for your infrastructure:
# terraform/tests/main.tf
# Minimal fixture exercised by the test harness (Terratest or `terraform test`)
resource "oci_core_vcn" "test_vcn" {
  compartment_id = var.test_compartment_id
  cidr_blocks    = ["10.0.0.0/16"]
  display_name   = "test-vcn"
}

resource "oci_core_subnet" "test_subnet" {
  compartment_id = var.test_compartment_id
  vcn_id         = oci_core_vcn.test_vcn.id
  cidr_block     = "10.0.1.0/24"
  display_name   = "test-subnet"
}

# Expose IDs so the harness can assert the resources were created
output "vcn_id" {
  value = oci_core_vcn.test_vcn.id
}

output "subnet_id" {
  value = oci_core_subnet.test_subnet.id
}
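Since the pipeline pins Terraform 1.6, the native `terraform test` framework is a lightweight alternative to Terratest for plan-time assertions. A sketch (the file name and placeholder compartment OCID are illustrative):

```hcl
# terraform/tests/networking.tftest.hcl (sketch)
variables {
  test_compartment_id = "ocid1.compartment.oc1..example"
}

run "vcn_has_expected_cidr" {
  # Assert against the plan without creating real resources
  command = plan

  assert {
    condition     = oci_core_vcn.test_vcn.cidr_blocks[0] == "10.0.0.0/16"
    error_message = "VCN CIDR should be 10.0.0.0/16"
  }
}
```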
Automated Testing Script
#!/bin/bash
# scripts/test-infrastructure.sh
set -e

echo "Running infrastructure tests..."

# Test Terraform syntax
echo "Testing Terraform syntax..."
for env in dev staging prod; do
  echo "Testing $env environment..."
  cd "terraform/environments/$env"
  terraform init
  terraform validate
  # -detailed-exitcode returns 2 when the plan contains changes, which
  # `set -e` would otherwise treat as a failure; only 1 means an error
  set +e
  terraform plan -detailed-exitcode
  plan_exit=$?
  set -e
  if [ "$plan_exit" -eq 1 ]; then
    echo "Plan failed for $env"
    exit 1
  fi
  cd ../../..
done

# Test OCI connectivity
echo "Testing OCI connectivity..."
oci compute instance list --compartment-id "$OCI_COMPARTMENT_ID" --limit 1

# Test load balancer health
echo "Testing load balancer health..."
LB_IP=$(oci lb load-balancer list --compartment-id "$OCI_COMPARTMENT_ID" \
  --query 'data[0]."ip-addresses"[0]."ip-address"' --raw-output)
curl -f "http://$LB_IP/health"

echo "All tests passed! ✅"
Step 10: Disaster Recovery
Backup Strategy
Implement automated backups:
# terraform/modules/backup/main.tf
resource "oci_core_volume_backup_policy" "daily_backup" {
  compartment_id = var.compartment_id
  display_name   = "${var.environment}-daily-backup-policy"

  schedules {
    backup_type       = "INCREMENTAL"
    period            = "ONE_DAY"
    retention_seconds = 2592000 # 30 days
    time_zone         = "UTC"
  }
}

resource "oci_core_volume_backup_policy_assignment" "backup_assignment" {
  asset_id  = var.boot_volume_id
  policy_id = oci_core_volume_backup_policy.daily_backup.id
}
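Block volume backups cover the instances; for objects written to the backups bucket, an Object Storage lifecycle rule can expire stale copies so the 10 GB Always Free allowance isn't silently consumed. A hedged sketch, assuming it lives alongside the storage module's `backups` bucket and namespace data source:

```hcl
# Expire backup objects after 30 days (sketch)
resource "oci_objectstorage_object_lifecycle_policy" "backups" {
  namespace = data.oci_objectstorage_namespace.ns.namespace
  bucket    = oci_objectstorage_bucket.backups.name

  rules {
    name        = "expire-old-backups"
    action      = "DELETE"
    is_enabled  = true
    time_amount = 30
    time_unit   = "DAYS"
  }
}
```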
Best Practices Summary
Security
- ✅ Use least-privilege IAM policies
- ✅ Implement network segmentation
- ✅ Enable VCN flow logs
- ✅ Use security lists and NSGs
- ✅ Encrypt data at rest and in transit
Cost Optimization
- ✅ Leverage Always Free tier resources
- ✅ Implement auto-scaling policies
- ✅ Use spot instances where possible
- ✅ Monitor resource usage
- ✅ Set up cost alerts
Reliability
- ✅ Multi-AZ deployment
- ✅ Load balancer for high availability
- ✅ Automated backups
- ✅ Health checks and monitoring
- ✅ Disaster recovery plan
Performance
- ✅ Use appropriate instance shapes
- ✅ Implement caching strategies
- ✅ Optimize network configuration
- ✅ Monitor performance metrics
- ✅ Auto-scaling based on demand
Conclusion
This Infrastructure as Code pipeline demonstrates a production-ready architecture using Terraform, GitHub Actions, and OCI Always Free resources. The solution provides:
- Automated infrastructure provisioning with version control
- Multi-environment support (dev, staging, prod)
- Security best practices implementation
- Cost-effective deployment using Always Free resources
- High availability with load balancing
- Monitoring and alerting for operational excellence
- Disaster recovery capabilities
The architecture is scalable, maintainable, and follows DevOps best practices. You can extend this foundation by adding more advanced features like:
- Kubernetes clusters for container orchestration
- Database services for persistent data
- CDN integration for global content delivery
- Advanced monitoring with Grafana and Prometheus
- Service mesh for microservices communication
Remember to regularly review and update your infrastructure code, monitor costs, and stay current with OCI’s Always Free tier offerings and best practices.
This article demonstrates how to build a production-ready Infrastructure as Code pipeline. The complete source code and additional resources are available in the GitHub repository.