Terraform automation on IBM Cloud (#2/2): Provisioning a PostgreSQL Database on IBM Cloud using Terraform

This article is the second in a series on automation on IBM Cloud using Terraform. In the previous article (https://alain-airom.medium.com/terraform-automation-on-ibm-cloud-%C2%BD-provisioning-a-kubernetes-cluster-on-ibm-cloud-using-edb939a966c8) I showed how simple it is to provision an IKS (IBM-managed Kubernetes) cluster with Terraform. In this one, I’ll show how to provision a PostgreSQL database and also how to populate it with some data, all through automation.

So what is PostgreSQL? PostgreSQL is a powerful, open-source object-relational database that is highly customizable. It’s a feature-rich enterprise database with JSON support, giving you the best of both the SQL and NoSQL worlds. IBM Cloud Databases for PostgreSQL makes PostgreSQL even better by managing it for you. Features include high availability, automated backup orchestration, autoscaling, and decoupled allocation of storage, RAM, and vCPUs. Pricing is based on the underlying disk, RAM, and optional vCPU allocation, as well as backup storage usage. The service is HIPAA-ready and compliant with PCI-DSS, SOC 1 Type 2, SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 27701, and GDPR.

IBM Cloud comes with a user-friendly interface for provisioning new services and managing them. However, sometimes (very often, actually :D) automation is needed.

In one of my recent assignments, I was asked to provide Terraform code to be used inside a DevSecOps toolchain on IBM Cloud, so that users could automatically provision a PostgreSQL service.

The initial steps for this task are much the same as those described in the IKS article.

First of all, the variables should be set, either locally for testing purposes or on the server side. The example below sets the variables locally:

===========================

#!/bin/bash

export TF_VAR_ibmcloud_api_key="ibmcloud_api_key_xxxxxxx"
export TF_VAR_iaas_classic_username="the_account_name_xxxxxxx"
export TF_VAR_iaas_classic_api_key="ibmcloud_iaas_api_key_xxxxxxx"
export TF_VAR_ibmcloud_region="ibmcloud_region_xxxxxxx"

===========================

A list of IBM Cloud regions can be found here: https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones

Otherwise, the region list can be retrieved on the command line with “ibmcloud regions” (assuming the ibmcloud CLI is installed).

Note: In order to initialize the variables with the script above on your local machine, it should be run with “source” instead of being executed as a standard bash file, so that the exported variables persist in your current shell.

On the server side, though, the variables are set in the “providers.tf” file.
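As a sketch, a minimal “providers.tf” could look like the following. The attribute names match the IBM provider documentation, but treat the exact layout as an illustration rather than the original file:

```hcl
# providers.tf -- illustrative sketch, wired to the variables exported above
provider "ibm" {
  ibmcloud_api_key      = var.ibmcloud_api_key
  region                = var.ibmcloud_region
  iaas_classic_username = var.iaas_classic_username
  iaas_classic_api_key  = var.iaas_classic_api_key
}
```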

Variables

The variables, whether on the local machine or on the server side, can be declared in the “variables.tf” file.

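A minimal “variables.tf” matching the TF_VAR_* exports above might look like this (descriptions and the sample default are my own assumptions):

```hcl
# variables.tf -- sketch matching the exported TF_VAR_* environment variables
variable "ibmcloud_api_key" {
  description = "IBM Cloud API key (from TF_VAR_ibmcloud_api_key)"
  sensitive   = true
}

variable "iaas_classic_username" {
  description = "Classic infrastructure account user name"
}

variable "iaas_classic_api_key" {
  description = "Classic infrastructure API key"
  sensitive   = true
}

variable "ibmcloud_region" {
  description = "IBM Cloud region, e.g. eu-de"
  default     = "eu-de"
}
```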

Versions file

The “versions.tf” file should pin the required Terraform and IBM provider versions.

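A minimal “versions.tf” might look like the following sketch; the provider source matches the IBM-Cloud/ibm registry entry linked at the end of this article, while the version constraints are just examples:

```hcl
# versions.tf -- sketch pinning Terraform and the IBM Cloud provider
terraform {
  required_version = ">= 0.13"

  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}
```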

Main Script

After the steps above, the database creation is very straightforward. The main Terraform script does the actual provisioning of your database.
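As a sketch of what such a main script can look like, the snippet below uses the “ibm_database” resource from the IBM provider. The instance name, resource group, plan, and password are placeholder assumptions, and some of them are intentionally hard coded:

```hcl
# main.tf -- illustrative sketch of provisioning Databases for PostgreSQL
data "ibm_resource_group" "group" {
  name = "default" # hard coded on purpose; could be a variable
}

resource "ibm_database" "postgresql" {
  name              = "my-postgresql-instance"      # hypothetical name
  service           = "databases-for-postgresql"
  plan              = "standard"                    # hard coded on purpose
  location          = var.ibmcloud_region
  resource_group_id = data.ibm_resource_group.group.id
  adminpassword     = "change-me-please-15chars"    # placeholder only
}
```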

As you see, some parameters are hard coded, because it was intended to be this way; they could easily be turned into parameters as well.

The Terraform script execution goes through the classic steps: “terraform init”, “terraform plan”, and “terraform apply”.

The database-specific parameters can be supplied at execution time.
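One common way to do this (an illustration, not necessarily the original snippet; the variable names here are hypothetical) is to pass them as -var flags on apply:

```shell
# Pass database-specific parameters at apply time (names are assumptions)
terraform apply \
  -var="ibmcloud_region=eu-de" \
  -var="db_admin_password=change-me-please-15chars"
```

Alternatively, the same values can be kept in a terraform.tfvars file so they do not have to be retyped on every run.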

Populating the database

If you need the database to be prepared with some data as soon as it is provisioned, this can be done through SQL commands kept in *.sql files, for example.

In this specific case, I used a bash script to do all the necessary tasks so that the database contained the required information provided in different SQL script files. Using SQL script files is quite handy, because separate tasks can be put in specific script files, and you can add or remove any of those tasks if needed. Debugging is also easier, because each SQL script is isolated and handles one specific task.

As shown in the script below, once logged in to IBM Cloud, the database connection information is extracted as a JSON file and all required parameters are parsed using jq.
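The following is a sketch of such a script, assuming the cloud-databases CLI plugin is installed and psql is available; the deployment name, jq field paths, and file layout are assumptions to adapt to your own instance:

```shell
#!/bin/bash
# Sketch only -- deployment name, jq paths and SQL file layout are assumptions.
set -euo pipefail

DEPLOYMENT_NAME="my-postgresql-instance"   # hypothetical service name

# Log in non-interactively with the same API key used by Terraform
ibmcloud login --apikey "${TF_VAR_ibmcloud_api_key}" -r "${TF_VAR_ibmcloud_region}"

# Extract the connection information as JSON (cloud-databases plugin required)
ibmcloud cdb deployment-connections "${DEPLOYMENT_NAME}" --json > connection.json

# Parse the required parameters with jq (field paths are illustrative)
PGHOST=$(jq -r '.connection.postgres.hosts[0].hostname' connection.json)
PGPORT=$(jq -r '.connection.postgres.hosts[0].port' connection.json)

# Run each SQL script in order; one isolated task per file
for script in ./sql/*.sql; do
  PGPASSWORD="${ADMIN_PASSWORD}" psql \
    "host=${PGHOST} port=${PGPORT} dbname=ibmclouddb user=admin sslmode=require" \
    -f "${script}"
done
```

Because each *.sql file is run independently, dropping a task is as simple as deleting (or renaming) its file.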

Voilà, all of this is done with very little effort thanks to Terraform and IBM Cloud :D

Useful information on the IBM Cloud Terraform provider can be found here: https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs/resources/container_cluster

JQ tool: https://stedolan.github.io/jq/

IT guy for a long time, looking for technical challenges everyday!