Terraform automation on IBM Cloud (#2/2): Provisioning a PostgreSQL Database on IBM Cloud using Terraform

Alain Airom (Ayrom)
5 min read · Oct 27, 2021


This article is the second of a series on automation on IBM Cloud using Terraform. In the previous article (https://alain-airom.medium.com/terraform-automation-on-ibm-cloud-%C2%BD-provisioning-a-kubernetes-cluster-on-ibm-cloud-using-edb939a966c8) I showed how simple it is to provision an IKS (IBM-managed Kubernetes) cluster with Terraform. In this one, I'll show how to provision a PostgreSQL database and how to populate it with some data using automation.

So what is PostgreSQL? PostgreSQL is a powerful, open-source object-relational database that is highly customizable. It's a feature-rich enterprise database with JSON support, giving you the best of both the SQL and NoSQL worlds. Databases for PostgreSQL makes PostgreSQL even better by managing it for you. Features include high availability, automated backup orchestration, autoscaling, and decoupled allocation of storage, RAM, and vCPUs. Databases for PostgreSQL pricing is based on underlying disk, RAM, and optional vCPU allocation, as well as backup storage usage. The service is HIPAA-ready and compliant with PCI-DSS, SOC 1 Type 2, SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 27701, and GDPR.

IBM Cloud provides a user-friendly interface for provisioning new services and for managing them. However, sometimes (very often, actually :D) automation is needed.

In one of my recent assignments, I was asked to provide Terraform code to be used inside a DevSecOps toolchain on IBM Cloud, so that users could provision a PostgreSQL service automatically.

The initial steps for this task are much the same as the ones described in the IKS article.

First of all, the variables should be set, either locally for testing purposes, or on the server side. The example below sets the variables locally:



export TF_VAR_ibmcloud_api_key="ibmcloud_api_key_xxxxxxx"
export TF_VAR_iaas_classic_username="the_account_name_xxxxxxx"
export TF_VAR_iaas_classic_api_key="ibmcloud_iaas_api_key_xxxxxxx"
export TF_VAR_ibmcloud_region="ibmcloud_region_xxxxxxx"


A list of IBM Cloud regions can be found here: https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones

Otherwise, it can be retrieved on the command line (assuming the ibmcloud CLI is installed):

ibmcloud regions

Note: in order to initialize the variables with the script above on your local machine, run it with "source" instead of executing it as a standard bash script; otherwise the exported variables only exist in a subshell and are lost when the script exits.
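A minimal sketch of this, assuming the exports live in a file named set-tf-vars.sh (the file name and region value are illustrative):

```shell
# Create a file holding the exports (hypothetical file name and values).
cat > set-tf-vars.sh <<'EOF'
export TF_VAR_ibmcloud_api_key="ibmcloud_api_key_xxxxxxx"
export TF_VAR_ibmcloud_region="eu-de"
EOF

# Running it as a normal script would only set the variables in a
# subshell; sourcing it sets them in the current shell, where
# Terraform can pick them up.
source ./set-tf-vars.sh

# Verify that the variables are visible to Terraform.
env | grep '^TF_VAR_'
```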

On the server side, though, the variables are set in the "providers.tf" file.

provider "ibm" {
  ibmcloud_api_key = var.ibmcloud_api_key
  resource_group   = var.resource_group
  region           = var.ibmcloud_region
}

The variables, on the local machine or on the server side, can be set in the "variable.tf" file as shown below.


variable "ibmcloud_api_key" {}

variable "resource_group" {
  default = "xxxxxx"
}

variable "ibmcloud_region" {
  default = "xxxxx"
}

variable "zone" {
  default = "xxxx"
}

variable "db_name" {
  description = "Name of the PostgreSQL database"
  type        = string
  default     = "xxxxx"
}

Versions file

The "versions.tf" file should look something like the example below.


terraform {
  required_providers {
    ibm = {
      source  = "IBM-Cloud/ibm"
      version = "1.23.0"
    }
  }
}

Main Script

After the steps above, the database creation is very straightforward. The main Terraform script below does the provisioning of your database:

data "ibm_resource_group" "icd_postgres_group" {
  name = var.resource_group
}

resource "ibm_database" "icd_postgres" {
  name          = var.db_name
  location      = var.ibmcloud_region
  service       = "databases-for-postgresql"
  plan          = "standard"
  adminpassword = "xxxxxxx"
  # If not provided, the default resource group is used.
  resource_group_id = data.ibm_resource_group.icd_postgres_group.id

  version = "12"

  # Total amount of memory to be shared between the DB members.
  # PostgreSQL has 2 members by default.
  members_memory_allocation_mb = 3072
  members_disk_allocation_mb   = 20480

  timeouts {
    # Default timeout is 10 minutes.
    create = "15m"
    delete = "3m"
  }

  provisioner "local-exec" {
    environment = {
      db_name                 = var.db_name
      IBMCLOUD_API_KEY        = var.ibmcloud_api_key
      ibmcloud_region         = var.ibmcloud_region
      ibmcloud_resource_group = var.resource_group
    }

    working_dir = "postgres"
    command     = "prepcommands-v2.sh"
  }
}

As you see, some parameters are hard-coded, because it was intended to be this way. They could easily be turned into variables as well:

adminpassword                = "xxxxxxx"
members_memory_allocation_mb = 3072
members_disk_allocation_mb   = 20480

timeouts {
  # Default timeout is 10 minutes.
  create = "15m"
  delete = "3m"
}
The Terraform script execution goes through the classic steps:

terraform init
terraform plan
terraform apply

To plan and apply only the database resource, use targeting:

terraform plan -target="ibm_database.icd_postgres"
terraform apply -target="ibm_database.icd_postgres"

Populating the database

If you need the database to be populated with some data as soon as it is provisioned, this can be done through SQL commands stored in *.sql files, for example.

In this specific case, I used the following bash script to do all the necessary tasks so that the database contains the required information provided in different SQL script files. Using SQL script files is quite handy, because separate tasks can be put in specific script files, and you can add or remove any of those tasks as needed. Debugging is also easier because each SQL script is isolated and handles one specific task.
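With that layout, the step files can be executed in order with a simple loop. A sketch, using a hypothetical sqlsteps directory and echo standing in for the real psql call:

```shell
# Hypothetical layout: one task per SQL file, named so that
# lexical order matches the intended execution order.
mkdir -p sqlsteps
for i in 1 2 3; do
  echo "-- step $i" > "sqlsteps/db_step${i}.sql"
done

# Run each step in sequence; against a real database you would
# replace echo with something like: psql -X -f "$f"
for f in sqlsteps/db_step*.sql; do
  echo "applying $f"
done
```

Adding or removing a task is then just adding or removing one file.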

As shown in the script below, once logged into IBM Cloud, the database information is exported as a JSON file and all required parameters are parsed using jq.


### The name of the PostgreSQL db is set by the Terraform script which provisioned the database
echo "Retrieving the $db_name database"

### logging into IBM Cloud
echo "Logging into IBM Cloud using IBMCLOUD_API_KEY environment variable"
ibmcloud login -a cloud.ibm.com -r $ibmcloud_region -g $ibmcloud_resource_group

### export the database information in a JSON format
echo "Exporting database $db_name properties to JSON"
ibmcloud cdb cxn --all --json $db_name > db_desc.json

echo "Setting database connectivity parameters"

export PGHOST=$(jq -r '.connection.postgres.hosts[0].hostname' db_desc.json)
export PGPORT=$(jq -r '.connection.postgres.hosts[0].port' db_desc.json)
export PGDATABASEPATH=$(jq -r '.connection.postgres.database' db_desc.json)

#export PGDATABASE=${PGDATABASE:-ibmclouddb}
export PGUSER=${PGUSER:-admin}
export PGPASSWORD=${PGPASSWORD:-adminpassword}


echo "Populating the database $db_name"

psql -X -c "DROP database if exists POSTGRES_DB" -f db_step1.sql -f db_step2.sql -f db_step3.sql -f db_step4.sql -f db_step5.sql -f db_step6.sql -f db_step7.sql -f db_step8.sql -c "COMMIT"
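The jq extraction above can be checked offline against a minimal sample of the connection JSON (the hostname and port below are made up, and jq is assumed to be installed):

```shell
# A minimal, made-up stand-in for the `ibmcloud cdb cxn` JSON output.
cat > db_desc.json <<'EOF'
{
  "connection": {
    "postgres": {
      "database": "ibmclouddb",
      "hosts": [
        { "hostname": "example.databases.appdomain.cloud", "port": 32551 }
      ]
    }
  }
}
EOF

# Same jq filters as in the script above.
export PGHOST=$(jq -r '.connection.postgres.hosts[0].hostname' db_desc.json)
export PGPORT=$(jq -r '.connection.postgres.hosts[0].port' db_desc.json)

echo "$PGHOST:$PGPORT"
```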

Voilà, all of this is done with very little effort thanks to Terraform and IBM Cloud :D

Useful information on IBM Cloud Terraform for clusters can be found here: https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs/resources/container_cluster

JQ tool: https://stedolan.github.io/jq/
