Terraform – an introduction to scalable architecture

4 September 2020

Let’s face it. The first developer or cloud architect breathing the basement air of a terraformed Mars is still a few centuries away.

And even then, the high ping would be a deal-breaker.

But why not have the best of both worlds and start by terraforming clouds? Software development has a principle called DRY (Don’t Repeat Yourself), and together with automation and version control, it lets us create cloud architecture that is both functional and scalable.

Terraform

I’m talking about HashiCorp’s Terraform, a tool for safely building and changing infrastructure on the most popular cloud platforms, such as Amazon Web Services, Azure, and Google Cloud. It lets us define Infrastructure as Code (IaC).
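To give a first impression of what that means in practice, a Terraform configuration is plain text describing the resources you want to exist. The snippet below is a minimal, hypothetical sketch (the region and bucket name are placeholders), not a complete setup:

```hcl
# A minimal, hypothetical sketch of Infrastructure as Code with Terraform.
provider "aws" {
  region = "eu-central-1"   # placeholder region
}

# A single S3 bucket, declared in text instead of clicked together in the console.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"   # placeholder name; bucket names must be globally unique
}
```

Running `terraform apply` creates the bucket; remove the block and apply again, and Terraform deletes it.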

Tales from the industry – A “simple” project

Imagine the following scenario: a stakeholder approaches you, the developer/operations person, with an idea for a project and a few requirements. They want a small machine to run a web crawler on, with the results landing in file storage. They hand you access to an AWS account and leave.

You could log in to the account, create an IAM user, start an EC2 instance, and create an S3 bucket, no problem. After finishing your work, you present it to the stakeholders. They’re happy, but there are a few new requirements.

  • The project should move to another AWS account.
  • The EC2 instance needs additional CPU and RAM.
  • The project requires a staging and production environment.
  • The project might move to Azure in the future.

These are all examples of how manual work on cloud architecture becomes tedious, less fun, and, most of all, nearly impossible to scale and maintain in the future.

So how do we prevent this with Terraform?

A “simple” project visualized

This graphic represents the scenario mentioned above. We have a single developer with an IAM account who is working on a project and creating AWS services, such as a VPC, an S3 bucket, and an EC2 instance, by hand.

Once the project grows, needing several developers to work in parallel, or when things in the cloud have to be tweaked, the issues can quickly stack up and lead to dissatisfaction within the team and among stakeholders.

A “simple” project visualized – Terraform edition

For security purposes, we split our roles into non-managing developers and a single managing Terraform IAM user. In the Terraform configuration, we define an S3 bucket in which we keep versioned copies of the state file Terraform writes. By locking the state while it is being written, we get working version control and allow developers to work in parallel without overwriting each other’s changes.
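As a rough sketch of what that backend looks like, here is a minimal configuration. The bucket, key, and table names are placeholders; note that with the S3 backend the lock has traditionally been held in a DynamoDB table rather than on the bucket itself.

```hcl
# backend.tf – a minimal sketch; bucket, key, and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"                 # versioned S3 bucket holding the state
    key            = "crawler-project/terraform.tfstate"  # path of this project's state file
    region         = "eu-central-1"
    dynamodb_table = "terraform-locks"                    # lock table so only one write happens at a time
    encrypt        = true
  }
}
```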

The scripts run as their own Terraform IAM user, which uses the previously mentioned state file to know which resources are running and with which settings. This IAM user can create, read, update, and delete cloud infrastructure.
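The developers’ own credentials then never need write access to the infrastructure; the configuration simply points the AWS provider at the managing user’s credentials. A minimal sketch, assuming a hypothetical local credentials profile named `terraform`:

```hcl
# provider.tf – minimal sketch; "terraform" is a hypothetical named profile
# holding the credentials of the managing Terraform IAM user.
provider "aws" {
  region  = "eu-central-1"   # placeholder region
  profile = "terraform"
}
```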

How it works – An example

Imagine we built the architecture from the second graphic. If we need additional resources in the cloud or want to remove some, all we need to do is change a few lines in a script. Terraform will check what has changed against its state file in the S3 bucket and, after confirmation, apply the changes to the existing infrastructure.
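To make that concrete, here is a hypothetical sketch of how the crawler project could be described. All names, the AMI ID, and the variable defaults are placeholders of mine, not the article’s actual setup; the point is that the earlier requirements shrink to small edits. More CPU and RAM is a different `instance_type`, and staging versus production is just a different set of variable values.

```hcl
# main.tf – a hypothetical sketch of the crawler project, not the actual setup.

variable "environment" {
  description = "Deployment environment, e.g. staging or production"
  type        = string
  default     = "staging"
}

variable "instance_type" {
  description = "Size of the crawler machine; bump this for more CPU/RAM"
  type        = string
  default     = "t3.micro"
}

# Bucket that receives the crawler's results.
resource "aws_s3_bucket" "crawler_results" {
  bucket = "crawler-results-${var.environment}"   # placeholder naming scheme
}

# Small machine that runs the crawler.
resource "aws_instance" "crawler" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = var.instance_type

  tags = {
    Name        = "crawler-${var.environment}"
    Environment = var.environment
  }
}
```

Changing `instance_type` to, say, `t3.large` and running `terraform plan` followed by `terraform apply` is the whole upgrade: Terraform compares the edit against the state in the S3 bucket and only touches what actually differs.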

Conclusion

With this architecture, it’s possible to grow the development team, scale services, and move to several different cloud providers.

I hope you’ve learned something from this article, whether you’re a developer, work in operations or IT, or even manage projects. If this got your attention, go check out Terraform.

If you want to know what a data architect does, or which career opportunities exist in the field of data architecture, do not hesitate to contact us.
