Complete guide to picking the right tool for Terraform security code analysis
Are you puzzled by the wide range of static analysis tools for Terraform? We looked at the tooling available to identify security vulnerabilities and misconfigurations for AWS and GCP. The motivation was to unify the different preferences of engineers at Revolgy in order to provide more consistent and more secure services to our customers.
Before we set out to choose the Terraform security scanning tool that would best suit our needs, we checked other posts on this topic. We found most of them either subjective, incomplete, not answering our questions, or not meeting our current needs. That's why we tested and compared the tools ourselves! We ran our own PoC and evaluated the tools on various metrics, including false positive and false negative rates, integration options, and the quality of the recommendations themselves.
The main motivation for this task was to find the best tool to include in our infrastructure pipeline to help us identify security issues in Terraform code defining AWS and GCP infrastructure. We therefore didn't include formatting and linting tools like tflint in this comparison. We also avoided testing frameworks such as conftest, kitchen-terraform, terrafirma, terraform-compliance, or terratest, and we didn't focus on additional testing of Kubernetes, Ansible, or other IaC platforms. Although these are interesting tools (and we know that with some work we could use them for the intended security scanning), there are other tools oriented towards this particular task that are more plug-and-play.
In the end we decided to compare a specific set of tools: Checkov, Terrascan, tfsec, and Snyk.
Aspects that we wanted to consider as the evaluation metrics were (ordered by priority):
1. Ability to scan Terraform code defining AWS and GCP resources for security issues.
2. Quality of security issue findings (true positives vs. false positives), and their links to AWS/GCP and Terraform documentation.
3. License and pricing.
4. Ability to be run by engineers on demand.
5. Ability to run in the GitLab pipeline (direct integration and/or JUnitXML output is a plus).
6. Ability to filter specific rules / ignore specific findings (mitigating false positives or accepting risk).
7. Ability to add and develop your own security rules.
8. Machine-processable output such as JSON, XML, or CSV, needed for future integrations (ELK stack, DefectDojo, and similar).
Feature set comparison
To start with the easier part, we decided to gather the features of the compared tools based on our requirements, and also add other features we thought could be interesting or useful. This was done by installing the tools, experimenting with them, and checking their websites, code repositories, documentation, command-line help, outputs, etc. The result of these efforts can be seen in the following table.
| Feature | Checkov | Snyk | Terrascan | tfsec |
|---|---|---|---|---|
| Open-source | Y | (agent part only) | Y | Y |
| Scanning support: | | | | |
| - TF AWS | Y | Y | Y | Y |
| - TF GCP | Y | Y | Y | Y |
| - TF Azure | Y | Y | Y | Y |
| Integrations to CI/CD: | | | | |
| - GitLab | Y (link) | Y (link) | Y (link) | N (but has JUnitXML) |
| - GitHub | Y (link) | Y (link) | Y (link) | Y (link) |
| - BitBucket | Y (link) | Y (link) | N | N (but has JUnitXML) |
| Whitelisting rules | Y (via CLI option) | N | Y (link) | N |
| Blacklisting rules | Y (via CLI option) | N | Y (link) | Y (via CLI option) |
| Blacklisting specific issue | Y (via comment) | Y (via scan report) | Y (link) | Y (via comment) |
| Adding own custom rules | Y (link) | N | N | Y (link) |
| Output formats: | | | | |
| - CLI output | Y | Y | Y | Y |
| - XML | (as JUnitXML only) | N | Y | (as JUnitXML only) |
| - HTML | N | Y (link) | N | N |
| Documentation | Y (link) | Y (link) | Y (link) | Y (link) |
Spotlight on Checkov
Checkov provides very easy-to-run scanning over a repository directory, with the possibility of adding your own checks. The checks are written in Python, so some coding skills are needed compared with tfsec. If you are not comfortable writing code, you can use the policy builder in the UI, which is very intuitive and offers mapping to several benchmarks and standards such as HIPAA, CIS, and NIST. The dashboard shows errors by policy or failures by benchmark.
Checkov supports running only / skipping specific checks:
checkov -d . --check CKV_AWS_20,CKV_AWS_52
checkov -d . --skip-check CKV_AWS_52
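Individual findings can also be suppressed inline in the Terraform code itself via a `checkov:skip` comment inside the affected resource block. A minimal sketch (the resource, rule ID, and justification below are illustrative):

```hcl
resource "aws_s3_bucket" "logs" {
  # checkov:skip=CKV_AWS_52: suppressing this check here is a deliberate, documented risk acceptance
  bucket = "example-logs-bucket"
}
```

The justification text after the rule ID shows up in the scan output, which helps reviewers see why a finding was accepted.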
Checkov also offers a paid version. The web part of Checkov provides very good remediation descriptions with CLI steps. Checkov uses Bridgecrew's API to enrich the results with links to remediation guides; this API call can be skipped with a dedicated CLI flag.
Findings are referenced by a range of lines in which you need to look for the specific attributes, in contrast with Snyk. Checkov offers both automated remediation and manual fixes. An interesting feature is the error history, which can show you when engineering errors were introduced during the resource's lifetime. For fans of automated playbooks there is also automated remediation with CloudFormation stacks, but we didn't test this feature in our PoC. Auto-fix is part of the paid version.
Spotlight on Terrascan
Terrascan supports around 500 policies, similar to the other tested tools. Interestingly, it adds some great features such as scanning of Helm v3 and Kustomize v3. A very promising feature is the ability to run as an API server. If you maintain a DevSecOps microservice pipeline, this may be the right tool for you.
Terrascan is also available as a GitHub Action. One thing worth highlighting is the known_hosts file for Terrascan running in a Docker container: you can use a known_hosts file to set up connectivity to GitLab or GitHub over SSH. Terrascan then clones your repository code into the container and scans it. The API-server and container ideas can be combined by running Terrascan on AWS EKS (alternatively GKE) or ECS.
Terrascan performs very poorly in the definitions of the issues and the remediation descriptions. It also underperforms on GCP Terraform code, and it does not provide links to further help or examples of good coding practice. On the plus side, Terrascan has a Notifier providing webhooks for the results.
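As a sketch of the two modes mentioned above, assuming Terrascan is installed locally (flags per the Terrascan CLI help at the time of writing; verify against the version you run):

```
# One-off scan of Terraform code defining GCP resources, JSON output
terrascan scan -i terraform -t gcp -o json

# Run Terrascan as a long-lived API server instead of a one-off CLI scan
terrascan server
```

In server mode, CI jobs or other microservices can submit IaC files to the server over HTTP instead of installing the scanner themselves.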
Spotlight on tfsec
tfsec has quite good remediation and recommendation details, with links to AWS, GCP, or Terraform documentation. It lacks the impact severity and security background found in the other tools in our selection. The secure-example feature is a very good approach to showing what secure code looks like.
Worth mentioning is the PR commenter, which adds a comment to any area of the code that fails the tfsec scan. Another important feature is the ability to create custom checks. These checks are defined in simple JSON or YAML, so you do not need to write additional Go code.
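A minimal sketch of such a YAML-defined custom check (the file name, check code, and rule below are illustrative; the field names follow the tfsec custom-check schema, but verify them against the tfsec version you run):

```yaml
# .tfsec/custom_checks.yaml
checks:
  - code: CUS001
    description: Ensure S3 buckets define a versioning block
    requiredTypes:
      - resource
    requiredLabels:
      - aws_s3_bucket
    severity: ERROR
    matchSpec:
      name: versioning
      action: isPresent
    errorMessage: The bucket does not have versioning defined
```

Because the whole rule is declarative, infrastructure engineers can add organization-specific policies without touching the scanner's codebase.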
If you run a dockerized pipeline, you can run tfsec in Docker, and not only tfsec. With AWS Lambda container image support, it becomes very interesting for native DevSecOps serverless scanning.
Spotlight on Snyk
Snyk is a fantastic tool when you need more than Terraform scanning. If you need software composition analysis, Kubernetes configuration scans, etc., then it is a very good choice. When you focus purely on Terraform files, the performance is very good, but the issue descriptions are very vague and missing links to AWS/GCP/Terraform documentation. We also looked at the remediated findings in the final report. The report references the exact line, resource, and attribute in Terraform, which we consider perfect for speeding up code review.
Importing a repository is very easy: you just define your GitLab API key scopes to give read-only access to the selected repository. A problem arises when you add new Terraform files, which requires re-importing the repository/project. The monitor feature of the Snyk agent is good for SCA but does not work well on Terraform code.
There is also the option of using Snyk Broker in case you run a private repository. The broker proxy is critical if you need to combine the public API with private code management. The Broker client is published as a set of Docker images and also provides secure connectivity to on-premise Jira.
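A sketch of running the Broker client for GitLab (image name and environment variables per the Snyk Broker documentation; the hostname and token values are placeholders to be replaced with your own):

```
docker run \
  -e BROKER_TOKEN=<your-broker-token> \
  -e GITLAB_TOKEN=<your-gitlab-token> \
  -e GITLAB=gitlab.example.com \
  -e PORT=8000 \
  -p 8000:8000 \
  snyk/broker:gitlab
```

The broker then relays only an allow-listed set of requests between Snyk's platform and your private GitLab instance, so the source code itself never has to be exposed publicly.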
If you do not want to use public reporting and want to maximise the output enrichment, you can use snyk-to-html. This utility takes JSON output and creates a nice-looking HTML report.
Example of using the HTML report generator for Snyk in a code directory:
snyk iac test --json | snyk-to-html -o results.html
The hardest part - Qualitative security comparison
The harder part of this task was to fairly compare the selected tools based on the quality of the security findings. Initially, we tried different internal repos as the testing dataset (Terraform files), but since we wanted the results to be shareable with the community and replicable, we decided to go with the publicly available repository terragoat.
Terragoat is a code repository containing intentionally vulnerable Terraform code with resources for AWS, GCP, and Azure. Although it may seem biased to use the testing repository of one of the compared tools as the dataset, we still went with it, because it just means that Checkov's findings on terragoat form the baseline, and the other tools can perform better, the same, or worse. Terragoat contains the most commonly used IaC resources such as EC2, S3, IAM, RDS, and EKS, or their GCP/Azure equivalents, so we expected the findings of the different tools to come mostly from the same categories.
We created a very simple shell script that installed the tools and ran them against the AWS and GCP code. We saved the results as JSON, because that was the only output format they all had in common (refer to the feature table). Then we reshaped the findings using jq so that every finding contained:
- The name of the tool that found it.
- Position of the finding (filename, resource, and the code line or line range in which the issue was found).
- The security finding (defined by ID and description).
After that, we concatenated all the results and sorted them by filename, resource, tool, and finding ID. Then the tedious manual work started: matching the findings from one tool to the others based on location and issue description. Finally, we picked a very simple metric - the number of unique findings per tool:
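We did the reshaping with jq in our shell script; purely as an illustration, the same step might look like this in Python. The input field names (`file`, `resource`, `lines`, `id`, `description`) are hypothetical - each tool's real JSON schema differs and has to be mapped individually:

```python
import json

def normalize(tool_name, raw_findings):
    """Reshape one tool's raw JSON findings into the common record format:
    tool name, position (filename/resource/lines) and finding (ID/description)."""
    records = []
    for f in raw_findings:
        records.append({
            "tool": tool_name,
            "file": f["file"],
            "resource": f["resource"],
            "lines": f["lines"],  # a single line or a [start, end] range
            "id": f["id"],
            "description": f["description"],
        })
    # Sort so that findings from different tools line up for manual matching
    records.sort(key=lambda r: (r["file"], r["resource"], r["tool"], r["id"]))
    return records

# Example with a single made-up finding
raw = [{"file": "aws/ec2.tf", "resource": "aws_security_group.sg",
        "lines": [10, 20], "id": "CKV_AWS_24",
        "description": "Ensure no security groups allow ingress to port 22"}]
print(json.dumps(normalize("checkov", raw), indent=2))
```

Once every tool's output is in this shape, concatenating and sorting the records makes the cross-tool comparison mostly mechanical.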
It's interesting to see in these results that Snyk performed exceptionally well on the AWS Terraform code: it found more issues than Checkov. Among the issues that Snyk found and Checkov did not were things like a fully open egress, missing descriptions on AWS security groups, or a load balancer facing the internet. These can be considered warnings or good-practice reminders rather than real security issues, but to be fair, all the tools had such findings. None of the tools is able to list insecure protocols or their non-encrypted versions.
Another interesting thing to notice is that all four tools performed relatively similarly on the AWS files: there were many issues found by three (7) or even all four (6) tools. In the GCP files, by contrast, there were only 4 issues on which three tools agreed, and no issue was discovered by all four tools.
One more, rather peculiar, observation: the Terrascan rule descriptions for GCP are pretty much identical to the ones used by Checkov. Coincidence?
Everybody can find benefits in different tools based on their specific needs, and mainly on integrations. Some companies can benefit from the paid versions due to their integration and reporting options. And when you look for your SAST tool for IaC, ask yourself whether you also want to test Kubernetes configs, open-source libraries, and Docker images. More about this topic, maybe, in one of our next posts. What is your choice?
Marko Fábry, Cloud Architect
Marek Šottl, Cloud Security Engineer