How to Generate Terraform from Existing AWS Resources
Your AWS account is full of resources someone clicked into existence. Here is how to turn that into managed Terraform without months of manual work.
Key Takeaways
- ops0 Discovery scans 100+ AWS resource types and generates production-ready Terraform automatically
- A full scan of 3 regions takes 60-90 seconds, compared to weeks of manual terraform import work
- Dependency relationships between resources are preserved in the generated code
- Checkpoint-based scanning handles failures in large accounts without restarting
You can generate Terraform from existing AWS resources by scanning your account with a discovery tool that reads the AWS APIs, maps every resource and its dependencies, and outputs HCL files that represent your current state. ops0 does this automatically across 100+ AWS resource types including EC2, VPC, S3, RDS, Lambda, EKS, ECS, IAM, Route53, and CloudFront. A full scan of 3 regions takes about 60-90 seconds. For accounts with 1,000+ resources, expect 2-5 minutes.
The alternative is doing it by hand. Import each resource one at a time with terraform import, write the matching HCL, run terraform plan to check for diffs, fix the drift, repeat. For a typical AWS account with a few hundred resources, that's weeks of work. Maybe months.
Why This Problem Exists
Most AWS infrastructure wasn't created with Terraform. Someone spun up an EC2 instance from the console during a demo. A developer created an S3 bucket for a quick test. The networking team built out VPCs manually because "we'll codify it later." Later never came.
Now you've got a production environment running on resources that have no code backing them. No version control. No audit trail. No way to reproduce the setup if something goes wrong.
This is called brownfield infrastructure, and it's the norm, not the exception. Most companies have more unmanaged resources than managed ones.
The Manual Approach and Why It Breaks Down
Terraform has a built-in import command. You can run terraform import aws_instance.example i-1234567890abcdef0 and it pulls the resource into your state file. Then you write the matching HCL by hand.
The problems start immediately. You need to know the resource type and its ID. You need to write HCL that matches the current config exactly, or terraform plan will show drift on your first run. Dependencies between resources have to be figured out manually. If your security group references a VPC, you need both imported and wired up correctly.
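To make the wiring problem concrete, here is a minimal sketch of hand-written HCL for a security group that depends on a VPC. Both resources have to be imported, and the reference between them has to use Terraform syntax rather than a hard-coded ID. All names and values here are illustrative:

```hcl
# Hand-written config for two imported resources. The security group
# must reference the VPC resource, not a literal "vpc-..." string,
# or the dependency chain is lost.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id # reference, not a hard-coded ID

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Every attribute in both blocks has to match what is actually deployed, or the first terraform plan after import will show drift.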
For a single resource, this is tedious but doable. For a real AWS account with hundreds of resources and complex dependency chains, it's a project that takes an engineer off productive work for weeks.
Terraform 1.5 added import blocks, which help. You can declare imports in config files instead of running CLI commands, and terraform plan -generate-config-out can even draft HCL for imported resources. But the generated config is a flat starting point: attribute values come out as hard-coded literals, dependencies between resources aren't wired up as references, and you still have to review and restructure everything yourself.
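For reference, a Terraform 1.5+ import block looks like this. The resource ID is the example used above; the rest of the values are illustrative:

```hcl
# Declarative import: Terraform adopts the existing instance into
# state on the next apply, but the resource body is still on you.
import {
  to = aws_instance.example
  id = "i-1234567890abcdef0"
}

resource "aws_instance" "example" {
  # Must match the running instance exactly, or plan shows drift.
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"
}
```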
How Automated Discovery Works
Automated discovery takes a different approach. Instead of importing resources one at a time, it scans your entire AWS account through the APIs, reads every resource's full configuration, maps the dependency relationships between them, and generates the Terraform code.
ops0's Discovery process works like this:
1. Connect your AWS account (read-only IAM role)
2. Discovery scans all regions and resource types automatically
3. Each resource gets its full configuration captured: tags, metadata, security groups, IAM policies, networking, everything
4. Dependency relationships are mapped. Your EC2 instance references a security group which references a VPC. That chain is preserved.
5. Terraform HCL is generated with proper resource ordering and dependency references
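Assuming a simple chain of an EC2 instance behind a security group inside a VPC, generated code of this kind would look roughly like the following. This is an illustrative sketch, not literal ops0 output; all names and values are hypothetical:

```hcl
# The dependency chain is expressed as references, so Terraform
# orders creation correctly: VPC -> subnet/SG -> instance.
resource "aws_vpc" "prod" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "prod_a" {
  vpc_id            = aws_vpc.prod.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = aws_vpc.prod.id
}

resource "aws_instance" "app" {
  ami                    = "ami-0abcdef1234567890"
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.prod_a.id
  vpc_security_group_ids = [aws_security_group.app.id]
}
```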
The output is production-ready code. Not a rough draft that needs hours of cleanup. The dependency ordering is correct, the references between resources use proper Terraform syntax, and the code matches what's actually running.
What Gets Captured
A good discovery tool doesn't just list resource names. For each AWS resource, ops0 captures:
- the full configuration state
- all tags and metadata
- the region and availability zone
- pricing tier and instance type
- IAM policies and security group rules
- network configuration (VPC, subnet, route tables)
- encryption settings
- relationships to other resources
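As an example of why that depth matters, an RDS instance needs roughly this much detail in its generated resource block before terraform plan comes back clean. A hypothetical sketch, not actual tool output:

```hcl
# Miss any one of these (encryption, the KMS key reference, a tag,
# a security group attachment) and plan reports drift.
resource "aws_db_instance" "orders" {
  engine                 = "postgres"
  instance_class         = "db.t3.medium"
  allocated_storage      = 100
  storage_encrypted      = true
  kms_key_id             = aws_kms_key.rds.arn # assumes an imported KMS key
  vpc_security_group_ids = [aws_security_group.db.id]

  tags = {
    Team        = "payments"
    Environment = "production"
  }
}
```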
This matters because Terraform needs all of it. If you miss a security group rule or an IAM policy attachment, your first terraform plan will show drift and you're back to manual fixing.
Handling the Edge Cases
Real AWS accounts are messy. Discovery has to handle resources that depend on each other in circular ways, resources that were partially configured, resources created by other tools (CloudFormation, CDK, Pulumi) that might conflict, and resources that exist in the console but aren't visible through standard API calls.
ops0 handles this with checkpoint-based scanning. If a scan fails halfway through because of a transient API error or rate limiting, it picks up from the last checkpoint instead of starting over. This matters in large accounts where a full rescan takes minutes.
From Discovery to Managed Infrastructure
Generating the Terraform is step one. The real value comes from what happens next.
Once your resources are codified, you can version control them. You can run terraform plan before making changes. You can set up compliance checks that catch violations before deployment. You can detect drift when someone makes a manual change.
ops0 connects Discovery directly to its IaC engine. The generated Terraform feeds into deployment pipelines with compliance gates. The Resource Graph tracks drift between your code and reality. Continuous monitoring shows the deployed infrastructure status and anomalies.
The workflow goes from "we have no idea what's in our AWS account" to "everything is codified, version-controlled, and monitored" in minutes instead of months.
When to Use This Approach
Automated discovery makes sense when you have more than a handful of unmanaged resources, when you need to bring an existing environment under Terraform management quickly, when you're doing a cloud audit and need a complete inventory, or when you're migrating between IaC tools and need a clean starting point.
It doesn't replace understanding your infrastructure. You still need to review the generated code, decide what to manage and what to leave alone, and set up proper state management. But it eliminates the months of grunt work that keeps most teams from ever starting.
Ready to Experience ops0?
See how AI-powered infrastructure management can transform your DevOps workflow.
Get Started
