A CLI tool and library that uses LLMs to generate Infrastructure-as-Code templates, configuration files, and utilities from natural language prompts.
AIAC is an open-source tool that generates Infrastructure-as-Code (IaC) templates, configuration files, and other code artifacts using large language models. It allows developers and DevOps engineers to describe infrastructure needs in natural language (e.g., 'terraform for AWS EC2') and receive ready-to-use code, reducing manual scripting and accelerating deployment pipelines.
DevOps engineers, SREs, and developers who manage cloud infrastructure and seek to automate the creation of IaC templates, CI/CD configurations, and system utilities through AI-assisted code generation.
AIAC stands out by supporting multiple LLM backends (OpenAI, Amazon Bedrock, Ollama) from a single tool, offering both interactive and scriptable modes, and exposing a clean library API (libaiac) for integration into Go applications. This makes it suitable for both ad-hoc exploration and automated workflows.
Artificial Intelligence Infrastructure-as-Code Generator.
Supports OpenAI, Amazon Bedrock, and Ollama backends through a TOML configuration file, letting users trade off cost against privacy. The README shows how to define multiple named backends for different environments.
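As a rough sketch of what such a configuration might look like, the fragment below defines two named backends, one hosted and one local. The section and key names here are illustrative assumptions based on the README's description (API keys, URLs, backend names, manually configured default models); consult the AIAC README for the exact schema.

```toml
# Hypothetical aiac backend configuration: key names are assumptions,
# not the verified schema — check the project README before use.
[backends.openai_prod]
type = "openai"
api_key = "sk-..."          # provider API key (required for hosted backends)
default_model = "gpt-4o"    # since v5, default models must be set manually

[backends.local_ollama]
type = "ollama"
url = "http://localhost:11434/api"  # local endpoint, no API key needed
```

Keeping a hosted backend for quality and a local Ollama backend for privacy-sensitive prompts is one way to exploit the multi-backend design.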
Generates IaC (Terraform, Pulumi), configuration files (Docker, Kubernetes), CI/CD pipelines, and utilities from simple prompts. Use cases in the README include generating Terraform for EKS or Jenkins pipelines.
Offers an interactive shell for conversing with models and a quiet mode for automated output to files or clipboard. The quiet mode with --clipboard flag enables non-interactive workflows.
Can be used as a Go library (libaiac) for programmatic code generation within applications, extending utility beyond CLI usage. The README provides a code example for starting chats and sending prompts.
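To illustrate the shape of programmatic use, the self-contained Go sketch below models the chat-then-extract pattern the README describes: open a chat with a backend, send a natural-language prompt, and receive generated code. All type and method names here are hypothetical stand-ins, not libaiac's actual API; a mock backend is used so the example runs without a provider or API key.

```go
package main

import "fmt"

// Response models a backend reply carrying the full model output and the
// extracted code block. These names are illustrative, not libaiac's real
// types — consult the libaiac documentation for the actual API.
type Response struct {
	FullOutput string
	Code       string
}

// Chat is a minimal stand-in for a backend conversation: Send takes a
// natural-language prompt and returns generated code.
type Chat interface {
	Send(prompt string) (Response, error)
}

// mockChat returns canned output so the sketch is runnable offline.
type mockChat struct{}

func (mockChat) Send(prompt string) (Response, error) {
	code := `resource "aws_s3_bucket" "example" {}`
	return Response{FullOutput: "```hcl\n" + code + "\n```", Code: code}, nil
}

// generate shows the overall flow: open a chat, send a prompt, and use
// the code extracted from the reply.
func generate(c Chat, prompt string) (string, error) {
	resp, err := c.Send(prompt)
	if err != nil {
		return "", err
	}
	return resp.Code, nil
}

func main() {
	code, err := generate(mockChat{}, "terraform for an S3 bucket")
	if err != nil {
		panic(err)
	}
	fmt.Println(code)
}
```

Embedding generation behind a small interface like this also makes it easy to swap backends or inject a mock in tests, mirroring the multi-backend design of the CLI.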
Relies on external LLM providers with API keys and internet access, incurring costs and potential downtime. The troubleshooting section notes errors like quota limits and rate throttling from providers.
Since v5, only chat models are supported, and model existence is not validated, so a misconfigured model fails only at request time. The upgrade notes acknowledge that --list-models may show unusable models and that default models must be configured manually.
Setting up multiple backends via TOML files can be complex, especially for users managing different providers or environments. The configuration requires precise settings like API keys, URLs, and backend names.
Generated code quality varies with the underlying model and may not follow best practices, so manual review is necessary. The README acknowledges that responses can be truncated without a clear error, risking incomplete output.