AVM Contributor Agent Pt 1 — What and Why
A mix of AI and deterministic automation to help AVM contributors
I’m building an agent in the open to help automate open-source contributions for Azure Verified Modules (AVM). It aims to handle the full workflow: writing code, running tests, documenting breaking changes, and opening a draft pull request with verifiable CI results — while keeping the contributor in control.
This is the first post in a series. I’ll cover what it does and why I built it, then in future posts dig into the individual pieces in more detail.
This is a work in progress. The agent-driven design is implemented and working for CLI-based contributions, but there’s still plenty to discover and improve.
Why this exists
Azure Verified Modules is Microsoft’s programme for publishing high-quality, opinionated Terraform (and Bicep) modules for Azure resources. There are now hundreds of modules, each with its own upstream repository, its own set of open issues, and at different stages of the AVM journey.
As an external contributor, I can’t run tests against Microsoft’s internal CI, and while the official guidance suggests running tests locally, it’s hard to provide evidence that you actually did.
There’s also a significant effort required to align the hundreds of AVM modules with the v1.0 roadmap. Some of this work involves breaking changes, like the recent recommendation to align variable names with the REST API specification. Changes this substantial typically require an accompanying UPGRADE.md file, adding to the contributor’s workload.
The idea
What if an agent could handle the development tasks, review its own work against AVM guidance, and then run real CI tests deterministically to prove the fix?
The key is using AI frugally: only for the parts that require judgement (reading an issue, making a change, reviewing it), and then relying on deterministic CI to prove whether the change actually works.
In other words: AI proposes, a second agent reviews, and CI verifies.
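That pattern is small enough to sketch in a few lines. This is illustrative only: the function and class names below are hypothetical, not the pipeline’s actual API, but the shape (a bounded propose/review loop, then a deterministic verdict) is the idea.

```python
# Illustrative sketch of the propose/review/verify loop.
# Names (propose_fix, review, run_ci) are hypothetical, not the real API.
from dataclasses import dataclass

MAX_ATTEMPTS = 3  # matches the pipeline's three-attempt cap


@dataclass
class Review:
    approved: bool
    feedback: str = ""


def run_pipeline(issue, propose_fix, review, run_ci):
    """AI proposes, a second agent reviews, and deterministic CI verifies."""
    feedback = ""
    for _attempt in range(MAX_ATTEMPTS):
        change = propose_fix(issue, feedback)   # Developer agent (AI)
        verdict = review(change)                # Reviewer agent (AI)
        if not verdict.approved:
            feedback = verdict.feedback         # retry with specific feedback
            continue
        return run_ci(change)                   # deterministic CI verdict
    return "escalate-for-human-review"          # stop condition: humans take over
```

The important property is that the AI never declares its own success: the loop only ever exits through CI or through human escalation.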
It took a few iterations to land here. I explored Microsoft Conductor, Brady Squads, and GitHub Actions agentic workflows. I plan to reuse some of those ideas in later parts of the series, but this is the pattern that’s working for end-to-end contributions today.
How it works
The pipeline is built on the Microsoft Agent Framework in Python, and can run either locally or from Azure AI Foundry.
There are two agents and a CI dispatch loop:
Developer agent reads the GitHub issue, uses the module’s own AVM guidance as a “skill” to understand context, and implements the fix.
Reviewer agent inspects the proposed changes before they’re pushed. It applies both static AVM review criteria and module-specific context, then either approves the change or rejects it with specific feedback. The Developer agent gets up to three attempts to address the feedback before the pipeline escalates for human review.
CI dispatch — when the Reviewer approves, the pipeline triggers a test run in a companion repository (kewalaka/avm-contributions) via a GitHub `repository_dispatch` event. That repo runs the AVM checks, end-to-end tests, and (when warranted) upgrade-path testing to catch breaking changes. The pipeline polls for a structured `summary.json` artifact to get the results.
When CI is green, a draft pull request is opened on the upstream module repository with the CI evidence embedded in the PR body.
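The dispatch handshake is worth sketching. The `repository_dispatch` endpoint and its `event_type`/`client_payload` body are GitHub’s real API; the payload keys and the `summary.json` shape below are my illustrative assumptions, not the pipeline’s actual schema (Part 2 will cover the real thing).

```python
# Sketch of the CI dispatch handshake.
# POST /repos/{owner}/{repo}/dispatches with event_type + client_payload is
# GitHub's real API; the payload keys and summary.json shape are assumed.
import json

DISPATCH_URL = "https://api.github.com/repos/kewalaka/avm-contributions/dispatches"


def build_dispatch_payload(run_id: str, module: str, branch: str) -> str:
    """JSON body to POST to DISPATCH_URL (authenticated with the dispatch PAT)."""
    return json.dumps({
        "event_type": "avm-test-run",    # assumed event name
        "client_payload": {              # assumed payload shape
            "run_id": run_id,
            "module": module,
            "branch": branch,
        },
    })


def ci_passed(summary_json: str) -> bool:
    """Interpret the structured summary.json artifact the pipeline polls for.

    Assumed shape: {"checks": "pass", "e2e": "pass", ...}
    """
    summary = json.loads(summary_json)
    return all(status == "pass" for status in summary.values())
```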
Three ways to start
The pipeline has three entry modes to fit different situations:
| Mode | How to trigger | What happens |
|---|---|---|
| `issue-driven` | `--issue N` | Forks the upstream repo, syncs the fork, clones, branches, and implements from scratch |
| `existing-pr` | `--pr N` | Fetches the PR’s head branch, reads feedback, creates a new agent-compliant branch, and continues from the current state |
| `existing-repo` | `--existing-repo PATH` | Clones a local checkout to an isolated workspace and continues from there |
The most interesting mode for the automation case is `issue-driven`: you hand it an upstream repo and an issue number, and it does the rest.
What is working
The core pipeline runs end-to-end, with the current focus on the CLI.
Given an upstream repo and issue number, it will:
- Fork the repo under your GitHub account (or skip if the fork already exists)
- Sync the fork’s default branch with upstream
- Clone into an isolated workspace under `~/.tfdev/ws/<run_id>/`
- Run `./avm pre-commit` to align the module and generate the Developer’s skill file
- Create a guardrailed branch (`agent/issue-<N>-<slug>-<run_id[:6]>`)
- Implement the fix using the Developer agent
- Gate the push through the Reviewer agent
- Dispatch CI and poll for results
- Open a draft PR with evidence when CI passes
The `existing-pr` and `existing-repo` modes are also working, which is useful for continuing from a half-finished fix rather than starting from scratch.
The intention is to switch to a hosted model on Foundry. The foundation is in place, but it needs more work, which will be the subject of future posts.
Security and guardrails
Since the pipeline operates on real repositories and can push code and open pull requests, the guardrails matter.
Push is blocked unless all of the following hold:
- Branch name matches `^agent/(issue-\d+|manual)-[a-z0-9-]+$` — no chance of accidentally pushing to `main`
- Remote `origin` owner matches the configured fork owner — the pipeline can never push to `Azure/*`
- No force-push
- Every agent commit carries an `Agent-Run-Id:` trailer for traceability
- Workspace must be under `~/.tfdev/ws/`
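Taken together, the guard is just a conjunction of predicates: if any one fails, the push is blocked. A minimal sketch, using the rules above (the function and argument names are illustrative, not the pipeline’s actual code):

```python
# Sketch of the push guard: every condition must hold or the push is blocked.
# The branch regex, fork-owner rule, and workspace root are the guardrails
# listed above; function and argument names are illustrative.
import re
from pathlib import Path

BRANCH_RE = re.compile(r"^agent/(issue-\d+|manual)-[a-z0-9-]+$")
WORKSPACE_ROOT = Path.home() / ".tfdev" / "ws"


def push_allowed(branch: str, origin_owner: str, fork_owner: str,
                 workspace: Path, commit_trailers: list[str],
                 force: bool = False) -> bool:
    if force:
        return False                          # no force-push, ever
    if not BRANCH_RE.match(branch):
        return False                          # never main, never arbitrary names
    if origin_owner != fork_owner:
        return False                          # never push to Azure/*
    if WORKSPACE_ROOT not in workspace.parents:
        return False                          # stay under ~/.tfdev/ws/
        # (a real implementation would also resolve symlinks and "..")
    # every agent commit must carry the traceability trailer
    return all(t.startswith("Agent-Run-Id:") for t in commit_trailers)
```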
A separate fine-grained PAT (`AGENT_DISPATCH_TOKEN`) is scoped only to the `avm-contributions` CI repository and is used exclusively for dispatching test runs.
What’s next — in this series
There is plenty more to cover. Here is what I’m thinking for subsequent posts, subject to change(!):
- Part 2 — CI Dispatch: How `kewalaka/avm-contributions` runs the actual tests. The `repository_dispatch` flow, the three test workflows (checks, end-to-end, upgrade), and how structured `summary.json` artifacts carry results back to the pipeline.
- Part 3 — The Agents: How it loads the module’s AVM skill, what the Developer and Reviewer agents each do, and how the maker/checker feedback loop works (including stop conditions).
- Part 4 — Hosted Updates: Taking the agent pipeline to the cloud with a longer-running service model on Azure Container Apps for efficient, scalable infrastructure.
- Part 5 — Futures: What’s next? We’ll look at ideas like a GitHub Actions agent for automated issue triage, support for Bicep, and a web UI for managing multiple contributions in-flight.
The code
The project is open-source on GitHub: kewalaka/avm-contributor-agent.
It’s genuinely a work in progress — I’ve been through several concept designs to land here, but there are rough edges. If you’re interested in AVM, AI agents, or both, take a look and feel free to open an issue.