Secure your TFState using Entra ID
Use RBAC for your Azure Terraform Backend
Are you using Azure RBAC to access your Azure Terraform state?
Are you sure 😉?
Have you tried turning off access keys & does everything still work?
Hopefully it’s obvious why storage account access keys should be avoided, but just in case: they are long-lived tokens that grant full access to your storage account, without requiring you to authenticate to your Azure tenancy. SAS tokens are constrained, so a little better, but Entra ID is the way.
It’s unfortunately common for the Terraform principal to be given overly broad permissions on the storage account that holds the state. If it has Storage Account Contributor (or equivalent), it may be silently switching to access keys without you knowing.
Use “AzureAD” Auth
I mean, Entra ID... *ahem*.
To have Terraform authenticate using RBAC with the AzureRM backend, you need either:
- The flag `use_azuread_auth = true` set in your backend:
```hcl
terraform {
  backend "azurerm" {
    use_azuread_auth = true
  }
}
```
- ...or the equivalent environment variable (`ARM_USE_AZUREAD`) set.
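If you prefer the environment-variable route, a minimal sketch (set in whatever shell or pipeline step runs Terraform):

```shell
# The azurerm backend reads this just as if use_azuread_auth = true
# were set in the backend block
export ARM_USE_AZUREAD=true
```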
- You additionally need to grant the Terraform deployment principal permission on the container holding the Terraform state (more on this later).
If you do not have the backend flag set, then the Azure Storage account may be playing a trick on you.
Assuming default settings, if you use an account with Contributor/Storage Account Contributor rights and browse into a storage container, the portal helpfully exchanges your Entra token for an access key:

The default authentication method uses an “access key” to view data within the container. This is why you can see the contents of containers in the portal even without granting a data plane permission (e.g. one of the “Storage Blob Data” roles) to your account.
Thing is, Terraform will do the same.
Illustrating the behaviour
Below is a little code snippet that will write to the Terraform state file:
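A hedged sketch of such a configuration — the backend names here are hypothetical placeholders for the demo resources:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"      # hypothetical demo values
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "demo.tfstate"
    # use_azuread_auth deliberately not set -- we want to observe the fallback
  }
}

# Any resource will do; random_pet (hashicorp/random provider) forces
# a write to the state file without touching Azure resources
resource "random_pet" "demo" {}
```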
To mimic the behaviour we see above, I’ve granted my deployment principal a Contributor-scoped role (i.e. any role that includes permission to list access keys), and turned off `use_azuread_auth` in the backend:

Terraform initialises fine even with Entra ID auth off. Yes, I know backend settings usually live in environment variables; I have done it like this for demo purposes.
With `use_azuread_auth` not set, or set to false, Terraform is helpfully swapping its deployment identity for an access key, just like the portal does for a user.
Let’s remove those Contributor permissions; there is a better way.
Making things better
Microsoft recommends removing the ability to use shared access keys if not required, quoting the Well-Architected Framework:
> Avoid and prevent using Shared Key authorization to access storage accounts. It’s recommended to use Microsoft Entra ID to authorize requests to Azure Storage and to prevent Shared Key Authorization.
You can do this via the portal by following the link to “Storage account key access” and setting it to Disabled.
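The same setting can also be managed in Terraform itself, via the azurerm provider’s `shared_access_key_enabled` argument on the storage account (names below are hypothetical):

```hcl
resource "azurerm_storage_account" "tfstate" {
  name                     = "sttfstatedemo"   # hypothetical demo names
  resource_group_name      = "rg-tfstate"
  location                 = "uksouth"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # Equivalent to setting "Storage account key access" to Disabled in the portal
  shared_access_key_enabled = false
}
```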


Now, if we try again:

> Key based authentication is not permitted on this storage account.
So, what do we need to do to fix this?
- The Terraform deployment principal needs `Storage Blob Data Contributor` on the storage container where the state file is located (not the storage account, the container within it).
- We need to make sure `use_azuread_auth` is set:
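For example (the same hypothetical demo names as before, with the flag now switched on):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"      # hypothetical demo values
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "demo.tfstate"
    use_azuread_auth     = true              # authenticate to the blob API with Entra ID
  }
}
```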
Now, let’s re-run `terraform init --reconfigure`:

`--reconfigure` isn’t usually needed, but is for the demo because the backend settings have changed.
As an aside, the HashiCorp docs say that “Storage Blob Data Owner” is required, but I’m yet to find a scenario where that is needed (comments welcome).
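The container-scoped role assignment can itself be expressed in Terraform. A sketch, assuming the storage account resource from earlier and a hypothetical variable holding the deployment principal’s object ID:

```hcl
resource "azurerm_role_assignment" "tfstate_container" {
  # Scope to the container, not the whole storage account
  scope                = "${azurerm_storage_account.tfstate.id}/blobServices/default/containers/tfstate"
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = var.deployment_principal_object_id  # hypothetical variable
}
```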
For closure, we run `terraform apply` to take it all the way:

Closing confusions
There is another setting that controls RBAC access to storage in the AzureRM provider: `storage_use_azuread`.
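It lives in the provider block rather than the backend:

```hcl
provider "azurerm" {
  features {}

  # Use Entra ID (not access keys) for data-plane calls to storage resources
  storage_use_azuread = true
}
```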
This setting isn’t needed for the backend. It is for when you have storage-based azurerm resources in Terraform (such as blobs, queues, and tables) and you want Terraform to interact with those using Entra ID authentication.
Be aware that this option still has some limitations, such as not being able to interact with the Files API (see the azurerm provider docs on the Terraform Registry), and even more if you are using earlier versions of AzureRM v3.
Behaviour in the portal
With default settings, the same behaviour is observed in the portal when browsing to a container:

Assuming you have granted yourself a data plane permission (like Storage Blob Data Reader or Storage Blob Data Contributor), you can “switch” as illustrated. Note that even the Owner role does not include data plane permissions (a topic for another day), so you must grant a role via Access Control (IAM), the same as you have for the Terraform principal.
You can save yourself some extra clicks by changing the default authentication via storage account settings:

Takeaways
- Turn off the option to use shared keys on your Terraform storage account backends. In fact, turn it off for all storage accounts, and only enable it if a service requires it!

- Make sure you have `use_azuread_auth = true` set in your Terraform backend, or the equivalent environment variable.
- The least-privilege permission required by the Terraform deployment principal is Storage Blob Data Contributor scoped to the storage container (or maybe Storage Blob Data Owner, if you believe the docs).
- The AzureRM provider setting `storage_use_azuread` isn’t needed to interact with the backend, but is needed if you want to use Entra ID when interacting with other storage resources in the data plane.
- Always prefer Entra ID authentication over access keys for any service where the option is available.
Try it yourself
If you want to try it out, here’s a gist that’ll help make the resources and set up the testing:
A test case to explore access keys with Terraform
Refs:
- HashiCorp: Backend Type: azurerm | Terraform | HashiCorp Developer
- Microsoft WAF guidance regarding storage accounts: Storage Accounts and security - Microsoft Azure Well-Architected Framework | Microsoft Learn