Introduction to Automating Azure Using PowerShell
Welcome to Part Two of a three-part series on automating Azure security. In the first post, we covered how to identify and evaluate Azure security risks using PowerShell automation. With cloud-native capabilities, PowerShell can automate reporting so that actionable data reaches stakeholders on a regular basis, providing a constant, accurate understanding of security posture in Azure.
In this post, we will discuss automation approaches to mitigating risks identified in Part 1 of the series. While there are many ways to achieve a strong security posture in the cloud, the two methods that will be discussed in this post are redeploying resources using ARM/Bicep templates and Azure Policy. ARM/Bicep templates are Infrastructure as Code (IaC) style deployment options for controlling aspects of cloud services, including configuration options and security controls. Azure Policy is a native service to configure and ensure …you guessed it… policy compliance across cloud resources.
Setup Information
To follow along, you’ll need the following:
- An Azure Tenant with resources deployed
- PowerShell
- Az Module
NOTE: This blog post will walk through a fictitious cloud resource group with a resource that is currently misconfigured and identified as risky. Please fully test all commands and techniques prior to deploying to production infrastructure as the changes below may inhibit desired functionality depending on the use cases of deployed resources.
Option 1 - Infrastructure as Code
One of the great enabling features of modern cloud platforms is the ability to define infrastructure as code. Rather than logging into a GUI and button-clicking our way to a production environment, we can simply define our infrastructure using a standard, human-readable format such as JSON or YAML. These files can be managed in source control to ensure proper versioning, change control, and even release capabilities to deploy infrastructure in true CI/CD fashion. IaC approaches help reduce drift because there is a clear, documented standard configuration. Releasing that configuration via pipelines ensures that the standard is reapplied if drift occurs between releases. Another wonderful aspect of this technology is idempotence. From an automation standpoint, rather than several ‘if/else’ statements to first check the state of a system to see if changes are needed, we simply define what the desired state should look like. The underlying engine that processes the configuration file can then perform that logic and simply do nothing if the target is already in the correct configuration.
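To illustrate the idempotence point, here is a minimal sketch contrasting the imperative check-then-act pattern with the declarative approach (the resource group, storage account, and template file names below are placeholders for illustration):

```powershell
# Imperative: we own the state-checking logic ourselves
$account = Get-AzStorageAccount -ResourceGroupName 'rg' -Name 'mystorageacct'
if ($account.AllowBlobPublicAccess) {
    Set-AzStorageAccount -ResourceGroupName 'rg' -Name 'mystorageacct' -AllowBlobPublicAccess $false
}

# Declarative: the template states the desired end state, and the deployment
# engine performs the comparison, making no changes if already compliant
New-AzResourceGroupDeployment -ResourceGroupName 'rg' -TemplateFile .\desired_state.json
```

Multiply that if/else pattern across dozens of settings and resource types, and the appeal of simply declaring the end state becomes clear.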
Vendor agnostic solutions such as Terraform offer excellent capabilities to use infrastructure as code, while abstracting away the underlying cloud platform. Such capabilities make these tools well-suited for multi-cloud deployments.
In Azure, there are two popular Azure-native technologies supporting this methodology, Azure Resource Management (ARM) and Bicep. ARM templates are JSON-based and are certainly human-readable, but the format is slightly more complex. Bicep, the newer of the two, provides an even simpler format for defining infrastructure. Under the hood, Bicep files are transpiled to ARM templates prior to deployment, as Azure deployments still consume ARM format. Both options are fully supported and provide great infrastructure as code options.
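As a taste of the Bicep syntax, a minimal storage account definition looks something like the following sketch (the resource name, region, and API version are illustrative choices, not values from this walkthrough):

```bicep
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'examplestorage'
  location: 'eastus'
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    allowBlobPublicAccess: false
    supportsHttpsTrafficOnly: true
  }
}
```

The equivalent ARM JSON carries the same information but with additional scaffolding: schema references, a nested resources array, and quoted keys throughout.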
With respect to the development environment, Bicep offers a superior experience for users working in VS Code. The syntax is simple and extremely easy to comprehend, and Microsoft publishes a VS Code extension allowing for ease of development. The extension allows for decompiling ARM to Bicep for editing, as well as compiling Bicep to ARM for deployment. Popular CI/CD vendors also provide features for automating this functionality.
For an existing deployment with known vulnerabilities, a resource or resource group can be exported to its template format. This can be done from Azure Portal, or from PowerShell. Looking back at Part 1, let’s consider the ‘throwaway-resources’ resource group, which contains a deliberately vulnerable storage account with public access enabled.
First, let’s use PowerShell to connect to Azure and export a template representing the current configuration of the resource group and resources:
Connect-AzAccount
Set-AzContext -Subscription 'throwaway'
Export-AzResourceGroup -ResourceGroupName 'throwaway-resources' -Path C:\temp\throwaway_template.json
We’ll review any warnings, as well as the template itself, to ensure everything looks good. Notice that the resulting JSON file is readable and could easily be edited directly.
Using the Bicep VS Code extension, this JSON format can be easily decompiled into Bicep format via the command palette option, Bicep: Decompile into Bicep, producing a new code tab with a .bicep file.
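If you prefer working outside VS Code, the standalone Bicep CLI offers the same decompilation (this assumes the Bicep CLI is installed and uses the export path from the earlier command):

```powershell
bicep decompile C:\temp\throwaway_template.json
```

This writes a .bicep file alongside the original JSON.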
The resulting resource definition is cleaner and easy to read:
From here, we can simply edit the file, changing the following two settings for demonstration purposes:
allowBlobPublicAccess: false
supportsHttpsTrafficOnly: true
Now we can save the file; it is ready for deployment. Certainly, it is appropriate to review all settings and ensure proper configuration for production workloads. Before we can deploy the resource group, we must overcome one limitation: either install the local Bicep CLI so the deployment cmdlets can consume the .bicep file directly, or convert the file back to JSON using the VS Code extension.
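If you take the Bicep CLI route, converting back to ARM JSON is a single command (again assuming the file paths used in this walkthrough):

```powershell
bicep build C:\temp\throwaway_template.bicep
```

The compiled JSON lands next to the Bicep file, ready for the deployment step below.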
With the template ready, it can be deployed via PowerShell. The New-AzResourceGroupDeployment cmdlet provides excellent support for PowerShell’s well-known ‘-WhatIf’ parameter, which produces a report showing what will change as a result of the deployment.
New-AzResourceGroupDeployment -Name 'Fix_Blob_Vulns' -ResourceGroupName 'throwaway-resources' -TemplateFile C:\Temp\throwaway_template.json -WhatIf
Since these changes look like the intended results, we can now deploy the resource group by simply removing the -WhatIf parameter.
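For reference, the deployment command is identical to the preview above, minus the switch:

```powershell
New-AzResourceGroupDeployment -Name 'Fix_Blob_Vulns' -ResourceGroupName 'throwaway-resources' -TemplateFile C:\Temp\throwaway_template.json
```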
Looking in Azure Portal, the two settings have indeed been corrected.
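The same verification can be scripted rather than clicked through. A quick sketch, assuming the storage account lives in the resource group used above:

```powershell
# Both properties should now report the hardened values
Get-AzStorageAccount -ResourceGroupName 'throwaway-resources' |
    Select-Object StorageAccountName, AllowBlobPublicAccess, EnableHttpsTrafficOnly
```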
This approach can be used to build parameterized templates for future cloud deployments, using Infrastructure as Code to produce repeatable results. To improve on the above, managing templates in source control and using pipelines to automate releases will provide an excellent starting point for deploying cloud infrastructure with proper configurations.
Option 2 - Azure Policy
While manually updating templates and automating their deployment with PowerShell works, it may not scale for larger environments. Instead of touching each resource or resource group, we may want to create standard policies with which resources of given types must comply. Azure Policy gives us the ability to standardize configurations across any new deployment of a type of resource, as well as the ability to remediate existing resources that are already deployed.
Azure Policy is not limited to security functionality. It also supports operational aspects of cloud usage, such as resource tagging, deployment regions, and allowed resource types. From a security perspective, it allows for standard configurations across the myriad resource types and options in Azure.
With nearly 3,000 available policy definitions at the time of this writing, many common security best practices can be implemented in this manner, and custom policy definitions can be created for additional use cases. Multiple policy definitions can be grouped into what Azure calls an initiative (simply a set of policy definitions), and Microsoft provides several out-of-the-box initiatives as well. In this section, we will create a new initiative to control the public access and HTTPS settings across any blob storage within our subscription. This way, if an engineer makes a mistake and deploys a misconfigured storage account, the initiative will ensure there is no unnecessary exposure.
Creating an Azure Policy Initiative
The first step is to look for policy definitions that already suit the use case. Policy definitions generally have verbose, descriptive names, so we can search for terms of interest. This can be done within Azure Portal, or inside a PowerShell session using Get-AzPolicyDefinition. This cmdlet returns objects representing existing definitions, and each object’s Properties property includes the policy’s display name. By default, the returned objects do not show much meaningful information in the standard output format, but we can expand the Properties property for a better look, searching for any policies whose DisplayName includes the term ‘storage’ followed by the term ‘public’:
Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -like "*storage*public*"} | Select-Object -ExpandProperty Properties
With four possible options, we can find the policy of interest and store it in a PowerShell variable:
$publicBlobPolicy = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Configure your Storage account public access to be disallowed"}
We’ll do the same thing for the secure transfer policy:
$secureTransferPolicy = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Configure secure transfer of data on a storage account"}
To create a policy set definition, we need a JSON file containing an array of objects, each with a policyDefinitionId. We can create the proper structure using the ConvertTo-Json cmdlet. Note: we’re showing this method to avoid manual creation of the JSON file and to provide an easy way to add multiple policy definitions based on search criteria.
@($publicBlobPolicy,$secureTransferPolicy) | ForEach-Object {$_ | Select-Object -Property PolicyDefinitionId} | ConvertTo-Json | Out-File -FilePath 'C:\temp\blob_policy.json'
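The resulting file should look something like the following (the GUIDs below are placeholders; your file will contain the IDs of the actual built-in definitions found above):

```json
[
  {
    "PolicyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/00000000-0000-0000-0000-000000000000"
  },
  {
    "PolicyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/11111111-1111-1111-1111-111111111111"
  }
]
```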
With the policy set contained in a JSON file, we can create a new Initiative using New-AzPolicySetDefinition. Specify the location of the JSON file along with a name and display name for supportability.
New-AzPolicySetDefinition -Name 'blob-lockdown' -DisplayName 'Blob - Disallow Public Access and Force HTTPS' -PolicyDefinition C:\temp\blob_policy.json
The policy initiative is now visible within Azure Portal:
Lastly, we need to assign the policy set definition to a target scope. Initiatives can be assigned to resource groups, subscriptions, or management groups to scope these changes according to a specific use case; in this case, we’ll assign at the level of our ‘throwaway’ subscription.
First, store the newly created initiative in a PowerShell variable:
$policySet = Get-AzPolicySetDefinition -Name 'blob-lockdown'
With the $policySet variable populated, create a new policy assignment using New-AzPolicyAssignment, providing a name and display name, scope to apply the policy definitions, policy set, location, and managed identity. When setting the IdentityType parameter to “SystemAssigned”, a new Managed Identity will be automatically created with the same name as our policy assignment.
New-AzPolicyAssignment -Name 'Lock down storage accounts' -DisplayName 'Ensure no blob public access and HTTPS enforcement on storage accounts' -Scope "/subscriptions/$(Get-AzSubscription -SubscriptionName 'throwaway' | Select-Object -ExpandProperty Id)" -PolicySetDefinition $policySet -Location 'eastus' -IdentityType 'SystemAssigned'
At this point, the policy will be applied to new deployments. Regardless of what settings are chosen during deployment, the resource will end up in the desired configuration.
Remediating Existing Resources
For automated remediation tasks, we’ll need to update the Managed Identity’s role to have access to modify existing resources. The role(s) needed will depend on the policy in question. In this case, we know that the Managed Identity needs Contributor permissions on Storage Accounts to update settings. We can find relevant role definitions by searching based on name.
Get-AzRoleDefinition | Where-Object {$_.Name -like "*Storage*Contributor"} | Select-Object -Property Name,Description,Id
With this understanding, assign the role to the Managed Identity:
New-AzRoleAssignment -ObjectId (Get-AzADServicePrincipal -SearchString "Lock down storage accounts").Id -RoleDefinitionName "Storage Account Contributor" -Scope "/subscriptions/$(Get-AzSubscription -SubscriptionName 'throwaway' | Select-Object -ExpandProperty Id)"
Now, we can see that this existing storage account is not compliant with the new initiative (it was manually reverted to its vulnerable state after the Option 1 remediation above).
If we need to remediate existing non-compliant resources, we can use the Start-AzPolicyRemediation cmdlet to start a remediation task for a given policy assignment. However, since we have created an initiative with two policies, we’ll need to create remediations for each within the policy set.
First, get the ID of the previous policy assignment:
$policyAssignment = Get-AzPolicyAssignment -Name 'Lock down storage accounts' -WarningAction SilentlyContinue
Next, get the policy definitions for policies that are currently showing non-compliant. Using the Get-AzPolicyState cmdlet, we get output similar to what is shown in the Azure Portal screenshot above. In the output object, a property named ComplianceState shows whether the resource is non-compliant, and a property named PolicyDefinitionReferenceId includes the reference needed to specify the particular policy definition needing remediation.
$policyStates = Get-AzPolicyState | Where-Object {$_.PolicyAssignmentId -eq $policyAssignment.PolicyAssignmentId -and $_.ComplianceState -eq 'NonCompliant'}
Once we have the policies which are not compliant, we can iterate through each object and start a remediation job for the respective PolicyDefinitionReferenceId.
$policyStates | ForEach-Object {Start-AzPolicyRemediation -Name "$($_.PolicyDefinitionName)_Remediation" -PolicyAssignmentId $policyAssignment.PolicyAssignmentId -PolicyDefinitionReferenceId $_.PolicyDefinitionReferenceId }
Remediation jobs have started, and resources will be updated to reflect the correct settings. These jobs can be viewed in Azure Portal and monitored for any errors.
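The same monitoring can be done without leaving the console. A sketch using Get-AzPolicyRemediation from the Az.PolicyInsights module:

```powershell
# ProvisioningState progresses through states such as Running and Succeeded
Get-AzPolicyRemediation | Select-Object Name, ProvisioningState
```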
Additionally, looking at the affected resource(s), we can manually validate that settings have been corrected:
By default, Azure Policy reevaluates resources once per day. For those of you as impatient as I am, Start-AzPolicyComplianceScan can scratch that itch by evaluating resources against existing policies immediately. Note that compliance scans take some time, so now may be a good time for a coffee refill or to remediate other findings.
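Kicking off the scan is a one-liner; the -AsJob parameter returns control to the console immediately while the scan runs in the background:

```powershell
Start-AzPolicyComplianceScan -AsJob
```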
Once completed, simply run Get-AzPolicyState to show that findings have been remediated. Notice that the IsCompliant property now shows True for both policies.
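Reusing the assignment created earlier, that final compliance check might look like this:

```powershell
# Filter policy states down to our initiative's assignment
Get-AzPolicyState |
    Where-Object {$_.PolicyAssignmentName -eq 'Lock down storage accounts'} |
    Select-Object PolicyDefinitionName, IsCompliant
```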
Wrapping Up
In Part Two of this series, we have covered remediation of vulnerabilities and misconfigurations discovered in Part One. From an Infrastructure as Code perspective, ARM and Bicep provide excellent, readable formats that can be managed in source control and automatically deployed to provision cloud resources. Providing secure, working templates to teams looking to embrace the cloud is a great strategy for ensuring safe and enjoyable experimentation, without needing to go back and secure things after the fact. Furthermore, Azure Policy provides an excellent mechanism for globally (or more tactically) setting those configuration options that are critical to security posture. Both approaches have strong PowerShell support and can scale with the needs of modern organizations.
Intimidated by the PowerShell syntax?
If automation is important to you, and PowerShell seems out of reach, don’t worry! SEC586 – Security Automation with PowerShell covers PowerShell from the ground up, using Blue Team use cases like the above to ensure not just an academic understanding of the syntax, but practical hands-on capabilities. Students with no coding background have attended SEC586 and left with the ability to automate everything from host-based analytics to cloud integrations. Seasoned professionals with PowerShell experience attending SEC586 can also be confident they will leave with an even deeper understanding and new skills to automate their security programs.