Introduction
Often when working with a new client, we ask for their cloud enterprise and solution architecture documents; we consider this the explicit architecture.
The enterprise architecture documents would include items such as the Cloud Service Catalog and Reference Architectures. The solution architecture document would be the detailed design of the workloads deployed to the cloud.
Often a client will not have any enterprise or solution architecture documentation. We can, however, assess the implicit architecture of the as-built workloads in Azure.
In this article we shall discuss the techniques used to assess the as-built architecture of a cloud subscription and the workloads deployed to it.
Assumed knowledge
This article assumes that you are already familiar with the Azure CLI, that you have created a development environment, and that you understand basic Linux commands.
Log Into Cloud
First, let's list all the cloud environments available. Many Cloud Consultants have to support both Government and Commercial clients. Here I am logging into a US Government cloud.
# login
az cloud list --output table
az cloud set --name AzureUSGovernment
az login -u john.smith@organization.onmicrosoft.com -p Password1!
Select Subscription
First, list all the subscriptions and set the one you want to assess as the active subscription. Often organizations will segment their operating environments into proof of concept (POC), development (DEV), quality assurance (QA), staging (STG), and production (PROD).
# ACCOUNTS
az account list -o table
az account set --subscription "ACME DEV"
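Before pulling any data, it can be worth confirming which subscription the CLI is now pointed at. This is a quick sanity check rather than part of the assessment itself:

# VERIFY ACTIVE SUBSCRIPTION (sanity check)
az account show -o table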
Fetching Data
Often I see scripts that will hit a cloud API each time they need to pull data. A better approach is to make a single API call, store the information in a file, then use that file to build out lists as needed.
We do this by using the Azure CLI with tab-separated value formatting and piping the output to a file.
# Fetch Data
az resource list -o tsv > resources.tsv
It is helpful to understand the rough order of magnitude (ROM) of the environment: are we dealing with 10s, 100s, or 1000s of resources?
resources=$(wc -l < resources.tsv)
echo $resources
Resource Groups
It is important to know how many workloads we have. As we know, resources are bundled into resource groups, so the group count gives a rough proxy for the number of workloads. We will get that count by:
- reading in the resources file using the Linux cat command
- cutting out just the 9th field of that tab-separated file
- sorting the output
- removing duplicates from the results
- writing the results to a new file
- counting the number of groups that were found
# Groups
cat resources.tsv | cut -f 9 | sort | uniq > groups.txt
groups=$(wc -l < groups.txt)
echo $groups
Here we do our first analysis, determining the resource-to-group ratio. Keep in mind that some resource groups belong to the operating environment itself, such as the groups that provide shared services like VNets; this will skew the results a bit.
# Groups
echo $((resources / groups))
Normally the ratio is low (6-12) for IaaS, higher (12-24) for PaaS, and varies for SaaS depending on whether it uses PaaS services as part of its architecture.
-Steven Fowler
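As a rough sketch of that heuristic, you could classify the ratio directly in the shell. The thresholds below are only the rules of thumb quoted above, not hard limits, and the messages are my own wording:

# RATIO HEURISTIC (a sketch; thresholds are rules of thumb, not hard limits)
ratio=$((resources / groups))
if [ "$ratio" -le 12 ]; then
  echo "ratio $ratio suggests an IaaS-leaning environment"
elif [ "$ratio" -le 24 ]; then
  echo "ratio $ratio suggests a PaaS-leaning environment"
else
  echo "ratio $ratio - review the workload mix manually"
fi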
Resource Types
Understand the resource types that are in use – is the environment IaaS or PaaS focused?
# TYPES
cat resources.tsv | cut -f 12 | sort | uniq > resource_types.txt
resource_types=$(wc -l < resource_types.txt)
echo $resource_types
Here is a sample output file of resource types for an environment that is IaaS focused.
Microsoft.Cache/Redis
Microsoft.Compute/availabilitySets
Microsoft.Compute/disks
Microsoft.Compute/images
Microsoft.Compute/restorePointCollections
Microsoft.Compute/snapshots
Microsoft.Compute/virtualMachineScaleSets
Microsoft.Compute/virtualMachines
Microsoft.Compute/virtualMachines/extensions
Microsoft.ContainerRegistry/registries
Microsoft.DevTestLab/labs
Microsoft.KeyVault/vaults
Microsoft.Migrate/projects
Microsoft.Network/connections
Microsoft.Network/expressRouteCircuits
Microsoft.Network/loadBalancers
Microsoft.Network/localNetworkGateways
Microsoft.Network/networkInterfaces
Microsoft.Network/networkSecurityGroups
Microsoft.Network/networkWatchers
Microsoft.Network/networkWatchers/connectionMonitors
Microsoft.Network/publicIPAddresses
Microsoft.Network/routeFilters
Microsoft.Network/routeTables
Microsoft.Network/virtualNetworkGateways
Microsoft.Network/virtualNetworks
Microsoft.OperationalInsights/workspaces
Microsoft.OperationsManagement/solutions
Microsoft.RecoveryServices/vaults
Microsoft.Sql/servers
Microsoft.Sql/servers/databases
Microsoft.Storage/storageAccounts
Microsoft.Web/certificates
Microsoft.Web/serverFarms
Microsoft.Web/sites
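To see which of those types dominate, a frequency count over the same field is useful; a handful of VM-related types at the top of the list confirms an IaaS focus, while serverFarms and sites point to PaaS. The output file name here is my own choice:

# TYPE COUNTS (resources per type, most common first)
cat resources.tsv | cut -f 12 | sort | uniq -c | sort -rn > resource_type_counts.txt
head resource_type_counts.txt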
Resource Names
Let's gain better insight into the nomenclature being used for resource names. We will do that by extracting the name field, sorting it, and removing duplicates:
# NAMES
cat resources.tsv | cut -f 6 | sort | uniq > resource_names.txt
resource_names=$(wc -l < resource_names.txt)
echo $resource_names
First you will notice that the number of unique resource names is less than the number of resources from our first script. This means that duplicate names are in use.
# DUP NAMES
cat resources.tsv | cut -f 6 | sort | uniq -d > dup_resource_names.txt
dup_resource_names=$(wc -l < dup_resource_names.txt)
echo $dup_resource_names
more dup_resource_names.txt
I have found that organizations will often reuse NSG names.
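If you want to confirm which resource types the duplicated names belong to (NSGs, in my experience), you can join the duplicate list back against the resource file. This is a sketch; it assumes the name and type fields are still 6 and 12, and the grep is approximate because it matches the name anywhere on the line:

# DUP NAME TYPES (which resource types share a name; approximate match)
grep -F -f dup_resource_names.txt resources.tsv | cut -f 6,12 | sort | uniq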
This is also a good time to evaluate the nomenclature used for the resource names. Do they follow the Microsoft recommended naming conventions, or a specific organizational policy [need blog post on this]?
I have found that organizations will use one standard, then move to another, resulting in some technical debt in the form of inconsistent names.
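A quick way to gauge that inconsistency is to count how many names match a given convention. The regex below is a hypothetical pattern (lower-case tokens separated by hyphens, with an environment token) and should be replaced with the client's actual standard:

# NAMING CONVENTION CHECK (hypothetical pattern - replace with the real standard)
pattern='^[a-z]+(-[a-z0-9]+)*-(dev|qa|stg|prod)-[0-9]+$'
matching=$(grep -E -c "$pattern" resource_names.txt)
echo "$matching of $resource_names names match the convention"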
Locations
One last thing I do at a high level is review which cloud locations they are using. Often a person using the portal will mistakenly select the wrong Azure location when provisioning a resource (this is why IaC is so important). You can review the resource locations like this:
# LOCATIONS
cat resources.tsv | cut -f 4 | sort | uniq > locations.txt
locations=$(wc -l < locations.txt)
echo $locations
This is a typical list for a Government organization operating in the DC area:
usgovarizona
usgovtexas
usgovvirginia
Note that you should see the paired regions for the services you saw in the resource type analysis.
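To spot a mistakenly selected region quickly, counting resources per location is more telling than the distinct list alone; a stray region usually shows up with a count of one or two:

# LOCATION COUNTS (resources per location, most common first)
cat resources.tsv | cut -f 4 | sort | uniq -c | sort -rn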
Digging Deeper
In our next post we shall look at how metadata tagging is being used for proper cloud management.