
Remote Site R.Config with explanations

no service pad – The packet assembler/disassembler (PAD) service supports X.25 links. This service is on by default, but it is not needed unless your router is using X.25.
  • service timestamps debug/log uptime – "log" refers to syslog messages and "debug" to debug output. The uptime keyword makes debug and syslog output show the time since the device was booted. The other option would be "service timestamps debug/log datetime"; datetime makes the chosen output (log/debug) show the real local time (with optional year/msec), but that option isn't used here. Below is an example of the resulting line:

"service timestamps log uptime"
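For comparison, a sketch of the datetime form (not used in this config):

```
! Alternative form: shows wall-clock time instead of uptime
service timestamps debug datetime msec localtime
service timestamps log datetime msec localtime
```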

Service password-encryption – Encrypts all passwords on your router so they cannot be easily read from your running-config. This command uses very weak (type 7) encryption because the router has to decode the passwords very quickly during operation.
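As a sketch of the effect (the type 7 string below is the well-known example encoding of the word "cisco", shown for illustration only):

```
R1(config)# service password-encryption
! A plain-text line password then appears in the running-config as, e.g.:
! username admin password 7 0822455D0A16
```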

Boot-start/end-marker – The boot-start-marker and boot-end-marker flags, which can be seen in Cisco IOS software configuration files, are not CLI commands. These markers are written to configuration files automatically to flag the beginning and end of the boot commands (boot statements). By flagging boot statements, these markers allow the router to more reliably load Cisco IOS images during bootup.

Logging buffered – See below for Cisco documentation.

Enable secret 5 – Sets an encrypted (MD5-hashed, type 5) password for enable mode.

Enable password 7 – Sets the older, weakly encrypted (type 7) enable password. When both are configured, enable secret takes precedence as it's more secure.
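A sketch of how the two lines appear together in a config (the hash and string values are illustrative placeholders, not real credentials):

```
! When both exist, enable secret is used and enable password is ignored
enable secret 5 $1$abcd$XXXXXXXXXXXXXXXXXXXXXX
enable password 7 0822455D0A16
```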


aaa new-model – To enable AAA, you need to configure the aaa new-model command in global configuration.

aaa group server radius rad_eap + server – See below for Cisco documentation and explanation.

aaa authentication login userlist local

Login Authentication

You can use the aaa authentication login command to authenticate users who want exec access into the access server (tty, vty, console and aux).

Example 1: Exec Access with Radius then Local

Router(config)#aaa authentication login default group radius local

In the previous command:
  • The named list is the default one (default).
  • There are two authentication methods (group radius and local).

All users are authenticated with the Radius server (the first method). If the Radius server does not respond, then the router local database is used (the second method). For local authentication, define the username name and password:

Router(config)#username xxx password yyy

Because the list default in the aaa authentication login command is used, login authentication is automatically applied for all login connections (such as tty, vty, console and aux).
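By contrast, a named list such as userlist (seen earlier in this config as "aaa authentication login userlist local") must be applied explicitly to the lines it should protect, for example:

```
! Hypothetical application of the named list to the vty lines
line vty 0 4
 login authentication userlist
```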




Corporate network is on

Public network is on (I think /16 could be /20)

Task was to configure Meraki MR76 to advertise both the corporate network (on native vlan of 20) and the public network on vlan 45. Meraki requires full cloud connectivity to pull its config, which requires external connectivity i.e., Internet access. Below is a brief outline of things I tried:

  • Configuring the port the Meraki connected to as a trunk, allowing all VLANs and setting the native VLAN to 20 (corporate VLAN). This allowed the Meraki to receive a DHCP address within the 172.31/16 range; HOWEVER, it did not have connectivity to the cloud. It did have full corporate connectivity, tested by pinging the edge routers' ingress interfaces. HTTP traffic leaving the corp. network requires the use of a proxy, as there are no NAT rules for a scope of addresses in 172.31/16 (Meraki included), only a single static NAT rule for traffic sourced from the proxy server (this is key to finding the solution).
  • Similar to the above, I also tried configuring the trunk's native VLAN as 45 (public/visitor VLAN), which is governed by neither the proxy nor the firewall; it traverses a "BT Managed Hub" before entering the WAN. I had full internet connectivity and access to the Meraki dashboard, which was great, until I realised the Meraki was placed in the public/visitor network, which is not the network I need it to advertise. I would imagine there is PAT or dynamic NAT configured on this hub to translate addresses in that network.
  • Configuring the port as an access port in VLAN 20 and then in VLAN 45 (corp. and public). This essentially had the same outcomes as the trunk configurations.
  • Adding to an existing firewall zone-based policy to permit cloud traffic from the Meraki to the Meraki public cloud. I still believe this was required, although I haven't been able to verify it, as I had configured the additional class map BEFORE I found the final solution. The zone-fw config can be found within the Wiki.
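As a rough sketch, the trunk attempt looked like this (the interface name is hypothetical):

```
interface GigabitEthernet1/0/10
 description Meraki MR76 uplink (hypothetical port)
 switchport mode trunk
 switchport trunk native vlan 20
 switchport trunk allowed vlan 20,45
```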

Class-map applied to overall policy-map “INSIDE-TO-OUTSIDE-POLICY”

  • I decided to look, once again, at the running config of the Cisco 4351 (our edge router/firewall) which is where WAN traffic traverses. Because my issue was essentially Internet connectivity on vlan 20 (corporate) I homed in on the NAT config. This is what I saw:
show ip nat translations – shows the current bindings between static LAN addresses and their corresponding public addresses.

Now, as you can see, that's not a lot of static NAT entries for a company that hosts 1000s of network devices, and no PAT or dynamic NAT was configured. Then it hit me: THE PROXY SERVER!! Two of the addresses you see above are actually statically assigned to translate the source address of our proxy servers, which account for 100s of devices. The Meraki will not hit the proxy server and therefore will NOT have its source address translated; for cloud communication it requires TCP, UDP and ICMP, NOT HTTP(S).

Because the Meraki requires cloud connectivity to pull its config, I was tasked with finding out the address it was handed out by DHCP – look below:

DHCP administrator tools

It is named "meraki ap" because I recently changed it. I then created a static NAT entry translating the AP's address to a public address:

Confirmed meraki was receiving a static nat entry
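The entry would be shaped like this (both addresses are hypothetical examples; the real ones are not shown here):

```
! Hypothetical inside/public address pair for illustration
ip nat inside source static 172.31.20.50 203.0.113.50
```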

Added an extra SSID on the Meraki for public wifi using VLAN 45 for unfiltered guest traffic, also showcasing (on the left) the native config using VLAN 20:

See the top right for “android-5” (my phone) using the public wifi.

More still to be done, TBC….

Configuring and Applying Crypto Maps

Last Updated on Mon, 22 Aug 2022 | IPSEC

After configuring crypto access lists and transform sets, you can add them to a crypto map.

Consider the network in Figure 7-12 with two routers that peer over an untrusted network. Assume that IKE, crypto access lists, and transform sets are configured and a crypto map is now needed.

Figure 7-12 A Network with a Basic Crypto Map Configuration

[Figure: Router A in San Francisco applies crypto map MAP-TO-NY to its serial interface (s1); Router B in New York applies crypto map MAP-TO-SF to its serial interface (s1). The two routers peer across the untrusted network.]

In the preceding diagram, Router A’s serial interface to the untrusted network is

A crypto map named MAP-TO-NY is applied to this interface (the configuration commands follow). Likewise, Router B’s serial interface is and has a crypto map called MAP-TO-SF.

The following commands create a crypto map on Router A (for clarity, the context of the IOS prompt is included):

RTA#conf t
Enter configuration commands, one per line. End with CNTL/Z.
RTA(config)#crypto map MAP-TO-NY 20 ipsec-isakmp
RTA(config-crypto-map)#match address 101
RTA(config-crypto-map)#set transform-set TRANS-ESP TRANS-AH-ESP
RTA(config-crypto-map)#set peer
RTA(config-crypto-map)#exit
RTA(config)#int s1
RTA(config-if)#crypto map MAP-TO-NY

The command crypto map MAP-TO-NY 20 ipsec-isakmp creates a crypto map entry with a sequence of 20 for a crypto map called MAP-TO-NY (the crypto map is created when its first entry is created). Although this example contains just one entry, crypto maps may contain multiple entries to designate multiple peers, transform sets, and access lists. The sequence number prioritizes the crypto map entries. As the router compares packets to the crypto map, it examines entries in the order of their sequence number (lower sequence numbers are examined first). For this example, a sequence of 20 was chosen so that future entries may be placed before or after this entry. The keyword ipsec-isakmp indicates that IKE is used to manage the SAs for this entry.
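For example, a later entry for a second peer could be added under the same map with a higher sequence number (the ACL number here is a hypothetical illustration):

```
RTA(config)# crypto map MAP-TO-NY 30 ipsec-isakmp
RTA(config-crypto-map)# match address 111
RTA(config-crypto-map)# set transform-set TRANS-ESP
```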

NOTE In addition to IKE, which is specified by the ipsec-isakmp keyword, crypto maps support two other options: ipsec-manual (IPsec without IKE) and cisco (Cisco's pre-IPsec encryption feature called Cisco Encryption Technology, or CET). Consult the IOS documentation for configuring ipsec-manual or cisco.

The command match address 101 assigns crypto access list 101 to this entry. Outbound packets that match this list are protected with IPsec. Inbound packets that match the reverse logic of the list are expected to be protected.

The command set transform-set TRANS-ESP TRANS-AH-ESP defines the transform sets that are acceptable for protecting the traffic covered by the crypto access list. When negotiating IPsec SAs with the remote peer (Router B), the router proposes transform sets in the order listed by this command (this router’s first choice is the transform set TRANS-ESP). Router A and Router B must agree to use a common transform set (a common set of protocols and algorithms) before an SA can be established. TRANS-ESP and TRANS-AH-ESP are the names of transform sets previously created by the crypto ipsec transform-set command. The transform set names (TRANS-ESP, TRANS-AH-ESP) are locally significant and do not have to be the same on both routers.

The command set peer defines the remote peer, Router B, with which this router builds the IPsec SA and to which it subsequently sends the protected traffic. Multiple peers can be configured by repeating the set peer command. This provides a level of redundancy for when SAs are established: If the first peer is not reachable, the router attempts to establish the SA with the next peer in the entry.

The interface configuration command crypto map MAP-TO-NY applies the crypto map to the router's Serial 1 interface (selected by the command int s1). Like access lists, crypto maps do not do anything until you apply them to an interface. The proper place to apply the crypto map is the interface where the protected traffic exits the router: the interface that points in the direction of the remote peer. In this example, Router A's Serial 1 interface is the exit point (refer to Figure 7-12).

The following is the corresponding configuration on Router B (only the relevant crypto map lines are shown):

RTB#sh run

Current configuration:
hostname RTB
<lines deleted for brevity>
!
crypto map MAP-TO-SF 20 ipsec-isakmp
 match address 102
 set transform-set B-TRANS1 B-TRANS2
 set peer
!
interface Serial1
 ip address
 crypto map MAP-TO-SF

The crypto access list 102 must be a mirror image of list 101 on Router A, and at least one of the transform sets (B-TRANS1 or B-TRANS2) must match one of Router A’s transform sets (TRANS-ESP and TRANS-AH-ESP). A match means the transform sets share the same protocols (AH, ESP) and algorithms (DES or MD5, for example).
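To illustrate the mirror-image requirement with hypothetical LANs (say 10.1.1.0/24 behind Router A and 10.2.2.0/24 behind Router B), the two crypto ACLs would be:

```
! Router A - protect traffic from the SF LAN to the NY LAN
access-list 101 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
! Router B - the same traffic with source and destination reversed
access-list 102 permit ip 10.2.2.0 0.0.0.255 10.1.1.0 0.0.0.255
```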

NOTE Crypto access lists are crypto map elements and interoperate with regular packet-filtering access lists that might exist on an interface. Packets blocked by regular access lists are not processed by IPsec.


Cisco 4451 Capture Monitor

To configure a monitor capture specifying an access list or a class map as the core filter for the packet capture, use the monitor capture command in privileged EXEC mode. To disable the monitor capture with the specified access list or class map as the core filter, use the no form of this command.

monitor capture capture-name {access-list access-list-name | class-map class-map-name}

no monitor capture capture-name {access-list access-list-name | class-map class-map-name}


The following example shows how to define a core system filter using an existing access control list:

Device> enable
Device# configure terminal
Device(config)# ip access-list standard acl1
Device(config-std-nacl)# permit any
Device(config-std-nacl)# exit
Device(config)# exit
Device# monitor capture mycap access-list acl1
Device# end

The following example shows how to define a core system filter using an existing class map:

Device> enable
Device# configure terminal
Device(config)# ip access-list standard acl1
Device(config-std-nacl)# permit any
Device(config-std-nacl)# exit
Device(config)# class-map match-all cmap
Device(config-cmap)# match access-group name acl1
Device(config-cmap)# exit
Device(config)# exit
Device# monitor capture mycap class-map cmap
Device# end

Monitor Capture (interface/control plane)

To configure monitor capture specifying an attachment point and the packet flow direction, use the monitor capture command in privileged EXEC mode. To disable the monitor capture with the specified attachment point and the packet flow direction, use the no form of this command.

monitor capture capture-name { interface type number | control-plane } { in | out | both }

no monitor capture capture-name { interface type number | control-plane } { in | out | both }


The following example shows how to add an attachment point to an interface:

Device> enable
Device# monitor capture mycap interface GigabitEthernet 0/0/1 in
Device# end

The following example shows how to add an attachment point to a control plane:

Device> enable
Device# monitor capture mycap control-plane out 
Device# end

monitor capture clear
To clear the contents of a packet capture buffer, use the monitor capture clear command in privileged EXEC mode.

monitor capture capture-name clear

The following example shows how to clear the contents of a capture buffer:

Device> enable
Device# monitor capture mycap clear
Device# end

monitor capture start

To start the capture of packet data at a traffic trace point into a buffer, use the monitor capture start command in privileged EXEC mode.

monitor capture capture-name start

monitor capture stop

To stop the capture of packet data at a traffic trace point, use the monitor capture stop command in privileged EXEC mode.

monitor capture capture-name stop

show monitor capture – Displays packet capture details.
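Putting these commands together, a typical capture session might look like this (capture and ACL names are arbitrary examples):

```
Device# monitor capture mycap access-list acl1
Device# monitor capture mycap interface GigabitEthernet 0/0/1 both
Device# monitor capture mycap start
Device# monitor capture mycap stop
Device# show monitor capture mycap
Device# monitor capture mycap clear
```

The capture collects only packets permitted by acl1 on the chosen interface, in both directions, until stopped.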

Cisco Zone-Based Firewall Policy for Meraki Wireless AP

Zone-based firewall is an advanced method of stateful firewall. A stateful firewall maintains an entry in its stateful database (source IP address, destination IP address, source port number and destination port number) for traffic generated by the trusted (private) network. Only traffic matching that database, including the replies to connections initiated from the private (trusted) network, is then allowed through.

Zone-based Firewall procedure:

  1. Create zones and assign interfaces to them – In a zone-based firewall, logical zones are created and each interface is assigned to a zone. By default, traffic from one zone to another is not allowed.
  2. Create class-map – After creating a zone, a class-map policy is made which will identify the type of traffic, like ICMP, on which the policies will be applied.
  3. Create policy-map and assign class-map to the policy-map – After identifying the type of traffic in class-map, we have to define what action must be taken on the traffic. The action can be:
    • Inspect: The same as CBAC inspection, i.e. only inspected traffic (return traffic of the inside (trusted) network) is allowed in from the outside network.
    • Drop: This is the default action for all traffic. The class-map configured in a policy map can be configured to drop unwanted traffic.
    • Pass: This allows traffic from one zone to another but, unlike the inspect action, does not create a session state for the traffic. If you want to allow traffic in the opposite direction, a corresponding policy must be created.

Below are the configuration tasks that you need to follow:

  1. Configure Zones.
  2. Assign Router Interfaces to zones.
  3. Create Zone Pairs.
  4. Configure Interzone Access Policy (Class Maps & Policy Maps)
  5. Apply Policy Maps to Zone Pairs.

Task 1 : Configure Zones

zone security INSIDE

Task 2 : Assign Router Interfaces to Zones

interface GigabitEthernet0/0/1

zone-member security INSIDE

Task 3 : Create Zone Pairs

Zone pairs are created to connect the zones. If you want two zones to communicate, you have to create a zone pair. In our scenario the traffic flows between :
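A minimal sketch of the zones and zone pair used here (the OUTSIDE zone name is taken from the zone-pair shown in Task 6 below):

```
zone security INSIDE
zone security OUTSIDE
!
zone-pair security ZP-INSIDE-TO-OUTSIDE source INSIDE destination OUTSIDE
```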


Task 4 : Configure Interzone Access Policy

A class map sorts traffic based on the following criteria :

1.) Access-group

2.) Protocol

3.) A subordinate class map.

So first we need to create an ACL and associate it with the class map.

ip access-list extended OUTBOUND-INSIDE-MERAKI-MGMT
 remark Next 24 lines – Meraki-Mgmt
 permit udp host eq 7351
 permit udp host eq 9350
 permit udp eq 7351
 permit udp eq 9350
 permit udp eq 7351
 permit udp eq 9350
 permit udp eq 7351
 permit udp eq 9350
 permit tcp host eq 80
 permit tcp host eq 443
 permit tcp host eq 7734
 permit tcp host eq 7752
 permit tcp eq 80
 permit tcp eq 443
 permit tcp eq 7734
 permit tcp eq 7752
 permit tcp eq 80
 permit tcp eq 443
 permit tcp eq 7734
 permit tcp eq 7752
 permit udp any eq 123
 permit udp host eq 53
 permit icmp host
 permit icmp

class-map type inspect match-any OUTBOUND-INSIDE-MERAKI-MGMT
 match access-group name OUTBOUND-INSIDE-MERAKI-MGMT
 match protocol tcp
 match protocol udp
 match protocol icmp

Task 5: Policy-Map Configuration

policy-map type inspect INSIDE-TO-OUTSIDE-POLICY
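The class and action lines under this policy-map were not captured above; a typical shape, assuming the Meraki class-map from Task 4 is inspected and all other traffic hits the default drop, would be:

```
policy-map type inspect INSIDE-TO-OUTSIDE-POLICY
 class type inspect OUTBOUND-INSIDE-MERAKI-MGMT
  inspect
 class class-default
  drop
```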



Task 6 : Apply policy maps to zone pairs

zone-pair security ZP-INSIDE-TO-OUTSIDE source INSIDE destination OUTSIDE

 service-policy type inspect INSIDE-TO-OUTSIDE-POLICY

That completes the basic configuration of a zone-based firewall.


You can use the below commands to perform some basic troubleshooting and verification.

a.) Show commands

show class-map type inspect

show policy-map type inspect

show zone-pair security

b.) Debug Commands

debug policy-firewall detail

debug policy-firewall events

debug policy-firewall protocol tcp

debug policy-firewall protocol udp

Secure Azure Databricks Deployment


Please take a note of Azure Databricks control plane endpoints for your workspace from here (map it based on region of your workspace). We’ll need these details to configure Azure Firewall rules later.

    • databricks-webapp – Azure Databricks workspace subnets → Region-specific Webapp Endpoint – https:443 – Communication with the Azure Databricks webapp
    • databricks-observability-eventhub – Azure Databricks workspace subnets → Region-specific Observability Event Hub Endpoint – https:9093 – Transit for Azure Databricks on-cluster service-specific telemetry
    • databricks-artifact-blob-storage – Azure Databricks workspace subnets → Region-specific Artifact Blob Storage Endpoint – https:443 – Stores Databricks Runtime images to be deployed on cluster nodes
    • databricks-dbfs – Azure Databricks workspace subnets → DBFS Blob Storage Endpoint – https:443 – Azure Databricks workspace root storage
    • (OPTIONAL – please see Step 3 for External Hive Metastore below) Azure Databricks workspace subnets → Region-specific SQL Metastore Endpoint – tcp:3306 – Stores metadata for databases and child objects in an Azure Databricks workspace
Configure Azure Firewall Rules

With Azure Firewall, you can configure:

    • Application rules that define fully qualified domain names (FQDNs) that can be accessed from a subnet.
    • Network rules that define source address, protocol, destination port, and destination address.

Network traffic is subjected to the configured firewall rules when you route your network traffic to the firewall as the subnet default gateway.
Configure Application Rule

We first need to configure application rules to allow outbound access to Log Blob Storage and Artifact Blob Storage endpoints in the Azure Databricks control plane plus the DBFS Root Blob Storage for the workspace.

    • Go to the resource group, and select the firewall.
    • On the firewall page, under Settings, select Rules.
    • Select the Application rule collection tab.
    • Select Add application rule collection.
    • For Name, type databricks-control-plane-services.
    • For Priority, type 200.
    • For Action, select Allow.
    • Configure the following in Rules -> Target FQDNs
Each application rule uses Source type "IP Address", Source "Azure Databricks workspace subnets", and Protocol https:443; the target FQDNs come from the Prerequisites notes (for Central US). Several rule names were lost in formatting; the recoverable rows are:

    • (name not captured) – https:443 – Refer notes from Prerequisites (for Central US)
    • (name not captured) – https:443 – Refer notes from Prerequisites (for Central US); this is separate log storage only for US regions today
    • (name not captured) – https:443 – Refer notes from Prerequisites (for Central US)
    • databricks-dbfs – https:443 – Refer notes from Prerequisites
    • Public Repositories for Python and R Libraries (OPTIONAL – if workspace users are allowed to install libraries from public repos) – add any other public repos as needed
    • (rule used by Ganglia UI) – Source: Azure Databricks workspace subnets
Configure Network Rule

Some endpoints can’t be configured as application rules using FQDNs. So we’ll set those up as network rules, namely the Observability Event Hub and Webapp.

    • Open the resource group adblabs-rg, and select the firewall.
    • On the firewall page, under Settings, select Rules.
    • Select the Network rule collection tab.
    • Select Add network rule collection.
    • For Name, type databricks-control-plane-services.
    • For Priority, type 200.
    • For Action, select Allow.
    • Configure the following in Rules -> IP Addresses.
Each network rule uses Protocol TCP, Source type "IP Address", Source "Azure Databricks workspace subnets", and Destination type "IP Address"; destination addresses and ports come from the Prerequisites notes. The recoverable rows are:

    • (name not captured) – TCP – Refer notes from Prerequisites above (for Central US)
    • (name not captured) – TCP – Refer notes (for Central US); OPTIONAL – please see Step 3 for External Hive Metastore
    • (name not captured) – TCP – Refer notes (for Central US)

Below is a terraform script to add rules to the Azure Firewall.

# Priority range 14150 - 14159
resource "azurerm_firewall_policy_rule_collection_group" "data-archive" {
  count              = var.ENVIRONMENT == "npd" ? 1 : 0
  name               = "${module.names-group-data-office.standard["afw-policy-group"]}-data-archive"
  firewall_policy_id =
  priority           = 14150

  # Collection 1
  application_rule_collection {
    name     = "${module.names-data-archive.standard["afw-rule-collection"]}-perimeter"
    priority = 3200
    action   = "Allow"

    # Rule 1
    rule {
      name = "${module.names-data-archive.standard["afw-rule"]}-allow-ado-agents-https-outbound"
      protocols {
        type = "Https"
        port = 443
      }
      terminate_tls    = true
      source_addresses =
      destination_urls = [
        "", # required for partner terraform provider download (databricks)
        "", # required for partner terraform provider download (databricks)
        ""  # github redirects to this; there is no way to make a more specific rule as the rest is a SAS token which changes every time
      ]
    }

    # Rule 2
    rule {
      # Allow databricks subnets access to databricks APIs
      name = "${module.names-data-archive.standard["afw-rule"]}-allow-agent-databricks-api-calls"
      protocols {
        type = "Https"
        port = 443
      }
      terminate_tls     = true
      source_addresses  =
      destination_fqdns = [
        "" # Calling databricks API for terraform creation of databricks objects
      ]
    }

    # Rule 3
    rule {
      # Allow databricks subnets access to Maven repo URLs - called by Databricks to install Java libraries needed by Spark
      name = "${module.names-data-archive.standard["afw-rule"]}-allow-databricks-maven-calls"
      protocols {
        type = "Https"
        port = 443
      }
      terminate_tls    = true
      source_addresses =
      destination_urls = [
        "",
        ""
      ]
    }
  }

  # Collection 2
  network_rule_collection {
    # needed for dbricks to work - step 4
    name     = "${module.names-data-archive.standard["afw-rule"]}-dataarchive-databricks-net"
    priority = 3100
    action   = "Allow"

    # Rule 2
    rule {
      # Allow databricks subnets to access the databricks webapp
      # Tried to use an application rule but it failed complaining that it needed Target Fqdns, Target Urls, FqdnTags or WebCategories.
      name                  = "${module.names-data-archive.standard["afw-rule"]}-allow-dbricks-webapp"
      protocols             = ["TCP"]
      source_addresses      =
      destination_addresses = [""]
      destination_ports     = ["443"]
    }

    # Rule 1
    rule {
      # Allow databricks subnets to access the observability hub
      name              = "${module.names-data-archive.standard["afw-rule"]}-allow-dbricks-observability"
      protocols         = ["TCP"]
      source_addresses  =
      destination_fqdns = [""] # observability address for uksouth for databricks
      destination_ports = ["9093"]
    }
  }

  lifecycle {
    create_before_destroy = true
  }
}
Create User Defined Routes (UDRs)

At this point, the majority of the infrastructure setup for a secure, locked-down deployment has been completed. We now need to route appropriate traffic from Azure Databricks workspace subnets to the Control Plane SCC Relay IP (see FAQ below) and Azure Firewall setup earlier.

    • On the Azure portal menu, select All services and search for Route Tables. Go to that section.
    • Select Add
    • For Name, type firewall-route.
    • For Subscription, select your subscription.
    • For the Resource group, select adblabs-rg.
    • For Location, select the same location that you used previously i.e. Central US
    • Select Create.
    • Select Refresh, and then select the firewall-route-table route table.
    • Select Routes and then select Add.
    • For Route name, add to-firewall.
    • For Address prefix, add
    • For Next hop type, select Virtual appliance.
    • For the Next hop address, add the Private IP address for the Azure Firewall that you noted earlier.
    • Select OK.

Now add one more route for Azure Databricks SCC Relay IP.

    • Select Routes and then select Add.
    • For Route name, add to-central-us-databricks-SCC-relay-ip.
    • For Address prefix, add the Control Plane SCC relay service IP address for Central US from here. Please note that there could be more than one IP address for the relay service; in that case, add additional routes on the UDR accordingly. In order to get the SCC relay IP, run nslookup on the relay service endpoint e.g.,
    • For Next hop type, select Internet. Although it says Internet, traffic between the Azure Databricks data plane and the Azure Databricks SCC relay service IP stays on the Azure network and does not travel over the public internet (for more details please refer to this guide).
    • Select OK.

The route table needs to be associated with both of the Azure Databricks workspace subnets.

    • Go to the firewall-route-table.
    • Select Subnets and then select Associate.
    • Select Virtual network > azuredatabricks-spoke-vnet.
    • For Subnet, select both workspace subnets.
    • Select OK.

Below is the terraform code:

   "routeTable": {
      "disableBgpRoutePropagation": true,
      "routes": [
          {
            "name": "default-via-fw",
            "addressPrefix": "",
            "nextHopIpAddress": "",
            "nextHopType": "VirtualAppliance"
          },
          {
            "name": "to-uk-south-databricks-webapp",
            "addressPrefix": "",
            "nextHopIpAddress": "",
            "nextHopType": "Internet"
          },
          {
            "name": "to-uk-south-databricks-scc-relay",
            "addressPrefix": "",
            "nextHopIpAddress": "",
            "nextHopType": "Internet"
          },
          {
            "name": "to-uk-south-databricks-control-plane",
            "addressPrefix": "",
            "nextHopIpAddress": "",
            "nextHopType": "Internet"
          },
          {
            "name": "to-uk-south-databricks-extended-infrastructure",
            "addressPrefix": "",
            "nextHopIpAddress": "",
            "nextHopType": "Internet"
          }
      ]
   }

Azure: Create an Application Gateway that hosts multiple web sites using Azure PowerShell

You can use Azure PowerShell to configure the hosting of multiple web sites when you create an application gateway. In this article, you define backend address pools using virtual machine scale sets. You then configure listeners and rules based on domains that you own to make sure web traffic arrives at the appropriate servers in the pools. This article assumes that you own multiple domains and uses examples of and

In this article, you learn how to:

  • Set up the network
  • Create an application gateway
  • Create backend listeners
  • Create routing rules
  • Create virtual machine scale sets with the backend pools
  • Create a CNAME record in your domain
Multi-site routing example

If you don’t have an Azure subscription, create a free account before you begin.


This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure PowerShell.

Use Azure Cloud Shell

Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell preinstalled commands to run the code in this article without having to install anything on your local environment.

To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
Go to, or select the Launch Cloud Shell button to open Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:

  1. Start Cloud Shell.
  2. Select the Copy button on a code block to copy the code.
  3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux or by selecting Cmd+Shift+V on macOS.
  4. Select Enter to run the code.

If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run Get-Module -ListAvailable Az . If you need to upgrade, see Install Azure PowerShell module. If you’re running PowerShell locally, you also need to run Login-AzAccount to create a connection with Azure.

Create a resource group

A resource group is a logical container into which Azure resources are deployed and managed. Create an Azure resource group using New-AzResourceGroup.

New-AzResourceGroup -Name myResourceGroupAG -Location eastus

Create network resources

Create the subnet configurations using New-AzVirtualNetworkSubnetConfig. Create the virtual network using New-AzVirtualNetwork with the subnet configurations. And finally, create the public IP address using New-AzPublicIpAddress. These resources are used to provide network connectivity to the application gateway and its associated resources.

$backendSubnetConfig = New-AzVirtualNetworkSubnetConfig `
  -Name myBackendSubnet `

$agSubnetConfig = New-AzVirtualNetworkSubnetConfig `
  -Name myAGSubnet `

$vnet = New-AzVirtualNetwork `
  -ResourceGroupName myResourceGroupAG `
  -Location eastus `
  -Name myVNet `
  -AddressPrefix `
  -Subnet $backendSubnetConfig, $agSubnetConfig

$pip = New-AzPublicIpAddress `
  -ResourceGroupName myResourceGroupAG `
  -Location eastus `
  -Name myAGPublicIPAddress `
  -AllocationMethod Dynamic

Create an application gateway

Create the IP configurations and frontend port

Associate the subnet that you previously created to the application gateway using New-AzApplicationGatewayIPConfiguration. Assign the public IP address to the application gateway using New-AzApplicationGatewayFrontendIPConfig.

$vnet = Get-AzVirtualNetwork `
  -ResourceGroupName myResourceGroupAG `
  -Name myVNet

# Select the dedicated application gateway subnet by name.
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq 'myAGSubnet' }


$gipconfig = New-AzApplicationGatewayIPConfiguration `
  -Name myAGIPConfig `
  -Subnet $subnet

$fipconfig = New-AzApplicationGatewayFrontendIPConfig `
  -Name myAGFrontendIPConfig `
  -PublicIPAddress $pip

$frontendport = New-AzApplicationGatewayFrontendPort `
  -Name myFrontendPort `
  -Port 80

Create the backend pools and settings

Create the first backend address pool for the application gateway using New-AzApplicationGatewayBackendAddressPool. Configure the settings for the pool using New-AzApplicationGatewayBackendHttpSettings.

$contosoPool = New-AzApplicationGatewayBackendAddressPool `
  -Name contosoPool

$fabrikamPool = New-AzApplicationGatewayBackendAddressPool `
  -Name fabrikamPool

$poolSettings = New-AzApplicationGatewayBackendHttpSettings `
  -Name myPoolSettings `
  -Port 80 `
  -Protocol Http `
  -CookieBasedAffinity Enabled `
  -RequestTimeout 120

Create the listeners and rules

Listeners are required to enable the application gateway to route traffic appropriately to the backend address pools. In this article, you create two listeners, one for each of your two domains.

Create the first listener using New-AzApplicationGatewayHttpListener with the frontend configuration and frontend port that you previously created. A rule is required for the listener to know which backend pool to use for incoming traffic. Create a basic rule named contosoRule using New-AzApplicationGatewayRequestRoutingRule.

$contosolistener = New-AzApplicationGatewayHttpListener `
  -Name contosoListener `
  -Protocol Http `
  -FrontendIPConfiguration $fipconfig `
  -FrontendPort $frontendport `
  -HostName ""

$fabrikamlistener = New-AzApplicationGatewayHttpListener `
  -Name fabrikamListener `
  -Protocol Http `
  -FrontendIPConfiguration $fipconfig `
  -FrontendPort $frontendport `
  -HostName ""

$contosoRule = New-AzApplicationGatewayRequestRoutingRule `
  -Name contosoRule `
  -RuleType Basic `
  -HttpListener $contosoListener `
  -BackendAddressPool $contosoPool `
  -BackendHttpSettings $poolSettings

$fabrikamRule = New-AzApplicationGatewayRequestRoutingRule `
  -Name fabrikamRule `
  -RuleType Basic `
  -HttpListener $fabrikamListener `
  -BackendAddressPool $fabrikamPool `
  -BackendHttpSettings $poolSettings

Create the application gateway

Now that you created the necessary supporting resources, specify parameters for the application gateway using New-AzApplicationGatewaySku, and then create it using New-AzApplicationGateway.

$sku = New-AzApplicationGatewaySku `
  -Name Standard_Medium `
  -Tier Standard `
  -Capacity 2

$appgw = New-AzApplicationGateway `
  -Name myAppGateway `
  -ResourceGroupName myResourceGroupAG `
  -Location eastus `
  -BackendAddressPools $contosoPool, $fabrikamPool `
  -BackendHttpSettingsCollection $poolSettings `
  -FrontendIpConfigurations $fipconfig `
  -GatewayIpConfigurations $gipconfig `
  -FrontendPorts $frontendport `
  -HttpListeners $contosoListener, $fabrikamListener `
  -RequestRoutingRules $contosoRule, $fabrikamRule `
  -Sku $sku

Create virtual machine scale sets

In this example, you create two virtual machine scale sets that support the two backend pools that you created. The scale sets that you create are named myvmss1 and myvmss2. Each scale set contains two virtual machine instances on which you install IIS. You assign the scale set to the backend pool when you configure the IP settings.

$vnet = Get-AzVirtualNetwork `
  -ResourceGroupName myResourceGroupAG `
  -Name myVNet

$appgw = Get-AzApplicationGateway `
  -ResourceGroupName myResourceGroupAG `
  -Name myAppGateway

$contosoPool = Get-AzApplicationGatewayBackendAddressPool `
  -Name contosoPool `
  -ApplicationGateway $appgw

$fabrikamPool = Get-AzApplicationGatewayBackendAddressPool `
  -Name fabrikamPool `
  -ApplicationGateway $appgw

for ($i=1; $i -le 2; $i++)
{
  if ($i -eq 1)
  {
    $poolId = $contosoPool.Id
  }
  if ($i -eq 2)
  {
    $poolId = $fabrikamPool.Id
  }

  $ipConfig = New-AzVmssIpConfig `
    -Name myVmssIPConfig$i `
    -SubnetId $vnet.Subnets[1].Id `
    -ApplicationGatewayBackendAddressPoolsId $poolId

  $vmssConfig = New-AzVmssConfig `
    -Location eastus `
    -SkuCapacity 2 `
    -SkuName Standard_DS2 `
    -UpgradePolicyMode Automatic

  Set-AzVmssStorageProfile $vmssConfig `
    -ImageReferencePublisher MicrosoftWindowsServer `
    -ImageReferenceOffer WindowsServer `
    -ImageReferenceSku 2016-Datacenter `
    -ImageReferenceVersion latest `
    -OsDiskCreateOption FromImage

  Set-AzVmssOsProfile $vmssConfig `
    -AdminUsername azureuser `
    -AdminPassword "Azure123456!" `
    -ComputerNamePrefix myvmss$i

  Add-AzVmssNetworkInterfaceConfiguration `
    -VirtualMachineScaleSet $vmssConfig `
    -Name myVmssNetConfig$i `
    -Primary $true `
    -IPConfiguration $ipConfig

  New-AzVmss `
    -ResourceGroupName myResourceGroupAG `
    -Name myvmss$i `
    -VirtualMachineScaleSet $vmssConfig
}

Install IIS

Install IIS on the scale set instances by using the Custom Script Extension:

$publicSettings = @{ "fileUris" = (,""); 
  "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File appgatewayurl.ps1" }

for ($i=1; $i -le 2; $i++)
{
  $vmss = Get-AzVmss `
    -ResourceGroupName myResourceGroupAG `
    -VMScaleSetName myvmss$i

  Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "customScript" `
    -Publisher "Microsoft.Compute" `
    -Type "CustomScriptExtension" `
    -TypeHandlerVersion 1.8 `
    -Setting $publicSettings

  Update-AzVmss `
    -ResourceGroupName myResourceGroupAG `
    -Name myvmss$i `
    -VirtualMachineScaleSet $vmss
}

Create CNAME record in your domain

After the application gateway is created with its public IP address, you can get the DNS address and use it to create a CNAME record in your domain. You can use Get-AzPublicIPAddress to get the DNS address of the application gateway. Copy the fqdn value of the DNSSettings and use it as the value of the CNAME record that you create. Using A-records isn’t recommended because the VIP may change when the application gateway is restarted in the V1 SKU.

Get-AzPublicIPAddress -ResourceGroupName myResourceGroupAG -Name myAGPublicIPAddress

Test the application gateway

Enter your domain name into the address bar of your browser.

Test contoso site in application gateway

Change the address to your other domain and you should see something like the following example:

Test fabrikam site in application gateway
Clean up resources
When no longer needed, remove the resource group, application gateway, and all related resources using Remove-AzResourceGroup.

Remove-AzResourceGroup -Name myResourceGroupAG

AzCopy large files to Azure Data Lake Storage

There are many ways to upload files to Azure Data Lake Storage (ADLS) Gen2. In this article, we will compare two popular ways for an organization to upload files to ADLS. Mainly, we will compare the parameters below to identify which one best suits your needs:

  1. Performance: The speed at which the file is uploaded
  2. Ease: How easy it is to set up and use
  3. Automation: Whether any manual intervention is needed after operationalization

In this tutorial we will rate each of the above three parameters on a scale from 1 to 5, with 1 being the lowest and 5 the highest.

We will test these two approaches by uploading a 10 GB test file:

  1. AzCopy
  2. Azure Storage Explorer

Without further ado, let’s get started.

AzCopy

In order to transfer a file using AzCopy, you will need AzCopy, which you can download from here.

Once AzCopy is downloaded, let's create an ADLS Gen2 storage account for our tutorial. I have already created one for the purpose of this tutorial, along with a container for the AzCopy test:

Screenshot example of AzCopy ADLS Gen2 setup

In order to use AzCopy to transfer a file to the container, we need a SAS token. Let's generate one:

Click "Shared access signature" under "Security + networking" within the storage account.

Screenshot example of SAS token generation

I have selected a 1-day range for the SAS token to be active. Then click the "Generate SAS and connection string" button.

Screenshot example range for SAS token

Copy the SAS token from the available URLs; it should look like the example below:


Now that we have the SAS token, let's build our AzCopy command. Below is the syntax for the AzCopy command line:

azcopy copy '<local-file-path>' '<container-URL-with-SAS-token>'

// TIP This example encloses path arguments with single quotes (''). 
// Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). 
// If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
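For reference, the destination argument is simply the container URL with the SAS token appended as a query string. A minimal sketch of how the two pieces fit together; the account name, container, and token below are hypothetical placeholders, not real credentials:

```python
# Hypothetical container URL and SAS token, for illustration only.
container_url = "https://mystorageaccount.blob.core.windows.net/azcopytest"
sas_token = "sv=2021-06-08&ss=b&srt=co&sp=rwl&sig=EXAMPLE"

# azcopy expects the destination as "<container URL>?<SAS token>".
destination = f"{container_url}?{sas_token}"
print(destination)
```

The same composed URL works for both AzCopy and the Storage Explorer SAS URL connection used later in this article.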

I have placed the sample file and the AzCopy application in a root folder on the C drive called "AzCopy". Replacing the values in the above syntax, below is what we get:

azcopy copy "c:\AzCopy\10GB.bin" ""

Since AzCopy is a command-line utility, we will have to open a command prompt and execute the above command. At the end of the execution, AzCopy provides a summary of the transfer. Let's execute:

Screenshot example command line summary of ADLS transfer with AzCopy and SAS token authorization

So, it took 16.34 minutes to transfer the 10 GB file using AzCopy with SAS token authorization.

Now let’s try using Storage Explorer.

Storage Explorer

Azure Storage Explorer has a graphical user interface for interacting with storage accounts on Azure, which makes it more user friendly; but here we are trying to compare which of these two options is best for transferring big files to ADLS Gen2. Let's download Storage Explorer from here. Run the downloaded executable and follow the installation wizard.

Once installed, Storage Explorer should automatically start with the setup screen to connect to Azure Storage. Select the "ADLS Gen2 container or directory" option:

Screenshot example Azure Storage Explorer to ADLS Gen2 setup

Since we already have the SAS token we generated during the AzCopy test, select the "SAS URL" option on the next screen and then click "Next":

Screenshot example Azure Storage Explorer select connection method

Enter a friendly name in the Display name field, paste the SAS URL, and click "Next":

Screenshot example Azure Storage Explorer connection info

Click “Connect” on the Summary screen:

Screenshot example summary screen

This step connects to your storage container. The next step is simply to click "Upload" in the GUI, select the file, and start uploading:

Screenshot example Storage Explorer upload

We can monitor the transfer in the bottom pane:

Screenshot example Storage Explorer monitor transfer

The total time taken by Storage Explorer was 18.03 minutes.
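Converted to throughput (assuming 1 GB = 1024 MB), the two timings work out roughly as follows; a quick back-of-the-envelope check:

```python
# Throughput for the 10 GB test file (1 GB = 1024 MB).
file_mb = 10 * 1024

azcopy_seconds = 16.34 * 60
explorer_seconds = 18.03 * 60

azcopy_mbps = file_mb / azcopy_seconds      # ~10.4 MB/s
explorer_mbps = file_mb / explorer_seconds  # ~9.5 MB/s

print(f"AzCopy: {azcopy_mbps:.1f} MB/s, Storage Explorer: {explorer_mbps:.1f} MB/s")
```

So the raw speed difference is real but modest, roughly 1 MB/s on this test.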


Let's use our parameters to identify which of the two approaches is best for transferring big files to Azure Storage.

Performance: The speed at which the file is uploaded

I would rate the performance of AzCopy better than Storage Explorer; it took about 3 minutes less than Storage Explorer:

  • AzCopy – 5 points
  • Storage Explorer – 3 points

Ease: How easy it is to set up and use

Storage Explorer is much easier to use than AzCopy because of its user-friendly GUI, while AzCopy is a command-line utility that requires commands to be formed and executed.

  • AzCopy – 3 points
  • Storage Explorer – 5 points

Automation: Whether any manual intervention is needed after operationalization

AzCopy can be automated using Windows Task Scheduler, PowerShell, and other orchestration tools, and supports parameterized executions, while Storage Explorer is completely manual; we cannot automate jobs or tasks using the Storage Explorer GUI.

  • AzCopy – 4 points
  • Storage Explorer – 1 point
AzCopy vs Storage Explorer comparison table
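Summing the three scores gives a simple overall tally (a rough aggregate that weights all three parameters equally):

```python
# Scores from the three comparison sections above.
scores = {
    "AzCopy":           {"Performance": 5, "Ease": 3, "Automation": 4},
    "Storage Explorer": {"Performance": 3, "Ease": 5, "Automation": 1},
}

# Total per tool, weighting each parameter equally.
totals = {tool: sum(parts.values()) for tool, parts in scores.items()}
print(totals)  # {'AzCopy': 12, 'Storage Explorer': 9}
```

Of course, equal weighting is arbitrary; a team that never needs automation might weight Ease much higher and reach the opposite conclusion.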

Though both options have pros and cons, AzCopy can cover all the Storage Explorer scenarios for transferring to/from ADLS Gen2, while some use cases, especially automation, are not achievable using Storage Explorer.

Additional Notes:

Azure AD PIM Request to enable contributor access to Azure resources.

Cmd command:

azcopy login – authenticates to Azure AD

Azure AD:
azcopy copy "C:\AzCopy\FujitsuOB\textfile.txt" ""

Azure Storage Access Token:
azcopy copy "C:\AzCopy\FujitsuOB\textfile.txt" "" --recursive=true

Azure AppGW WAF Inspect Request Body

For Application Gateway with the WAF_v2 SKU, increasing the maximum request body size to 2,000 KB (2 MB) is now in public preview, as long as the WAF is using OWASP 3.2.

The recommended size is 128 KB.

Configure Azure Application Gateway (WAF) to preserve client ip address

HTTP requests often pass through one or more proxy servers before they reach the endpoint web server, which changes the source IP address for the request. As a result, endpoint web servers cannot rely on the source IP from the network connection (socket) to be the IP address of the original request. For this reason, you may want to use one of two options to preserve the original client IP address: X-Forwarded-For (XFF), or transparent proxy.

The backend server sees requests in its logs as coming from the Application Gateway (via its private IP address) and not from the requesting "public" IP address.

How do I configure the WAF so that it preserves/passes the requests as coming from the public IP address (not the private IP address)?

We cannot preserve the client IP at the network layer because the Application Gateway is a proxy: it replaces the original client IP with the Application Gateway instance IP and forwards requests to the backend server. However, the Application Gateway inserts extra headers into all requests before forwarding them to the backend, including the X-Forwarded-For header, which carries the original client IP.

You can configure Application Gateway to modify request and response headers and the URL by using Rewrite HTTP headers and URL, or to modify the URI path by using a path-override setting. However, unless configured to do so, all incoming requests are proxied to the backend unchanged.
You can use a header rewrite to remove the port information from the X-Forwarded-For header so that only the IP addresses remain.
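Application Gateway performs this rewrite itself via a header rewrite rule; purely to illustrate the transformation (IPv4 only, hypothetical addresses), here is a small sketch of what the rewrite does to the header value:

```python
def strip_xff_ports(xff: str) -> str:
    """Strip the appended source ports from an X-Forwarded-For value.

    Application Gateway appends the client source port ("ip:port"); this
    keeps only the IPv4 address of each hop. IPv6 entries would need
    bracket-aware parsing and are not handled here.
    """
    ips = [entry.strip().split(":")[0] for entry in xff.split(",")]
    return ", ".join(ips)

print(strip_xff_ports("203.0.113.7:56234, 10.0.0.4:40001"))
# -> 203.0.113.7, 10.0.0.4
```

The backend application can then read the leftmost entry of the rewritten header as the original client IP.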