Matt Ouellette is a certified information technology professional residing in Southwest Michigan. His technology findings and advice can be found on his PacketPilot blog. Mr. Ouellette spent four years as an I.T. technician before stepping into a Network Engineer role at Bronson Health Group. Since completing his associate degree in Network Administration, Matt has taken a head-on approach to career enrichment, obtaining credentials such as CCNP, CCNA Voice, MCSA: Server 2008, and VCP5. This passion for continued learning allows him to deliver up-to-date, quality technical solutions.
Notes from MS Learn AZ-700 Module 6: Design and implement network security – Unit 2: Get Network Security Recommendations with Microsoft Defender for Cloud
Network security covers the technologies, devices, and processes that provide rules and configurations to protect the confidentiality, integrity, and availability (CIA) of networks and data. Every org, regardless of size, needs some form of network security
Notes from MS Learn AZ-700 Module 5: Load balance HTTP(S) traffic in Azure – Unit 6: Exercise – Create a Front Door for a Highly Available Web Application
Tasks (taken from MS Learn: Items without “Task” in front of them are personal additions)
Task 1: Create two instances of a web app
Search and select App Services in Azure Portal
Click Create
Choose or Create new under Resource Group
Enter Unique name under Instance Details
Choose a Runtime stack from dropdown
Choose a region from dropdown
Choose or create new under Windows Plan (Create new in this example)
Enter Unique Name
Click OK
Click Review + Create
Once validated click Create
Repeat for second App Service
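Not part of the MS Learn steps, but a rough Azure PowerShell (Az module) equivalent of Task 1; the resource group, plan names, and second region are assumptions for this sketch.
# Sketch: create two web app instances (names, regions, and plans are placeholders)
New-AzResourceGroup -Name "ContosoRG" -Location "eastus"
# App Service plan and first web app
New-AzAppServicePlan -ResourceGroupName "ContosoRG" -Name "ContosoPlan-East" -Location "eastus" -Tier "Standard"
New-AzWebApp -ResourceGroupName "ContosoRG" -Name "WebAppContoso-2001" -Location "eastus" -AppServicePlan "ContosoPlan-East"
# Second plan and web app in another region for redundancy
New-AzAppServicePlan -ResourceGroupName "ContosoRG" -Name "ContosoPlan-West" -Location "westus" -Tier "Standard"
New-AzWebApp -ResourceGroupName "ContosoRG" -Name "WebAppContoso-2002" -Location "westus" -AppServicePlan "ContosoPlan-West"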
Task 2: Create a Front Door for your application
Search and click Front Door and CDN profiles in Azure Portal
Click Create
Leave Quick Create selected
Click Continue to create a Front Door
Select (or create new) a Resource group from the dropdown (choose from dropdown in this example)
Enter unique name under Profile details
Enter unique name under Endpoint settings
Select “App services” from Origin type dropdown
Select Origin host name from dropdown
Click Review + Create
Once Validated click Create
Click Go to resource once deployment complete
Click Origin groups
Click created origin group (default-origin-group in this example)
Click Add an origin
Enter Unique name
Select Origin type from dropdown
Select Host name from dropdown
Click Add
Click Update
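Not from the exercise, but a minimal sketch of the same Quick Create result using the Az.Cdn PowerShell cmdlets; the profile/endpoint names and the Standard SKU are assumptions, and the origin group/route configuration handled by the portal wizard is omitted here.
# Sketch: Front Door Standard profile plus an endpoint (Az.Cdn module; names are placeholders)
New-AzFrontDoorCdnProfile -ResourceGroupName "ContosoRG" -Name "ContosoFrontDoor" -SkuName "Standard_AzureFrontDoor" -Location "Global"
New-AzFrontDoorCdnEndpoint -ResourceGroupName "ContosoRG" -ProfileName "ContosoFrontDoor" -EndpointName "contoso-endpoint" -Location "Global"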
Task 3: View Azure Front Door in action
Click Overview under the Front Door page
Copy Endpoint hostname and navigate to address
Search and click App Services in Azure Portal
Check box next to first App Service (WebAppContoso-2001)
Select Stop in menu bar
Select Yes to verify
Click refresh in menu bar
App should show status stopped
Go back to App browser tab and refresh
Page should appear the same (still served by the remaining healthy origin)
Go back to Azure and stop second webapp
Return to App browser tab – refresh – should show Error code as unavailable now that both are stopped
Notes from MS Learn AZ-700 Module 5: Load balance HTTP(S) traffic in Azure – Unit 5: Design and Configure Azure Front Door
Front Door is MS modern cloud Content Delivery Network (CDN) providing fast, reliable, secure access between users and apps. Delivered using MS global edge network with hundreds of global/local POPs close to both enterprise network and consumer end users.
Orgs have apps to make available to customers, suppliers, users, etc. They must be highly available. Additionally, they need quick response and security. Front Door has multiple SKUs to achieve this.
A secure, modern cloud CDN provides a distributed platform of servers, which minimizes latency. IT may want to combine the CDN with a web app FW to control HTTP/HTTPS traffic to and from target apps
Products available (Table below from MS Learn)
Azure Front Door comparison
Offered in 2 tiers
Standard
Premium
Combines the capabilities of Front Door (classic), Azure CDN Standard from Microsoft (classic), and Azure WAF into a single secure cloud CDN with intelligent threat protection
Notes from MS Learn AZ-700 Module 5: Load balance HTTP(S) traffic in Azure – Unit 4: Configure Azure Application Gateway
App Gateway has components that combine to route requests to a backend pool and verify backend server health. (Image taken from MS Learn)
Frontend configuration
App Gateway can have
Public IP
Private IP
Both
Backend configuration
Backend pool used to route requests to backend servers
Creating an empty backend pool and adding backend targets later is possible
Targets include
NICs
Public IPs
Private IPs
VM Scale Sets
Configure Health Probes
Azure App Gateway by default monitors back-end pool resources
Automatically removes unhealthy resources from pool
Continues monitoring unhealthy resources and adds them back to the pool once they return to healthy
By default, probes same port defined in back-end HTTP settings
Custom probe can be configured with custom port
Source IP addr depends on backend pool
If server addr is public, source is app gateways frontend public IP
If server addr is private, source is app gateway subnet private IP address space
Default health probe
App gateway uses a default health probe if no custom probes are configured
Monitoring via HTTP GET to IP or FQDN configured in back-end pool
If the back-end HTTP settings are configured for HTTPS, the default probe uses HTTPS
Example
App gateway set to use backend receiving HTTP on port 80
Default is every 30 seconds with a 30 second timeout
HTTP looking for response code 200-399
If the probe fails for a server in the pool, forwarding to that server stops
Forwarding resumes once the server returns a successful response
Default health probe settings (Table below from MS Learn)
Probe URL: <protocol>://127.0.0.1:<port>/. The protocol and port are inherited from the backend HTTP settings to which the probe is associated.
Interval: 30. The amount of time in seconds to wait before the next health probe is sent.
Time-out: 30. The amount of time in seconds the application gateway waits for a probe response before marking the probe as unhealthy. If a probe returns as healthy, the corresponding backend is immediately marked as healthy.
Unhealthy threshold: 3. Governs how many probes to send in case there’s a failure of the regular health probe. In the v2 SKU, the health probes wait the probe interval before checking again. The back-end server is marked unreachable after the consecutive probe failure count reaches the unhealthy threshold.
Probe intervals
All App Gateway instances probe the backend independently of each other
Same probe config applies to each App Gateway instance
If there are multiple listeners, each listener probes the backend independently
Custom health probe
Custom probes provide more granular health monitoring
Custom Probes
Custom hostname
URL path
Probe interval
Number of failed responses before unhealthy
Settings (Table below from MS Learn)
Name: Name of the probe. This name is used to identify and refer to the probe in back-end HTTP settings.
Protocol: Protocol used to send the probe. This must match the protocol defined in the back-end HTTP settings.
Host: Host name to send the probe to.
Path: Relative path of the probe. A valid path starts with ‘/’.
Port: If defined, used as the destination port. Otherwise it uses the same port as the HTTP settings it’s associated to. This property is only available in the v2 SKU.
Interval: Probe interval in seconds. This value is the time interval between two consecutive probes.
Time-out: Probe time-out in seconds. If a valid response isn’t received within this time-out period, the probe is marked as failed.
Unhealthy threshold: Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold.
Probe matching
Default: an HTTP(S) response code between 200-399 is healthy
Custom probes support two matching criteria to optionally modify default sense of “healthy”
HTTP response status code
Probe matching for user specified http code or range.
Comma-separated status codes or ranges supported
HTTP body match
Probe matching looking at HTTP body response matching user specified string
Looks only for presence of user specified string in body, not a full regular expression match.
Match criteria can be specified using New-AzApplicationGatewayProbeHealthResponseMatch cmdlet
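A minimal PowerShell sketch of the custom probe and match criteria described above; the probe name, host, and path are placeholder values.
# Sketch: custom health probe with match criteria (placeholder host/path)
$match = New-AzApplicationGatewayProbeHealthResponseMatch -StatusCode "200-399"
$probe = New-AzApplicationGatewayProbeConfig -Name "customProbe" -Protocol Http -HostName "contoso.com" -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3 -Match $match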
Configure listeners
Logical entity checking for incoming requests using port, protocol, host, IP Addr
When configured must enter values that match inbound request values on gateway
When creating App Gateway in portal you also create default listener by choosing the protocol and port
Choose whether or not HTTP2 is enabled
Can edit default listener (appGatewayHttpListener) or create new
Listener Types
Basic
All requests accepted and sent to back-end
Multi-site
Requests sent to different back-ends based on host header/host name
Must specify a host name that matches incoming request
Order of processing listeners
V1 SKU
Requests matched in order of rules and type of listener
V2 SKU
Multi-site processed before basic
Front-end IP address
Choose IP planned to associate with listener
Listener listens requests on this IP
Front-end port
Choose port (new or existing)
Any value from allowed range
Can be used for public or private facing listeners
Protocol
HTTP
Traffic between client/app gw unencrypted
HTTPS
TLS term or end-to-end TLS encryption
TLS terminates at app gw
Traffic between client and app gw encrypted
If end-to-end TLS desired – choose HTTPS and configure back-end HTTP setting
Configuring TLS term and end-to-end TLS encryption
Must add certificate to listener so gw can derive symmetric key
Symmetric key used to encrypt and decrypt gw traffic
GW certificate must be in Personal Information Exchange (PFX) format
Allows export of private key for gw to use for encrypt/decrypt
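A hedged sketch of attaching a PFX certificate for TLS termination and referencing it from an HTTPS listener; the file path, password, and the $fip/$port443 frontend objects are assumptions for this example.
# Sketch: PFX certificate for TLS termination (path/password are placeholders)
$pfxPassword = ConvertTo-SecureString -String "P@ssw0rd!" -AsPlainText -Force
$cert = New-AzApplicationGatewaySslCertificate -Name "appGwCert" -CertificateFile "C:\certs\contoso.pfx" -Password $pfxPassword
# Reference the certificate from an HTTPS listener ($fip and $port443 assumed created earlier)
$httpsListener = New-AzApplicationGatewayHttpListener -Name "httpsListener" -Protocol Https -FrontendIPConfiguration $fip -FrontendPort $port443 -SslCertificate $cert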
Redirection Overview
App Gateway can redirect traffic
GW has generic redirection allowing for received traffic on one listener to send to another or external site
Simplifies app config and optimizes resource usage
Types
301 Perm redirect
302 Found
303 See Other
307 Temp redirect
Capabilities
Global Redirection
From one listener to another on the GW
Enables HTTP to HTTPS on a site
Path-based Redirection
Enables HTTP to HTTPS only on specific site area
Redirect to external site
Requires new redirect config object
This specifies the target listener or external site
Element also supports options for appending URI path and query to redirected URL
Attached to source listener via new rule
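A hedged sketch of the redirect configuration object and the rule that attaches it to a source listener; the listener variables are assumed from earlier steps and the priority value is an arbitrary example.
# Sketch: permanent HTTP-to-HTTPS redirect attached to the source (HTTP) listener
$redirect = New-AzApplicationGatewayRedirectConfiguration -Name "httpToHttps" -RedirectType Permanent -TargetListener $httpsListener -IncludePath $true -IncludeQueryString $true
$redirectRule = New-AzApplicationGatewayRequestRoutingRule -Name "redirectRule" -RuleType Basic -HttpListener $httpListener -RedirectConfiguration $redirect -Priority 100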
App Gateway request routing rules
Request Routing Rule is key component of App GW as it determines how traffic is routed on the listener
Rule binds listener, backend pool, and backend HTTP settings
When a request is accepted, the routing rule forwards it to the backend or redirects it
If forwarded – routing rule defines which pool to send to
Also determines if headers need to be rewritten
A listener can be associated with only one rule
Types
Basic
All request associated to listener forwarded to backend using associated HTTP setting
Path-based
Allows routing request on listener to specific backend based on URL
If path of URL matches pattern it is routed per rule
Applies to path pattern only for URL path not parameters
If URL path does not match route to default backend and HTTP settings
HTTP settings
App GW routes to backend using port number, protocol, and other detailed settings
Port and Proto used in HTTP settings determine whether traffic between app gw and backend encrypted or not
Additional Uses
Determine whether user session kept on same server using cookie-based session affinity
Gracefully remove backend members using connection draining
Associate custom probe for backend monitoring
Set request timeout interval
Override host name and path
Provide one-select ease to specify settings for backend
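A short sketch of a backend HTTP setting that combines several of the options above (port/protocol, cookie-based affinity, request timeout, custom probe); the name and the $probe variable carried over from the probe sketch are assumptions.
# Sketch: backend HTTP setting tying together port, protocol, affinity, timeout, and probe
$httpSetting = New-AzApplicationGatewayBackendHttpSetting -Name "appGwHttpSetting" -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 30 -Probe $probe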
Notes from MS Learn AZ-700 Module 5: Load balance HTTP(S) traffic in Azure – Unit 2: Design Azure Application Gateway
Azure Application Gateway processes traffic to web apps on a pool of servers. Includes load balancing HTTP and inspecting traffic using web app FW. Includes encrypting traffic between users and app gateway, and traffic between app servers and app gateway.
App Gateway provides load balancing HTTP traffic and web app firewall. Provides support for TLS/SSL encryption between users and app gateway, app servers and app gateway.
Uses round-robin process for load balancing to back-end pool. Session stickiness ensures requests in same session routed to same back-end server. Important for e-commerce apps
Features
Support for HTTP/HTTPS/HTTP2/WebSocket protocols
Web App FW to protect against web app vulnerabilities
End-to-end request encryption
Autoscaling to dynamically adjust capacity as web traffic load change
Connection draining allows graceful removal of back-end pool members during planned updates
Public or Private IP
Listeners
App Gateway routes to backend pool per rule
Backends
VM
VM Scale Set
IP Address
App Service
Health Probes to monitor Backends
Application Gateway components
Front-end IP
Public
Private
Both
Cannot have more than one of each
Listener
One or more to receive incoming request
Accepts traffic on specified combo of
Protocol
Port
Host
IP
Routes request to back-end pool of servers based on routing rules
Basic or Multisite
Basic
Only routes based on path in URL
Multisite
Can also route using hostname of URL
Can handle TLS/SSL certificates
Routing Rules
Binds listener to back-end pool
Specifies how to interpret hostname and path elements
Has associated set of HTTP settings
Indicate whether/how traffic is encrypted between App Gateway and back-end servers
Additional config info
Protocol
Session stickiness
Connection draining
Request timeout
Health Probes
Load balancing in Application Gateway
App Gateway automatically balances requests in each back-end pool using round-robin
Works with OSI L7 routing based on hostnames and paths
In comparison, others such as Azure Load Balancer are at L4 based on IP addr of target
Possible to configure session stickiness
Web Application Firewall
Web App Firewall (WAF) optional
Handles incoming requests before they reach a listener
Checks for common threats based on Open Web Application Security Project (OWASP)
Common threats
SQL-Injection
Cross-site scripting
Command injection
HTTP request smuggling
HTTP response splitting
Remote file inclusion
Bots
Crawlers
Scanners
HTTP proto violations/anomalies
OWASP defines set of rules for detection
Called Core Rule Set (CRS)
CRS under constant review
WAF supports
CRS 3.2
CRS 3.1 (default)
CRS 3.0
CRS 2.2.9
Can customize FW for elements in request to examine and limit size of messages
Back-end Pools
Collection of web servers
VMs
VM Scale-set
App Service
On-prem servers
Each has associated load balancer
Provide IP or Name of each webserver when configuring
All servers in pool should have identical configurations
If using TLS/SSL – HTTP setting references the certificate used to auth back-end pool servers
Gateway re-encrypts using certificate before sending to server
If using App Service, no need to install certificates in the Gateway
Communication automatically encrypted – App Gateway trusts because Azure manages them
App Gateway uses rule to specify how to direct messages received to back-end pool.
If using TLS/SSL must configure rule
Servers expect traffic through HTTPS
Certificate used to encrypt traffic and auth connection to server
Application Gateway Routing
When gateway routes request it uses rule set configured for the gateway to determine path
Path-Based routing
Sends requests with diff URL to diff pools
Multiple-site routing
Configures more than one web app on same gateway
Register multiple DNS names (CNAME) for IP of app gateway specifying name of each site
App gateway uses separate listeners for requests to each site
Listener passes request to diff rule for routing to diff back-end pools
Useful for supporting multitenant apps
Features
Redirection
Used to redirect to another site or from HTTP to HTTPS
Rewrite HTTP Headers
HTTP headers allow client and server to pass parameter info with request or response
Custom error pages
App gateway allows custom error pages
TLS/SSL Termination
Offloads CPU-intensive termination from servers
No need to install certificates/configure TLS/SSL on server
If end-to-end encryption needed App gateway can decrypt on gateway using private key, then re-encrypt with public key of service in back-end pool
Traffic enters gateway through front-end port
Possible to open many ports
Listener first thing traffic meets entering gateway through port
Listener set up to listen for specific host name and port on an IP
Listener can use TLS/SSL cert to decrypt
Then uses rule defined to direct request to back-end pool
Exposure of web app through gateway = no direct connection of servers to web
Exposes only port 80 or 443 on gateway
Health Probes
Determine which servers are available
App gateway uses the probe to send a request
Server returns HTTP response with status code between 200 and 399 as healthy
Default probe waits 30 seconds if a custom probe is not created
Autoscaling
Supported and scales up/down based on traffic load patterns
Removes requirement to choose deployment size or instance count during provisioning
WebSocket and HTTP/2 traffic
Native support for these
Enables full duplex communication between server and client over a long-running TCP connection
More interactive between server and client and can be bidirectional without polling as required in HTTP-based implementations
These have low overhead and can reuse the same TCP connection for multiple requests/responses
Notes from MS Learn AZ-700 Module 4: Load balance non-HTTP(S) traffic in Azure – Unit 6: Exercise – Create a Traffic Manager Profile Using the Azure Portal
Tasks (taken from MS Learn: Items without “Task” in front of them are personal additions)
Task 1: Create the web apps
Search and select App Services in Azure Portal
Click Create
Select or Create New Resource Group (Create new in this example)
Enter Unique Name
Click OK
Enter Name in box under Instance Details
Select Runtime stack from dropdown
Select Windows Plan or Create new (create new in this example)
Enter unique name
Click OK
Click Next : Deployment >
Click Next : Networking >
Click Next : Monitoring >
Toggle Enable Application Insights to No
Click Review + create
Click Create
Repeat for second webapp changing the location
Task 2: Create a Traffic Manager profile
Search and click Traffic Manager Profiles in Azure Portal
Click Create
Enter unique Name
Select Routing Method from dropdown
Select Resource Group from dropdown
Click Create
Click Refresh
Task 3: Add Traffic Manager endpoints
Choose the newly created TM profile
Click Endpoints under settings
Click Add
Enter unique name
Choose Target resource type from dropdown
Choose Target resource from dropdown
Click Add
Repeat Add steps above for failover (second) endpoint
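Not part of the exercise, but a rough Az.TrafficManager PowerShell equivalent of Tasks 2 and 3; the profile name, DNS prefix, and the web app lookups are assumptions for this sketch.
# Sketch: priority-routed Traffic Manager profile with a primary and a failover endpoint
$tmProfile = New-AzTrafficManagerProfile -Name "ContosoTM" -ResourceGroupName "ContosoRG" -TrafficRoutingMethod Priority -RelativeDnsName "contoso-tm-demo" -Ttl 30 -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"
# Target resources: the two web apps created in Task 1 (names are placeholders)
$webApp1 = Get-AzWebApp -ResourceGroupName "ContosoRG" -Name "WebAppContoso-2001"
$webApp2 = Get-AzWebApp -ResourceGroupName "ContosoRG" -Name "WebAppContoso-2002"
New-AzTrafficManagerEndpoint -Name "primaryEndpoint" -ProfileName "ContosoTM" -ResourceGroupName "ContosoRG" -Type AzureEndpoints -TargetResourceId $webApp1.Id -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name "failoverEndpoint" -ProfileName "ContosoTM" -ResourceGroupName "ContosoRG" -Type AzureEndpoints -TargetResourceId $webApp2.Id -EndpointStatus Enabled -Priority 2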
Notes from MS Learn AZ-700 Module 4: Load balance non-HTTP(S) traffic in Azure – Unit 5: Explore Azure Traffic Manager
Azure Traffic Manager is DNS based. Allows distribution of traffic to public facing apps across global Azure regions. Also provides public endpoints with high availability and fast response.
Uses DNS to direct client requests to service endpoint based on traffic-routing. Also has health monitor for endpoints. Endpoint can be internet-facing service inside or outside Azure. Offers range of traffic-routing methods. Endpoint monitoring options suit different app requirements and automatic failover. Resilient to failure even of an entire region
Key Features
(Table below from MS Learn)
How Traffic Manager Works
Enables control of traffic distribution across app endpoints. Endpoint = internet-facing hosted inside/outside Azure
Traffic Manager Key Benefits
Distribution of traffic per one of traffic-routing methods
Continuous monitoring of endpoints and auto failover when endpoints fail
Client must first resolve DNS name to IP. Client connects to IP of services
Traffic Manager utilizes DNS to direct clients to endpoints based on the rules of the traffic-routing method.
Client connects to selected endpoint directly
Traffic Manager isn’t proxy or gateway, does not see traffic between client and service
DNS level, which is the application layer (L7)
Traffic Manager example deployment
Example Steps
Deploy 3 instances of service.
DNS names are different
Create Traffic Manager Profile
Configure to use Performance traffic-routing method across the 3 endpoints
Configure vanity domain name
Points to traffic manager profile name using DNS CNAME record
Traffic Manager client usage example
Client request follows example steps
Client queries DNS to configured recursive DNS service to resolve Traffic Manager profile name
Recursive DNS service finds the name servers for the vanity domain, which return the CNAME record pointing to the Traffic Manager profile
Recursive DNS service then finds the trafficmanager.net name servers, which are provided by the Traffic Manager service
Traffic Manager name server choose endpoint based on
Configured state of endpoint
Health of endpoint (based on health checks)
Chosen traffic-routing method
Chosen endpoint CNAME returned
Recursive lookup for A record of primary domain
Client receives DNS results to connect to given IP directly
Recursive DNS caches responses it receives. DNS on client also caches results.
TTL on caches can be as low as 0 sec and as high as 2,147,483,647 sec (per RFC-1035)
Search and Select Traffic Manager Profile from Azure Portal
Select Create
Enter values
Name – Unique name for profile
Routing Method
Subscription
Resource Group – Existing or create new
Resource Group Location – Azure TM is global. This refers to location of selected RG
Click Create
Add endpoints to TM profile
From portal select All Resources
Select TM profile
Under Settings click Endpoints > Add
Enter required values
Type
Azure endpoint
External endpoint
Nested endpoint
Name – Unique to endpoint
Target resource type (for Azure endpoints only)
Cloud service
App Service
App Service slot
Public IP address
Target resource (for Azure and Nested endpoints only)
Target Service
IP address
Profile
FQDN or IP (for External endpoints only)
Specify FQDN or IP for endpoint
Priority
If 1 all traffic goes to this endpoint when healthy
Minimum child endpoints (nested only)
Specify min number of endpoints available in child TM profile for it to receive traffic
If threshold not reached, endpoint considered degraded
Custom Header (optional)
Configure custom headers for the endpoint with format
host:domain.com,customheader:name
Max pairs = 8
HTTP and HTTPS
Overrides settings configured in profile
Add as disabled (optional)
Disabling an endpoint in TM can be useful to temporarily remove traffic from an endpoint that is in maintenance or being redeployed
Once running can be re-enabled
Click Add
If adding failover endpoint for another region, add endpoint for said region. App target in other region has priority of 2
When adding endpoints to TM profile status is checked
Once validated Monitor status becomes “Online”
Configuring endpoint monitoring
Azure Traffic Manager has built-in endpoint monitoring and auto endpoint failover to aid in delivering HA apps resilient to endpoint failure, including Azure region failures
Example Steps
Open Config page for TM profile
Under Endpoint Monitor settings specify settings
(Table below from MS Learn)
Protocol: Choose HTTP, HTTPS, or TCP as the protocol that Traffic Manager uses when probing your endpoint to check its health. HTTPS monitoring doesn’t verify whether your TLS/SSL certificate is valid; it only checks that the certificate is present.
Port: Choose the port used for the request.
Path: Valid only for the HTTP and HTTPS protocols, for which specifying the path setting is required. Providing this setting for the TCP monitoring protocol results in an error. For HTTP and HTTPS, give the relative path and the name of the webpage or the file that the monitoring accesses. A forward slash (/) is a valid entry for the relative path and implies that the file is in the root directory (default).
Custom Header settings: Lets you add specific HTTP headers to the health checks that Traffic Manager sends to endpoints under a profile. Custom headers can be specified at the profile level (applicable to all endpoints in that profile) and/or at the endpoint level (applicable only to that endpoint). You can use custom headers for health checks of endpoints in a multitenant environment, so requests can be routed correctly to their destination by specifying a host header. You can also add unique headers to identify Traffic Manager originated HTTP(S) requests and process them differently. You can specify up to eight header:value pairs separated by commas. Example: header1:value1, header2:value2.
Expected Status Code Ranges: Lets you specify multiple success code ranges in the format 200-299, 301-301. If these status codes are received as a response from an endpoint during a health check, Traffic Manager marks that endpoint as healthy. You can specify a maximum of eight status code ranges. Applicable only to the HTTP and HTTPS protocols and to all endpoints. Set at the Traffic Manager profile level; by default the value 200 is defined as the success status code.
Probing interval: Specifies how often an endpoint is checked for its health by a Traffic Manager probing agent. You can specify two values: 30 seconds (normal probing) and 10 seconds (fast probing). If no value is provided, the profile defaults to 30 seconds. Visit the Traffic Manager Pricing page to learn more about fast probing pricing.
Tolerated number of failures: Specifies how many failures a Traffic Manager probing agent tolerates before marking an endpoint as unhealthy. Its value can range between 0 and 9. A value of 0 means a single monitoring failure can cause that endpoint to be marked as unhealthy. If no value is specified, it uses the default value of 3.
Probe timeout: Specifies the amount of time the Traffic Manager probing agent should wait before considering a health probe check to an endpoint a failure. If the probing interval is set to 30 seconds, the timeout can be set between 5 and 10 seconds (default 10). If the probing interval is set to 10 seconds, the timeout can be set between 5 and 9 seconds (default 9).
Click Save
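A hedged PowerShell sketch of adjusting the monitoring settings from the table above on an existing profile; the profile name and the chosen values (HTTPS on 443, fast probing) are only illustrative.
# Sketch: tune endpoint monitoring on an existing Traffic Manager profile
$tmProfile = Get-AzTrafficManagerProfile -Name "ContosoTM" -ResourceGroupName "ContosoRG"
$tmProfile.MonitorProtocol = "HTTPS"
$tmProfile.MonitorPort = 443
$tmProfile.MonitorPath = "/health"
$tmProfile.MonitorIntervalInSeconds = 10         # fast probing
$tmProfile.MonitorTimeoutInSeconds = 9           # must be 5-9 when the interval is 10
$tmProfile.MonitorToleratedNumberOfFailures = 3
Set-AzTrafficManagerProfile -TrafficManagerProfile $tmProfile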
How endpoint monitoring works
When monitoring is set as HTTP or HTTPS TM probing makes GET requests to endpoint.
Endpoint healthy if probe receives a 200-OK response or any response configured in Expected status code ranges
When monitoring protocol is TCP TM probe creates TCP connection requests using port configured
If endpoint responds with establishment it’s considered a success
TM probe resets TCP connection
If the response is a different value, or no response is received within the timeout period, the probe reattempts according to the Tolerated Number of Failures setting
No reattempts if set to 0
If the consecutive failure count exceeds that value, the endpoint is marked unhealthy
In all cases, TM probes from multiple locations
Consecutive failures are determined within each region
Endpoints therefore receive health probes from TM with higher frequency than the configured probing interval
For HTTP/HTTPS, common practice is to implement a custom page within the app as the health-check endpoint
All endpoints in TM profile share monitoring settings – if different needed create nested TM profiles
Notes from MS Learn AZ-700 Module 4: Load balance non-HTTP(S) traffic in Azure – Unit 3: Design and Implement Azure Load Balancer Using the Azure Portal
Operates at L4 of OSI model. Single point of contact for clients. Azure Load Balancer distributes inbound flows from the front end to the backend pool. Follow load-balancing rules and health probes. Backend pools can be Azure VMs or instances in a VM scale set.
Choosing a load balancer type
Two types of load balancers
Public
Can provide outbound connections for VMs inside VNet
Connections via translating private IP to public IP
Used to distribute client traffic from internet across VMs
Internet traffic source examples
Browsers
Mobile Apps
Etc
Internal
Use where private IPs are needed at the frontend only
Use to load balance traffic from internal Azure resources to other resources in VNet
Frontend can also be accessed from on-prem in hybrid scenario
Azure Load Balancer and Availability Zones
Azure services supporting availability zone categories
Zonal: Resources assigned to specific zone
Zone-redundant: Resources replicated or distributed across zone automatically. Replicated across 3 zones
Nonregional: Service always available from Azure geographies, resilient to zone-wide outage and region-wide outage
Load Balancer supports availability zones.
Use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distributing them across, availability zones
Load Balancer can be
Zone Redundant
In a region with availability zones, a Standard Load Balancer can be zone-redundant
Single Frontend IP survives zone failure
Frontend IP can be used to reach all backend pool members in any zone
One or more availability zone can fail and data path survives if one zone remains
Zonal
Frontend guaranteed to a single zone
Data path unaffected by failure in zone it’s guaranteed in
Can expose frontend IP per availability zone
Using a zonal frontend directly for load-balanced endpoints within each zone is supported
Use to expose per zone load-balanced endpoints to individually monitor each zone
Public endpoints: integrate them with DNS load-balancing like Traffic Manager and use single DNS name
For public frontend add zones parameter to public IP
IP is frontend IP config used by respective rule
Internal frontend add zones parameter to internal load-balancer frontend IP config.
Guarantees IP address in subnet to specific zone
Nonzonal
Uses a “no-zone” frontend (public or internal IP not pinned to a specific zone); this option does not guarantee zone redundancy
Selecting an Azure Load Balancer SKU
Two SKUs Available
Basic
Standard
SKUs differ in
Scope/Scale
Features
Cost
Any Basic scenario also possible under Standard
(Table below from MS Learn)
MS recommends Standard
Standalone VM, Availability sets, VM scale sets connect to only one SKU (not both)
Load balancer and public IP addr SKU must match when used together
SKUs are not mutable – cannot change the SKU of an existing resource
Creating and configuring an Azure load balancer
Several tasks for successful creation
Create load balancer
Example – Public (external) load balancer in Basic SKU
Search and click Load Balancer in Azure Portal
Click Create and enter the following as required
Subscription
Resource Group
Name
Region
Type – Internal in this example
Internal (private)
Public (external)
SKU – Standard in this example
Standard – Use for production
Basic – Use for testing/eval and training
Tier (only in Standard)
Regional – within a region
Global – across regions
Public IP Address
New
Existing
Can specify name, dynamic/static
Can add IPv6 addr
Click Review + Create
Click Create once validated
Select Go to resource
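Not part of the portal steps, but a rough PowerShell sketch of creating a public Standard load balancer with a new static public IP; all names and the region are placeholders.
# Sketch: public Standard load balancer with a new frontend IP
$publicIp = New-AzPublicIpAddress -ResourceGroupName "ContosoRG" -Name "myLBPublicIP" -Location "eastus" -Sku Standard -AllocationMethod Static
$frontend = New-AzLoadBalancerFrontendIpConfig -Name "myFrontend" -PublicIpAddress $publicIp
$lb = New-AzLoadBalancer -ResourceGroupName "ContosoRG" -Name "myLoadBalancer" -Location "eastus" -Sku Standard -FrontendIpConfiguration $frontend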
Add backend pool
Example Steps
From Azure portal select All resources
Choose your new load balancer
Choose backend pools under settings
Choose Add
Enter Name
Virtual Network – specify name of VNet resources are located
Associate to – Associate backend pool to 1 or more VM or VM scale set
IP Version – IPv4 or IPv6
Can add existing VMs to backend pool or create and add later
Click Add
Add VM to backend pool
Example Steps
On Backend pools select the new backend pool
Virtual network – Specify name of VNet backend resources located
Associated to – backend pool with one or more VMs or VM scale sets
IP Version – IPv4 or IPv6
Click Add
Click Save
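A hedged sketch of the backend pool steps above: add the pool to the existing load balancer, then place an existing VM’s NIC into it (the NIC name is a placeholder).
# Sketch: add a backend pool and associate a VM NIC with it
$lb = Get-AzLoadBalancer -ResourceGroupName "ContosoRG" -Name "myLoadBalancer"
$lb | Add-AzLoadBalancerBackendAddressPoolConfig -Name "myBackendPool" | Set-AzLoadBalancer
# Re-read the load balancer so the new pool has its resource ID, then attach a NIC to it
$lb = Get-AzLoadBalancer -ResourceGroupName "ContosoRG" -Name "myLoadBalancer"
$pool = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "myBackendPool"
$nic = Get-AzNetworkInterface -ResourceGroupName "ContosoRG" -Name "myVM1-nic"
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($pool)
$nic | Set-AzNetworkInterface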
Add health probes
Example Steps
On backend pools page click Health probes under settings
Click add on Health Probes Page
Name – Unique name for probe
Protocol – TCP or HTTP
Port – Dest port (default 80)
Interval – In seconds (default 5)
Unhealthy threshold – # of probe failures before VM considered unhealthy (default 2)
Click Add
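A short sketch of the health probe step using the defaults listed above (TCP on port 80, 5-second interval, 2 failures); names are placeholders.
# Sketch: TCP health probe with the portal defaults described above
$lb = Get-AzLoadBalancer -ResourceGroupName "ContosoRG" -Name "myLoadBalancer"
$lb | Add-AzLoadBalancerProbeConfig -Name "myHealthProbe" -Protocol Tcp -Port 80 -IntervalInSeconds 5 -ProbeCount 2 | Set-AzLoadBalancer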
Add a load balancer rule
Rule distributes inbound traffic across the instances in the backend pool. Only healthy instances receive traffic
Example Steps
On Health probe page of load balancer select Load balancing rules under settings
Click Add
Name – Unique name
IP Version – IPv4 or IPv6
Frontend IP – Select existing public-facing IP of load balancer
Protocol – TCP or UDP
Port – default is 80
Backend port – can choose to route to backend VM on different port
Backend pool – Choose existing backend pool. VMs are the target for LB traffic
Health probe – Choose existing or create new
Session persistence – Specifies traffic processed by same VM in session
None
Successive request handled by any VM
Client IP
Successive requests from client IP handled by same VM
Client IP and protocol
Successive request from same client IP and Proto handled by same VM
Idle timeout (in minutes) – Time to keep a TCP/HTTP connection open without relying on clients for keep-alive messages. Default 4 minutes (minimum setting); 30 minutes (maximum setting)
Floating IP
Enabled
Azure changes IP addr mapping to Frontend IP of load balancer
Disabled
Azure exposes traditional load balancing IP addr mapping
Click Add
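A hedged sketch of the load-balancing rule tying together the frontend, backend pool, and probe from the earlier sketches; port 80, the 4-minute idle timeout, and Default distribution (session persistence: None) mirror the values above.
# Sketch: load-balancing rule for TCP port 80 across the backend pool
$lb = Get-AzLoadBalancer -ResourceGroupName "ContosoRG" -Name "myLoadBalancer"
$frontend = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "myFrontend"
$pool = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "myBackendPool"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "myHealthProbe"
$lb | Add-AzLoadBalancerRuleConfig -Name "myLBRule" -Protocol Tcp -FrontendPort 80 -BackendPort 80 -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe -IdleTimeoutInMinutes 4 -LoadDistribution Default | Set-AzLoadBalancer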
Test load balancer
Copy/Paste public IP into browser to receive a response from a VM.
Refresh multiple times to make sure you get other VM responses