Wednesday, July 12, 2017

RDS modern infrastructure and HTML5 for RDS announced!

Within the RDS MVP group we had already discussed this, and the information is now public! I’m super excited about this step!

RDS modern infrastructure is announced!

“…The RDS modern infrastructure components provide functionality that extends the current RD Web Access, RD Gateway, and RD Connection Broker services, as well as adding a new RD Diagnostics service. The RDS modern infrastructure components are implemented as .NET Web Services enabling a wide variety of deployment options. For example:

Both single and multi-tenant deployments, making smaller deployments (less than 100 users) much more economically viable, while providing the necessary security of tenant isolation

Deployments on Microsoft Azure, on-premises equipment, and hybrid configurations

Virtual machines or Azure App Services can be used for deployment

Azure App Services, part of Azure’s Platform-as-a-Service environment, simplifies the deployment and management of the RDS modern infrastructure because it abstracts the details of the virtual machines, networking, and storage. This simplifies administrative tasks like configuring scale out/in of a service to dynamically and automatically handle fluctuating usage patterns…”

And also, HTML5 for Remote Desktop Services is coming!

“…The web client, combined with the other RDS modern infrastructure features, allows many Windows applications to be easily transformed into a Web-based Software-as-a-Service (SaaS) application without having to rewrite a line of code…”

More details will be shared in today’s session at Microsoft Inspire.

Source: https://blogs.technet.microsoft.com/enterprisemobility/2017/07/12/today-at-microsoft-inspire-next-generation-architecture-for-rds-hosting/


Friday, July 7, 2017

Real time logging your Microsoft RDS environment using PowerShell

All Remote Desktop Services event logs in a single pane? Every RDS event from machine A and B written in the last 10 minutes? Listening to events from RDS event logs in real time from all RDS-related servers in your deployment?

Jason Gilbertson, a Technical Advisor at Microsoft who works closely with the RDS product team, wrote a single PowerShell script that does all of the above, and much more!

Some of the features:

- Export logs locally or remotely to .csv format on local machine grouped by machine name

- Convert *.evt* files to .csv

- View and manage 'debug and analytic' event logs

- Listen to event logs real-time from local or remote machines displaying color coded messages in console

Although the script is very multifunctional, it has specific parameters for RDS that allow you to collect RDS-related event logs from all servers that are running RDS roles. So, for example, you can combine all event logs from your RD Connection Broker, RD Web Access, RD Gateway and RD Session Host servers in a single view.

The script also exports to CSV which allows you to feed the exports into Excel Graphs or PowerBI environments for further analysis.
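As a quick illustration of that last point: once you have an export, you can already slice it with standard PowerShell before it ever reaches Excel or PowerBI. This is just a sketch; the file name and column names (MachineName) are assumptions, so check the headers of your actual export first.

```powershell
# Hypothetical example: summarize an exported RDS event CSV per machine.
# File name and column names are assumptions -- verify them against the
# headers that event-log-manager.ps1 actually writes in your environment.
Import-Csv .\rds-events.csv |
    Group-Object -Property MachineName |
    Sort-Object -Property Count -Descending |
    Format-Table Name, Count -AutoSize
```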

A couple of examples:

Query RDS event logs for the last 10 minutes on a remote RD Connection Broker server:
PS C:\>.\event-log-manager.ps1 -rds -minutes 10 -Machines rdcb-01


Below is what the command outputs to CSV:


Example command to enable ‘debug and analytic’ event logs for 'rds' event logs and 'dns' event logs:
PS C:\>.\event-log-manager.ps1 -enableDebugLogs -eventLogNamePattern dns -rds -machines rdcb-01


Below is what the command outputs to CSV:


Example command to listen to multiple RD Gateway Servers for all event logs related to Remote Desktop Services and get live results:
PS C:\> .\event-log-manager.ps1 -listen -rds -machines RDGW-01, RDGW-02

Below is a sample output


These were only a few RDS related examples, but the script Jason created has awesome capabilities! It’s available on TechNet Gallery here: https://gallery.technet.microsoft.com/Windows-Event-Log-ad958986

Tuesday, June 6, 2017

Presented a session at E2EVC in Orlando, Florida on RDS, ARM, JSON and stroopwafels!

Last week, right after Citrix Synergy, Alex Cooper hosted yet another awesome edition of the Experts 2 Experts Virtualization conference (E2EVC) in Orlando, Florida! If you’re not familiar with the event, please check out e2evc.com. It’s a vendor-neutral virtualization conference focusing on everything related to End User Computing, covering topics like RDS, VDI, RemoteApp, Application Virtualization and much more. Sessions are presented by the community, and because of the vendor-neutral approach, you’ll see a good mix of sessions related to Microsoft, Citrix, VMware, Parallels and many other products as well.

This is what Alex says about E2EVC;

"...E2EVC Virtualization Conference Events is a series of worldwide non-commercial, virtualization community Events. Our main goal is to bring the best virtualization experts together to exchange knowledge and to establish new connections. E2EVC is crammed with presentations, Master Classes and discussions delivered by both virtualization vendors product teams and independent experts. Over 50 of the best virtualization community experts present their topics..."

Last week the Orlando edition was on the agenda. I presented a session on Azure Resource Manager, JSON templates and doing a fully automated deployment of RDS running on Azure IaaS. The session was entitled:

Grab a Stroopwafel while we watch ARM do an automated RDS deployment in Azure IaaS

The idea behind the session was to perform the ARM deployment live on stage while enjoying a stroopwafel :) And so, I actually brought stroopwafels for the entire audience. The deployment finished successfully within 31 minutes. After the deployment was completed I did a demo of the end result: an entire HA RDS deployment running on Azure IaaS, including Load Balancing, SQL Server, SSL certificates, publishing RemoteApps, branding RD Web Access, configuring RD Gateway policies and much more!
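For those who want to try something similar: a deployment like this is typically kicked off with just a few cmdlets from the (2017-era) AzureRM PowerShell module. The resource group, location and file names below are placeholders, not the ones from my session.

```powershell
# Sketch: kick off an ARM template deployment (AzureRM module, 2017-era).
# Resource group name, location and template file names are placeholders.
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "RG-RDS-Demo" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-RDS-Demo" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json `
    -Verbose
```

The -Verbose switch is what makes a live demo like this fun to watch: it prints each resource as ARM provisions it.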

Thanks, everyone who attended my session! Thanks, Alex for hosting an awesome community event, and thanks to the all the sponsors including Nvidia, Citrix and ControlUp!

Start of the session while all attendees enjoyed a stroopwafel :)

2 slides from the deck I presented to give you an idea about what the JSON template creates.

E2EVC will publish the recording of the session on their YouTube channel. If you have questions about the content or need help with creating JSON templates, feel free to reach out!

Thursday, April 20, 2017

Dude, where’s my OneDrive for Business Sync in RDS? FSLogix to the rescue!

OneDrive for Business sync and cache inside RDS & VDI environments: what are the options? What’s supported? Here’s what you need to know!



Introduction
Office 365 is a widely adopted SaaS offering these days. As part of Office 365, Skype for Business and OneDrive for Business are very commonly used products in the suite. In many architected solutions, Office 365 ProPlus is published as one of the application suites using Remote Desktop Services. These can vary from shared sessions on an RD Session Host to dedicated virtual machines on a Virtual Desktop Infrastructure, and both are capable of publishing either a Full Desktop or separate RemoteApps. This allows users to access the full Office 365 suite, including Visio, Project et cetera, on any device and from any location.

The Challenge
Publishing Office 365 as described above, however, results in an interesting challenge: what about OneDrive for Business access? For many years, users have been provided access to a Home Drive, a space where they can store personal data and files. Typically, this was a drive mapping (most of the time mapped as the H: drive) pointing to a share on a file server. With Office 365, users have access to OneDrive for Business: the same space where they can store personal data and files, this time however hosted in the Cloud and accessible on any device at any time. A great solution offering a lot of flexibility to the end user.

Where a classic Home Drive is typically hosted on a file server close to the client, OneDrive for Business is hosted in the public Cloud. This means that the network connection (bandwidth, latency, packet loss) now suddenly plays an important role in the overall performance of working with these files. To overcome this issue, and to also allow offline access, OneDrive for Business comes with synchronization. This basically means data stored in OneDrive for Business is also cached on the local client and synced to the Cloud, a process that runs in the background and is transparent to the end user.

For a user’s personal device, this sync process is great! But what about an RDS or VDI hosted solution? How do we make sure users have access to OneDrive for Business from their hosted Remote Desktop or RemoteApps? Can we provide the home-drive-like experience that users are used to? And more importantly, can we have the OneDrive for Business cache inside those environments? What are the options?

Discussing the options

- OneDrive for Business and Roaming Profiles
Although Roaming Profiles is an ancient solution to allow users to roam their profile & data across multiple RD Session Host Servers, I still see it being used in older environments. With a roaming profile, a user has a profile stored centrally on a file server, and during logon and logoff the delta of that profile is synced with the locally cached copy of the profile. If we were to configure OneDrive for Business in such an environment, the OneDrive for Business cache would be synced back and forth for each user at logon and logoff. Depending on the amount of data, this kills the user experience. Redirecting AppData? Don’t even go there :)

- OneDrive for Business and User Profile Disk
User Profile Disk (UPD) was introduced in Windows Server 2012. UPD also allows users to roam their profile across multiple RD Session Host Servers. This time, however, not by syncing the delta to and from a file server, but by mounting a dedicated .VHDX file per user, stored on a file server, that contains the entire user’s profile. To accomplish this, UPD makes sure a mount path is created under C:\users\<username>. Because of this, any application that writes to and reads from the user’s profile ‘thinks’ it’s accessing a locally stored profile, but it is in fact a mounted file. Although this technology is fully transparent for most applications, there are important exceptions. UPD mounts the .VHDX with a symbolic link. The catch here is that Windows won’t send file change notifications. As a result, the OneDrive for Business cache won’t know that files have changed and will get out of sync.

- OneDrive for Business drive mapping
A workaround used in some environments is to create a drive mapping pointing directly to OneDrive for Business in the Cloud. This workaround simulates a home-drive-like experience because the end user is presented with an H: mapping. However, since this is a WebDAV connection, the experience is not great: users will see delays when opening and saving files.

- Direct access to OneDrive for Business from within applications
Office 365 and Pro Plus have obviously also been improving over time, now allowing users to access OneDrive for Business directly from the Open and Save menus inside, for example, Word or Excel. Outlook also allows direct interaction with attachments coming from OneDrive for Business. Although this user experience is quite good, it does require a good user adoption strategy: users need to be guided into this new way of working with their personal data. The other catch is obviously that non-Office applications generally don’t support direct open and save to OneDrive.

- OneDrive for Business and FSLogix™ Office 365 Container
FSLogix is a unique company that focuses on extending RDS and VDI solutions with small-footprint yet very powerful tools that make the life of end users and administrators much easier. They have several tools in their portfolio, but for this use case we’ll focus on their Office 365 Container product (which I also talked about before in this article). The tool does exactly what the name implies. It provides 3 main solutions:

* True Cached Exchange Mode
Providing what they call OST containerization, allowing Outlook to function and perform as if locally installed on a high performance virtual workspace session

* Outlook Real-Time Search
This enables inbox and personal folder search to work as designed with maximum performance

* OneDrive and OneDrive for Business
This roams OneDrive (for Business) user data seamlessly without the need to resync at each logon.

The latter is of course exactly what we were looking for in the use case discussed in this article! How does it work? The FSLogix agent, which runs on the RD Session Host or VDI and is configured using GPO, makes sure that the OneDrive (for Business) data is captured in an isolated container. The technology seems similar to Microsoft User Profile Disk; however, FSLogix operates at a lower level of the operating system to ensure that file changes are noticed and processed. Also, the FSLogix streaming technology can cache OneDrive files in situations where network connectivity to the file server goes temporarily offline. This is also important to ensure OneDrive files do not corrupt upon network interruption.
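To give an idea of what “configured using GPO” amounts to: the Office 365 Container is driven by a handful of registry values, which the supplied GPO (ADMX) templates set for you. The sketch below shows the idea with direct registry writes; the key path, value names and file share are assumptions based on FSLogix documentation at the time, so verify them against the ADMX templates that ship with your FSLogix version.

```powershell
# Sketch only: enable the FSLogix Office 365 Container via registry values
# (normally delivered through the supplied ADMX/GPO templates).
# Key path, value names and the VHD share are assumptions -- verify them
# against the documentation for your FSLogix version.
$key = "HKLM:\SOFTWARE\Policies\FSLogix\ODFC"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name Enabled -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name VHDLocations -Value "\\fileserver\fslogix$" -Type MultiString
```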


What’s also interesting about the Office 365 Container is that it can be layered on top of any existing profile solution you might have. It’s fully transparent and profile solution agnostic, and even the OneDrive for Business application is roamed in the O365 Container.

Conclusion
It’s clear that both the Roaming Profile and User Profile Disk options are a no-go when it comes to OneDrive for Business caching. The drive mapping might work in some scenarios, but must clearly be considered a workaround. The option to guide users to access OneDrive for Business directly from within their applications is a viable solution, but it does not solve the users’ request to easily navigate using File Explorer, and this approach mostly only works inside Office applications. FSLogix Office 365 Container is clearly the winner, providing the full OneDrive for Business sync options users expect, without the need for a complex software suite or application back end.

Got excited about FSLogix Office 365 Container? Get more info or request a trial here: https://fslogix.com/products/office-365-container

Thursday, April 13, 2017

Securing RD Gateway with MFA using the new NPS Extension for Azure MFA!

Introduction
Back in 2014 I co-authored an article together with Kristin Griffin on how to secure RD Gateway with Azure MFA. That article was based on putting an Azure MFA Server (previously PhoneFactor) in place in your on-premises environment (or Azure IaaS) to act as the MFA server and enforce multifactor authentication for all sessions coming through RD Gateway. You can get the article here: Step By Step – Using Windows Server 2012 R2 RD Gateway with Azure Multifactor Authentication. Although this is a great solution and I have successfully implemented it for various customers, the big downside has always been the mandatory MFA Server. As part of the setup, a server running the Azure MFA Server component had to be installed in your Active Directory to be able to inject Azure MFA into the login sequence. Not only did you have to install and maintain this MFA Server, syncing and managing users (and in most cases you would set up two servers to be HA); the other downside was that this was yet another MFA provider for your end users. MFA Server comes with a self-service portal to allow users to do their own enrollment, and it can leverage the same Azure Authenticator app. However, if your end users used Azure MFA to secure e.g. Office 365 or other SaaS services, that would be a different MFA provider, with a different self-service signup sequence, et cetera.

Introducing the NPS Extension for Azure MFA
So what has changed? A few days ago Microsoft announced the availability of the Azure MFA Extension for NPS (preview)! Read the announcement, in which Alex Simons, Director of Program Management of the Microsoft Identity Division, and Yossi Banai, Program Manager on the Azure Active Directory team, talk about this new (preview) feature here:

Azure AD News: Azure MFA cloud based protection for on-premises VPNs is now in public preview!

Although the article specifically talks about securing a VPN, I figured the same would apply to secure Remote Desktop Gateway. And it turned out it does! In my lab I was able to successfully secure RD Gateway with Azure MFA using this new Extension for NPS! In this article I want to take you through the setup process and show the end result.

Prerequisites
There are a few prerequisites to use the NPS Extension for Azure MFA; these are:

- License
For this to work you obviously need a license for Azure MFA. This is included with Azure AD Premium, EM+S, or it can be based on an Azure MFA subscription

- NPS Server
A server is needed with the NPS role installed. This needs to be at least Windows Server 2008 R2 SP1 and can be combined with other roles; however, it cannot be combined with the RD Gateway role itself.

- Libraries
The two libraries below are needed on the NPS server. Although Microsoft guidance says the NPS Extension installer performs these installations if they are not in place, it doesn’t. Be sure to download and install these components prior to installing the NPS Extension.

1. Microsoft Visual Studio 2013 C++ Redistributable (X64)
2. Microsoft Azure Active Directory Module for Windows PowerShell version 1.1.166

- Azure Active Directory

Obviously, Azure Active Directory has to be in place, and users who need access need to have been enabled to use MFA.

Installing
As mentioned in the introduction, I have written an article on securing RD Gateway with Azure MFA Server before. As you read through the installation & configuration process, you’ll see similarities with that article. That is no coincidence: the same basic principles of RD Gateway, RD CAP, RADIUS clients, remote RADIUS servers et cetera also apply to this setup.

Installing and configuring AAD & AAD Sync
Note: if you already have AAD & AAD Sync in place you can obviously skip this paragraph.
First things first: you need Azure Active Directory as a prerequisite. I won’t go over the entire process of setting up ADDS and AAD because there are many guides out there that explain this process very well. Basically, you create a new AAD using the Azure Classic portal (or PowerShell), similar to below.

With AAD in place, you can then start to sync your users from an on-premises ADDS (or, as in my case, one that is running on Azure IaaS). To manage the AAD you can already use the new Azure Portal as shown below, although do be aware that this feature is still in preview in this portal. You can also use this portal to get a link to the most recent version of Microsoft Azure Active Directory Connect, which you need to be able to sync users from ADDS to AAD.

Again, I won’t go into great detail explaining the installation & best practices of AAD Connect; if you need detailed guidance on that part, check Connect Active Directory with Azure Active Directory. Basically, you run the installer on a server that is part of your ADDS domain, and the only things you have to provide are the credentials of an AAD account and an ADDS account with the appropriate permissions to access both domains.

Once you have successfully finished the setup of AAD Connect and the initial synchronization has taken place, the portal will reflect this as shown below.

With the initial synchronization complete, you can now start to assign Azure MFA to your users. To do this, open the All Users section in the Azure Portal and click on the Multi-Factor Authentication link.

That will take you to the Azure MFA Management Portal. In the screenshot below you can see the steps to enable and enforce Azure MFA for my test user called rdstestmfa.
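If you prefer scripting over the MFA management portal, the same could (at the time of writing) be done with the MSOnline PowerShell module. Treat this as a sketch and test it on a single account first.

```powershell
# Sketch: enforce Azure MFA for a single user via the MSOnline module.
# Requires the MSOnline module and appropriate admin rights in the tenant.
Connect-MsolService
$req = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$req.RelyingParty = "*"
$req.State = "Enforced"
Set-MsolUser -UserPrincipalName "rdstestmfa@rdsgurus.com" -StrongAuthenticationRequirements @($req)
```

Passing an empty array to -StrongAuthenticationRequirements disables MFA again, which is handy when experimenting in a lab.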


Installing and configuring the NPS Extension for Azure MFA
Now that we have AAD and AAD Sync in place, lets drill down into the actual installation of the NPS Extension for Azure MFA! The first step is to download the latest version of the installer, which can be found here: NPS Extension for Azure MFA.

The NPS Extension needs to be installed on a (virtual) server that is part of the ADDS domain and is able to reach the RD Gateway. In my case I used an out-of-the-box Windows Server 2016 VM in Azure IaaS, but it can be anything running Windows Server 2008 R2 SP1 or above. Before installing the Extension, 3 other requirements need to be in place.

1. The NPS Server role needs to be installed. Open Server Manager and add the role called Network Policy and Access Services.

2. The library Microsoft Visual Studio 2013 C++ Redistributable (X64) needs to be installed. Microsoft documentation says this library is installed automatically as part of the NPS Extension installer, the current Preview version 0.9.0.1 does however not do this yet. You can get the download here

3. The Microsoft Azure Active Directory Module for Windows PowerShell version 1.1.166 needs to be installed. Again, Microsoft documentation says this module is installed automatically as part of the NPS Extension installer, but the current Preview version 0.9.0.1 does not do this yet. You can get that download here

Now that we have the prerequisites in place, we can start the NPS Extension installer. The setup is very straightforward: just hit Install and wait for the process to finish.


After the installation is finished, the Extension components are placed in the folder C:\Program Files\Microsoft\AzureMfa\

Now open a new PowerShell prompt (with elevated permissions), change the directory to C:\Program Files\Microsoft\AzureMfa\Config and run the PowerShell script called AzureMfaNpsExtnConfigSetup.ps1. The output should look similar to below.


While the PowerShell script runs, it will prompt you for the ID of your Azure AD tenant; you can find that in the Azure Portal, in the properties of your AAD domain.


The PowerShell script will prompt you to authenticate to AAD with appropriate permissions. The script then performs the following actions (source):

- Create a self-signed certificate.
- Associate the public key of the certificate to the service principal on Azure AD.
- Store the cert in the local machine cert store.
- Grant access to the certificate’s private key to Network User.
- Restart the NPS.

This completes the installation of the NPS Extension. The final step is to connect RD Gateway to this NPS Extension to get Azure MFA into the authentication process.

It’s important to realize that installing the NPS Extension causes all authentications processed by this NPS server to go through Azure MFA. There is no way to make exceptions for specific users.

Configuring RD Gateway
With the installation of the NPS Extension complete, it’s now time to configure RD Gateway. As mentioned before, this process is very similar to what Kristin Griffin and I explained here. The first step is to configure RD Gateway to use a Central Server running NPS. To do so, open RD Gateway Manager, right click the server name, and select Properties. Now select the RD CAP Store tab, select the Central Server running NPS option and enter the IP address of the NPS Server where you previously installed the NPS Extension. Also provide a shared key and store this somewhere safe.


Now open NPS on the RD Gateway Server (not on the NPS Server that contains the NPS Extension, we’ll do that later).

Open the Remote RADIUS Server Groups and open the TS GATEWAY SERVER GROUP. Enter the IP Address of the NPS Server running the extension as a RADIUS Server, edit it and make sure the timeout settings match what is shown below.


Now go to the RADIUS Clients tab and create a new RADIUS client with a friendly name, the IP address of the NPS Server running the Extension, and the same shared secret you used before.

Next, we need to configure two Connection Request Policies in NPS, one to forward requests to the Remote RADIUS Server Group (which is set to forward to NPS server running the extension), and the other to receive requests coming from MFA server (to be handled locally).

The easiest way to do this is to use the existing policy that was created when you created an RD CAP in RD Gateway. In NPS, expand the Policies section in the left side of the screen and then select Connection Request Policies. You should see a policy already created there, called TS GATEWAY AUTHORIZATION POLICY. Copy that Policy twice and rename those copies to “MFA Server Request No Forward” and “MFA Server Request Forward”.

Now edit the MFA Server Request No Forward and set the following settings, where Client IPv4 Address is the IP Address of the NPS Server running the NPS Extension. Make sure you also enable this policy.

Now edit the MFA Server Request Forward and set the following settings, so that this rule forwards to the TS SERVER GATEWAY GROUP. Again, make sure you also enable this policy.

And lastly, disable existing TS GATEWAY AUTHORIZATION POLICY, and set the processing order of the rules as shown below.

Configuring NPS Server
It’s now time to configure the NPS Server running the extension to make sure it can send and receive RADIUS requests too. Open NPS on the NPS Server (not on the RD Gateway Server; we did that before).

Open the Remote RADIUS Server Groups and create a new group called RDGW. Enter the IP Address of the RD Gateway as a RADIUS Server, edit it and make sure the timeout settings match what is shown below.

Now go to the RADIUS clients tab and create a new radius client with a friendly name, the IP address of the RD Gateway Server and enter the shared secret you used before.


Go to the Connection Request Policies tab and create a new policy called To RDGW, using the source Remote Desktop Gateway. Set the condition to Virtual (VPN) and configure it to forward requests to the Remote RADIUS group called RDGW that we created before. Make sure the policy is enabled. Below is what the policy should look like.


Create another policy called From RDGW, again using the source Remote Desktop Gateway. Set the condition to Client IPv4 Address and enter the IP address of the RD Gateway server. Configure it to handle requests locally. Make sure the policy is enabled. Below is what the policy should look like.

Preparing the user account for Azure MFA
Since our test user rdstestmfa@rdsgurus.com is new to the organization, we first need to make sure that the user is successfully configured to use Azure MFA. If your users are already configured for Azure MFA, you can obviously skip this step.

An easy way to do this is to log on to portal.office.com and sign in with the account. Since our test account was enforced to use Azure MFA, the portal will prompt us to configure MFA before we can continue. Click Set it up now to start that process.

In this case I chose Mobile App as the authentication method, downloaded the Azure Authenticator app for iOS and used it to scan the QR image in the portal. The Azure Authenticator app is available for Android, iOS and Windows Phone.

Click Done. To complete the verification, Azure MFA will now send an MFA request to the configured phone number of the user account.

The user account is now ready to use for our RD Gateway setup! If you want more detailed information on the Azure MFA Service, you can find that here: What is Azure Multi-Factor Authentication?

Testing the functionality
It’s now finally time to take a look at the end result!

You can basically use any RDP client that has support for RD Gateway. For this scenario we’ll use the RD Web Access page. We log on to RD Web Access with our rdstestmfa@rdsgurus.com account and open the Desktop. In this case we used a Full Desktop scenario, but these could also have been RemoteApps. The RDP client will be launched, showing the state Initiating Remote Connections.

A few seconds later, the NPS Extension will be triggered to send Azure MFA a request to prompt our user for two-factor authentication.

After pressing Verify on the Phone and without the user having to interact with the computer, the status changes to Loading the virtual machine.


And the desktop is then launched.

The end result is a great and seamless experience for the end user. Similar to using Azure MFA Server, but this time with NPS directly contacting Azure MFA! This is a great improvement!

Event logs
When troubleshooting this setup, there are several event logs that can come in handy.

The RD Gateway role logs events in the location:
Application and Services Logs > Microsoft > Windows > Terminal Services Gateway
Below is an example of the event that shows that the end user met the requirements of the RD CAP.

The NPS service logs events in the location:
Custom Views > Server Roles > Network Policy and Access Services
Below is an example of NPS granting access to a user. You can also check the Windows Security log for auditing events.

And finally, the NPS Extension logs events in the location:
Application and Services Logs > Microsoft > AzureMfa
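If you prefer PowerShell over Event Viewer, the same logs can be queried with Get-WinEvent. The RD Gateway operational log name below is the standard one, but the AzureMfa log name can vary per extension version, so discover it first with -ListLog.

```powershell
# Pull the 20 most recent RD Gateway operational events.
Get-WinEvent -LogName 'Microsoft-Windows-TerminalServices-Gateway/Operational' -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize

# The exact AzureMfa log name may differ per extension version; list the
# matching logs on the NPS server first, then query the one you find.
Get-WinEvent -ListLog *AzureMfa* | Select-Object LogName
```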

Additionally, you can use the Azure MFA portal to create reports on the usage of Azure MFA.

Conclusion
This article ended up at over 2,500 words, but I hope you find it valuable. To reiterate what was explained in the introduction: MFA Server is a great solution. The big downside, however, has always been the mandatory MFA Server (and in most cases you would set up two of them to be HA). The other downside is that it was yet another MFA provider for your end users. With the introduction of the NPS Extension for Azure MFA these downsides are now gone! You can now secure your RDS environment with Azure MFA without the need for an MFA Server or a separate MFA provider. I really believe this is a game changer, not only for this scenario, but also for all other scenarios like VPNs, websites et cetera where Azure MFA Server is currently in place. Great job by Microsoft; looking forward to this Extension becoming GA!

Wednesday, March 15, 2017

Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 9 Managed Disks

It’s been a while since I’ve published an article in my series on automated deployments of RDS on Azure IaaS, but here is part 9! In case you’ve missed the previous 8 articles, here they are:

1. Full HA RDS 2016 deployment in Azure IaaS in < 30 minutes, Azure Resource Manager
2. RDS on Azure IaaS using ARM & JSON part 2 – demo at Microsoft Ignite!
3. Video of Ignite session showing RDS on Azure IaaS deployment using ARM/JSON
4. Windows Server 2016 GA available in Azure! – used it to deploy RDS on Azure IaaS!
5. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 5
6. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 6 RD Gateway

7. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 7 RD Web Access customization
8. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 8 Defender & BGinfo

This part 9 is all about Azure Managed Disks. Azure Managed Disks greatly simplify disk management for Azure IaaS VMs! With Managed Disks, the storage accounts associated with the VM disks are managed for you. You only have to specify the type (Premium or Standard) and the size of disk you need, and Azure creates and manages the disk for you.

Before we dive into updating the JSON template with Managed Disks, let’s briefly touch on some of the most important advantages of Managed Disks (source):

Simple and scalable VM deployment
Managed Disks handles storage for you behind the scenes. Previously, you had to create storage accounts to hold the disks (VHD files) for your Azure VMs. When scaling up, you had to make sure you created additional storage accounts so you didn’t exceed the IOPS limit for storage with any of your disks. With Managed Disks handling storage, you are no longer limited by the storage account limits (such as 20,000 IOPS / account). You also no longer have to copy your custom images (VHD files) to multiple storage accounts. You can manage them in a central location – one storage account per Azure region – and use them to create hundreds of VMs in a subscription.

Better reliability for Availability Sets
Managed Disks provides better reliability for Availability Sets by ensuring that the disks of VMs in an Availability Set are sufficiently isolated from each other to avoid single points of failure. It does this by automatically placing the disks in different storage scale units (stamps). If a stamp fails due to hardware or software failure, only the VM instances with disks on those stamps fail.

Granular access control
You can use Azure Role-Based Access Control (RBAC) to assign specific permissions for a managed disk to one or more users. Managed Disks exposes a variety of operations, including read, write (create/update), delete, and retrieving a shared access signature (SAS) URI for the disk.

Images
Managed Disks also support creating a managed custom image. You can create an image from your custom VHD in a storage account or directly from a generalized (sys-prepped) VM. This captures in a single image all managed disks associated with a VM, including both the OS and data disks.

For more information, see Azure Managed Disks Overview. I also found this article a good read: Azure Managed Disks Deep Dive, Lessons Learned and Benefits.

Now that we’re familiar with the concept of Azure Managed Disks, let’s see how we can leverage all of this in our ARM template. If you’ve seen previous articles in this series, you’ll know that the ARM template already had Availability Sets for each RDS role, housing at least 2 machines with a load balancer in front. The concept of Availability Sets stays the same when moving to Azure Managed Disks, but we do need to tell ARM that we are housing VMs with Managed Disks in order to take full advantage.

Previously, the declaration of our Availability Sets looked as follows. This example is for the RD Gateway / RD Web Access servers, but we declared a separate Availability Set per server role in our previous templates.

{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[parameters('availabilitySetNameRDGW')]",
  "comments": "This resources creates an availability set that is used to make the RDGW Server Highly Available",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "RDGW AvailabilitySet",
    "Project": "[parameters('projectTag')]"
  }
},

To tell ARM we want to create an Availability Set that can house Virtual Machines based on Managed Disks, we need to make a few modifications.
{
  "apiVersion": "[variables('apiVersionPreview')]",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[parameters('availabilitySetNameRDGW')]",
  "comments": "This resources creates an availability set that is used to make the RDGW Server Highly Available",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "RDGW AvailabilitySet",
    "Project": "[parameters('projectTag')]"
  },
  "properties": {
    "platformUpdateDomainCount": 2,
    "platformFaultDomainCount": 2
  },
  "sku": {
    "name": "[variables('sku')]"
  }
},

As you can see, a new API version is needed to be able to use Managed Disks. At this point, this version needs to be “2016-04-30-preview”, and we’ve declared it using the following variable.

"apiVersionPreview": "2016-04-30-preview",

Next, we need to specify that this Availability Set will house Virtual Machines based on Managed Disks. To do this, we provide sku.name with the value “Aligned”; the sku variable referenced above is declared as follows.

"sku": "Aligned"

And finally, we can further define the Availability Set by specifying an Update Domain Count and a Fault Domain Count. What is the concept behind these properties? Here is what Microsoft says about them (source):

Update Domains
For a given availability set, five non-user-configurable update domains are assigned by default (Resource Manager deployments can then be increased to provide up to 20 update domains) to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on.

Fault Domain
Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.

Since this deployment places each RDS role on 2 Virtual Machines, we configured 2 Update Domains and 2 Fault Domains. When setting these properties, consider the number of VMs you are hosting and the maximum limit of each property.

Now that we have covered the changes needed when specifying the Availability Sets, let’s take a look at the changes needed for defining Virtual Machines that we want to add to these Availability Sets.

Again, the first change needed is the API version. Similar to the Availability Set, the VM needs to be configured with API version “2016-04-30-preview”.

"apiVersion": "[variables('apiVersionPreview')]",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1))]",
"comments": "This resources creates VM’s that will host the RDGW/RDWA role",

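For context, this is roughly how such a VM resource joins the Availability Set and is stamped out once per instance via a copy loop. This is a hedged sketch: the copy loop name, the numberOfRDGWInstances parameter, and the dependsOn entry are illustrative assumptions, not taken verbatim from the template.

```json
{
  "apiVersion": "[variables('apiVersionPreview')]",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1))]",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "rdgwVmLoop",
    "count": "[parameters('numberOfRDGWInstances')]"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetNameRDGW'))]"
  ],
  "properties": {
    "availabilitySet": {
      "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetNameRDGW'))]"
    }
  }
}
```

The availabilitySet reference in properties is what actually ties each VM to the Aligned Availability Set declared earlier.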
Next, we need to change the declaration of the osDisk within the Virtual Machines. Before using Managed Disks, we had declared it as follows.

"osDisk": {
  "name": "[variables('virtualmachineosdisk').diskName]",
  "vhd": {
    "uri": "[concat('http://',variables('storageAccount').name,copyindex(1),'.blob.core.windows.net/vhds/',parameters('hostNamePrefixRDGW'),'0',copyindex(1),'-',variables('virtualmachineosdisk').diskName,'.vhd')]"
  },
  "caching": "[variables('virtualmachineosdisk').cacheOption]",
  "createOption": "[variables('virtualmachineosdisk').createOption]"
},

To tell ARM this Virtual Machine needs to use Managed Disks, we change the above to the following.
"osDisk": {
  "name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1),'-',variables('virtualmachineosdisk').diskName)]",
  "managedDisk": {
    "storageAccountType": "[variables('storage').type]"
  },
  "caching": "[variables('virtualmachineosdisk').cacheOption]",
  "createOption": "[variables('virtualmachineosdisk').createOption]"
},

So, instead of defining the storage account and path to the .vhd to be used, we simply introduce the managedDisk property and specify the Storage Account Type.
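The managedDisk syntax works the same way for data disks. As a hedged sketch (the disk name suffix, LUN, and 128 GB size are illustrative assumptions, not part of this deployment), an empty managed data disk could sit alongside the osDisk like this:

```json
"dataDisks": [
  {
    "name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1),'-datadisk1')]",
    "lun": 0,
    "diskSizeGB": 128,
    "createOption": "Empty",
    "managedDisk": {
      "storageAccountType": "[variables('storage').type]"
    }
  }
],
```

With createOption set to "Empty", Azure creates and attaches the managed data disk for you; no storage account or VHD URI is involved here either.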

In our variables section, we’ve declared this variable as follows.
"storage": {
  "name": "[concat(uniquestring(resourceGroup().id), 'rdsarm')]",
  "type": "Premium_LRS"
},
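If you'd rather let whoever deploys the template choose the disk tier, the type used in this variable can be promoted to a parameter with allowedValues. A minimal sketch, assuming a parameter name of storageAccountType (not part of the original template):

```json
"storageAccountType": {
  "type": "string",
  "defaultValue": "Premium_LRS",
  "allowedValues": [
    "Premium_LRS",
    "Standard_LRS"
  ],
  "metadata": {
    "description": "Storage account type for the managed OS disks"
  }
},
```

The managedDisk declaration would then reference [parameters('storageAccountType')] instead of the variable.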
That osDisk configuration applies to Virtual Machines based on a standard image, in this case Windows Server 2016 Datacenter. You might recall from a previous article in this series that we’ve been using a custom template image for the RD Session Host role, which allows us to specify a custom image that contains our corporate applications. So, how does all of that change with Managed Disks?

You might recall that previously (before Managed Disks), one of the prerequisites of our ARM template was a pre-existing Storage Account containing the template image for the RDSH servers. Using the parameter existingCustomImageRDSH, we provided the option to specify the location of the RDSH custom template image.

"existingCustomImageRDSH": {
  "value": "https://tuie2b2tyw23yrdsrdsh1.blob.core.windows.net/...
},

Since we’re now using Managed Disks, there is no longer a need for a Storage Account housing the RDSH template. To still allow us to specify a custom template image, we create a new Azure resource of type Image.

To move the existing RDSH template image from the Storage Account to an Image resource that we can use when creating a VM with a Managed Disk, I used the following ARM template. It creates a new Image resource from the VHD on the existing storage account.

{
  "type": "Microsoft.Compute/images",
  "name": "RDSH-RDSG-Template-Image",
  "apiVersion": "2016-04-30-preview",
  "location": "[resourceGroup().location]",
  "properties": {
    "storageProfile": {
      "osDisk": {
        "osType": "windows",
        "osState": "Generalized",
        "blobUri": "https://tuie2b2tyw23yrdsrdsh1.blob.core.windows.net/vhds/RDSH2016OSDisk.vhd",
        "caching": "ReadWrite",
        "storageAccountType": "Premium_LRS"
      }
    }
  }
}

The result is a new resource of type Image. Since we no longer need the .VHD inside the storage account, we can completely remove that Storage Account. The Image is now the only prerequisite, which makes our lives much easier!
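As mentioned earlier, an Image can also be captured directly from a generalized (sys-prepped) VM instead of from a VHD blob. A minimal sketch, assuming a hypothetical VM named MyGeneralizedRDSH:

```json
{
  "type": "Microsoft.Compute/images",
  "name": "RDSH-Template-Image-From-VM",
  "apiVersion": "2016-04-30-preview",
  "location": "[resourceGroup().location]",
  "properties": {
    "sourceVirtualMachine": {
      "id": "[resourceId('Microsoft.Compute/virtualMachines', 'MyGeneralizedRDSH')]"
    }
  }
}
```

In that variant, sourceVirtualMachine replaces the storageProfile/blobUri definition, and Azure captures all of the VM's managed disks into the Image.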

With the Image in place, let’s take a look at how we can tell ARM to create a new Virtual Machine based on Managed Disks and on the Image resource shown above.

Previously (before Managed Disks) we declared the storage profile of the RDSH Servers as follows. 
"storageProfile": {
  "osDisk": {
    "name": "[variables('virtualmachineosdisk').diskName]",
    "vhd": {
      "uri": "[concat('http://',parameters('existingStorageAccountNameRDSH'),'.blob.core.windows.net/vhds/',parameters('hostNamePrefixRDSH'),'0',copyindex(1),'-',variables('virtualmachineosdisk').diskName,'.vhd')]"
    },
    "osType": "windows",
    "caching": "[variables('virtualmachineosdisk').cacheOption]",
    "createOption": "[variables('virtualmachineosdisk').createOption]",
    "image": {
      "uri": "[parameters('existingCustomImageRDSH')]"
    }
  }
},

With Managed Disks, below is what needs to change. We no longer have to define which storage account the VHD resides on; we can simply specify the resource ID of the Image we created in the previous step. Also note that this removes the requirement for the template image to be in the same Storage Account as the Virtual Machine being created, a huge improvement!
"storageProfile": {
  "osDisk": {
    "name": "[concat(parameters('hostNamePrefixRDSH'),'0', copyindex(1),'-',variables('virtualmachineosdisk').diskName)]",
    "managedDisk": {
      "storageAccountType": "[variables('storage').type]"
    },
    "osType": "windows",
    "caching": "[variables('virtualmachineosdisk').cacheOption]",
    "createOption": "[variables('virtualmachineosdisk').createOption]"
  },
  "imageReference": {
    "id": "[resourceId('Microsoft.Compute/images', parameters('existingCustomImageNameRDSH'))]"
  }
},
The final step is to remove the creation of the Storage Accounts inside our ARM template, because we no longer need them.

The end result after running the ARM template is 3 Availability Sets configured to house Virtual Machines with Managed Disks, 6 Managed Disk resources, and 1 Image resource that serves as our template image for the RDSH servers.


This concludes our journey of moving from the Storage Account model to Azure Managed Disks. In my opinion, it's a great new feature for Azure IaaS that provides a lot more flexibility!

Next up in this series on ARM templates for Azure IaaS: grouping variables into complex objects, using a VNet and subnet from a separate resource group, resource comments, and project tags!

Stay tuned!