Configure ADFS Integration with AWS Management Console


With the proliferation of the public cloud come questions about how to manage authentication and identity for these third-party systems. AWS is one of the most popular, so today we will walk through how to integrate AWS with ADFS. This article assumes you already have a working ADFS server and an AWS account. We will configure things so that two sets of users can log in, each with a separate role, or set of permissions.

Active Directory Setup

In your AD, let's create two groups: AWS-FULLADMINS and AWS-DNSADMINS. Add the appropriate users to each group.
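If you prefer PowerShell over the AD Users and Computers GUI, the groups can be created like this (a sketch; the OU path and member names are placeholders for your environment):

```powershell
# Create the two groups that will map to AWS roles.
# The -Path value is a placeholder; adjust it for your domain.
New-ADGroup -Name "AWS-FULLADMINS" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"
New-ADGroup -Name "AWS-DNSADMINS" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"

# Add the appropriate users to each group (placeholder accounts).
Add-ADGroupMember -Identity "AWS-FULLADMINS" -Members someadmin
Add-ADGroupMember -Identity "AWS-DNSADMINS" -Members somednsadmin
```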

ADFS Metadata

Now, we need to download the metadata file from your ADFS server. It is located at https://[YOURADFSSERVER]/federationmetadata/2007-06/federationmetadata.xml

We will need to upload this file into AWS later on.
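If you'd rather script the download, something like this should work (a sketch; replace the bracketed server name with your ADFS FQDN):

```powershell
# Download the ADFS federation metadata to a local file for upload to AWS.
Invoke-WebRequest -Uri "https://[YOURADFSSERVER]/federationmetadata/2007-06/federationmetadata.xml" `
    -OutFile "C:\Temp\FederationMetadata.xml"
```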

AWS Configuration

Next, let's log in to our AWS Management Console and navigate to the Identity and Access Management (IAM) console. Now, go to Identity Providers on the left-hand side and select Create Provider.



Set the provider type to SAML, give the provider a name to identify it, and then upload the FederationMetadata.xml file you downloaded from your ADFS server previously.




Select Next -> Create. Now, select the provider you just created and copy its ARN, as we will need it for our SAML rules later on:



In our example, let's say that our provider ARN is:

arn:aws:iam::123456789012:saml-provider/Port25Guy
Now, we need to configure the roles that our users will receive when they log in. We will tie the Active Directory groups created earlier to these roles later in the SAML config. Still in IAM, navigate to Roles -> Create Role.

Select SAML 2.0 federation, choose the identity provider you created earlier from the dropdown, and select Allow programmatic and AWS Management Console access:



Select Next, and now we are going to choose our permissions. This first role will be matched to our AWS-FULLADMINS group, so we will select AdministratorAccess.


Now, let's name the role:


Let's repeat the process to create the role for AWS-DNSADMINS; this time we will select the AmazonRoute53FullAccess policy:


Now, for each of the roles you created, select it and copy the ARN, similar to how we did for the identity provider previously:



Mine map out like this:

AWS-FULLADMINS -> arn:aws:iam::123456789012:role/AWS-FULLADMINS

AWS-DNSADMINS -> arn:aws:iam::123456789012:role/AWS-DNSADMINS

At this point we are done with the AWS config; we are ready to create our relying party trust in ADFS and configure the necessary claim rules.

ADFS Configuration

In ADFS, navigate to Relying Party Trusts -> Add Relying Party Trust -> Claims Aware.


Select Import data about… and use the URL



Set the display name to anything you want:



Set your Access Control Policy accordingly:


Select Next -> Finish

If you didn't leave the Edit Claim Issuance Policy option selected at the end of the wizard, highlight the Amazon Web Services trust and select Edit Claim Issuance Policy.


Click Add Rule -> Transform an Incoming Claim



Claim Rule Name -> Name ID

Incoming claim type -> Windows account name

Outgoing claim type -> Name ID

Outgoing name ID format -> Persistent Identifier


Next, create another rule -> Send LDAP Attributes as Claims



Claim rule name -> Session Name

Attribute Store -> Active Directory

LDAP Attribute -> User-Principal-Name

Outgoing Claim Type ->



Note the outgoing claim type IS case sensitive.

Now we need to create our rules to map our AD groups to AWS roles. Here I will deviate from the AWS instructions and do things a little differently. I find this easier than the AWS approach, which requires the group and role names to match and relies on regex inside the SAML rule language to link the two. The downside is that it can require more rules in the Claim Issuance Policy and more steps to set up.

First, we are going to create a temp rule that we will use to build the initial language we need for our rule.

Select Add Rule -> Send Group Membership as a Claim



Name the rule whatever you like; we will delete it afterwards. The only important piece here is to select the user's group. The outgoing claim type and value do not matter:



Now, select Finish -> highlight the rule you just created and click Edit -> View Rule Language.

Copy the language here to notepad:



You want to replace this line:

=> issue(Type = "", Value = "admin", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, ValueType = c.ValueType);

with the following:

=> issue(Type = "", Value = "arn:aws:iam::123456789012:saml-provider/Port25Guy,arn:aws:iam::123456789012:role/AWS-FULLADMINS");

You should replace this section:

Value = "arn:aws:iam::123456789012:saml-provider/Port25Guy,arn:aws:iam::123456789012:role/AWS-FULLADMINS"

with the ARN of your Identity Provider and the ARN of your role that you copied from AWS previously.

Now, your complete rule should look like the following:

c:[Type == "", Value == "S-1-5-21-1693000147-1615933772-1549220743-5440", Issuer == "AD AUTHORITY"]
=> issue(Type = "", Value = "arn:aws:iam::123456789012:saml-provider/Port25Guy,arn:aws:iam::123456789012:role/AWS-FULLADMINS");

Essentially, the top half of the rule checks whether you're a member of the AD group via its SID, and if you are, sends a claim to AWS containing the identity provider ARN and role ARN.

Repeat this process for your second AD group and AWS role, generating the custom rule language. Save the language, as we will need it momentarily.

Delete your temp rule; it is no longer needed.

Select Add Rule -> Send Claims Using a Custom Rule



Name the Claim Rule for the appropriate role, then paste in the custom language you created from above:


Repeat the above step for the AWS-DNSADMINS role and AD group:
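For reference, the second rule is the same shape as the first, with your AWS-DNSADMINS group SID and the Route 53 role ARN swapped in. The SID below is a placeholder (take the real one from the temp rule you generated for that group), and the claim-type URLs are the standard ones that appear in the generated rule language:

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-XXXX", Issuer == "AD AUTHORITY"]
=> issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = "arn:aws:iam::123456789012:saml-provider/Port25Guy,arn:aws:iam::123456789012:role/AWS-DNSADMINS");
```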



Your Issuance Transform Rules should look similar to the above:



Testing Access

First, you need to make sure that the IdP-initiated sign-on page is enabled on ADFS. You can do that with the following PowerShell command:

Set-AdfsProperties -EnableIdPInitiatedSignonPage $true

Now, go to the following URL:

https://[YOURADFSSERVER]/adfs/ls/idpinitiatedsignon.aspx
Select Sign In:



Then select Amazon Web Services



You should get redirected to the AWS console page:



It will list that it is a Federated Login, as well as the identity provider/username as your session name. You are all set!


Logging in without IDP Portal Page:

ADFS has an annoying quirk in that the IdP login page anonymously lists the resources available through that server:



For this reason, many organizations disable this page; in fact, Windows Server 2016 disables it by default. However, AWS doesn't seem to support any type of SP-initiated logon. So are you out of luck? Nope, we just need to use a special URL.
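The page can be toggled off with the counterpart of the command we used earlier to enable it:

```powershell
# Disable the anonymous IdP-initiated sign-on page on ADFS.
Set-AdfsProperties -EnableIdPInitiatedSignonPage $false
```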

In ADFS, navigate to the AWS relying party trust and go to Properties -> Identifiers.

Note the relying party identifier, which in our case is urn:amazon:webservices.
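You can also pull the identifier from PowerShell instead of the GUI (assuming you named the trust "Amazon Web Services"):

```powershell
# Read the relying party identifier(s) for the AWS trust.
(Get-AdfsRelyingPartyTrust -Name "Amazon Web Services").Identifier
```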

Now use the below URL, entering in your own ADFS FQDN:

https://[YOURADFSSERVER]/adfs/ls/idpinitiatedsignon.aspx?loginToRP=urn:amazon:webservices
Notice that after loginToRP= we enter the identifier we copied above. After your ADFS server authenticates you, you'll be automatically logged into the AWS console, even with the IdP portal page disabled!

How to Bypass MFA for Autodiscover and ActiveSync in Windows Server 2016 Using Access Control Policies

ADFS, Client Access, MFA, Office365

I had trouble finding any info on this beyond using the version of ADFS that comes with 2012 R2 and configuring the exceptions through PowerShell. In ADFS on Windows Server 2016, you can now utilize Access Control Policies to configure rules around how users authenticate to ADFS. Our setup is a classic example: when clients are in the office, they should automatically log in using Windows Integrated Authentication (that is, they are not prompted for credentials). When users are not on the corporate network, they should be forced to use Multi-Factor Authentication (MFA for short).

Note there are some requirements for this setup.

  • You need to have ADFS deployed with an ADFS proxy server to the internet (or some other proxy that can add the required headers to internet-based requests)
  • If you're using multi-factor authentication, it is assumed you are using Modern Authentication on your Outlook clients

Open the ADFS MMC and navigate to Access Control Policies:



There are several pre-canned policies on the left, but we are going to create our own by clicking Add Access Control Policy in the upper right-hand corner.

Give the policy a meaningful name and description, and then build your policy as follows:




Note that it works similarly to a firewall rule: we put the most restrictive policy at the top. If a user is a member of the AD group DualAuth (in this case) and is logging in from outside the corporate network, they will be forced to use multi-factor authentication, and rule processing for that user stops. The Permit Users rule is required for ALL other users to be able to log in without issue from the internet, and also to allow ALL users to log in using Windows Integrated Authentication from within the corporate network.

The initial problem with this policy is that not all applications can perform multi-factor authentication. The classic ones are Exchange ActiveSync and Exchange Autodiscover, so we need to exclude those from this processing.

If we select our initial rule block and click Edit, we can select, under the Except tab, the "with specific claims in the request" checkbox.



Next, click on the link on the word specific. Select the Claims radio button. Under claim type, set the rule to read Client Application -> Contains -> Microsoft.Exchange.Autodiscover. Add another row and set the claim value to Microsoft.Exchange.ActiveSync.



Hit OK, save your changes, and you're good to go!

How to Troubleshoot SAML Logins with Office 365 and Fiddler

Office365, SAML

I recently had an issue where a user could not log into his Office 365 account. The organization was using Azure AD Sync and SAML-based login. The user could successfully authenticate to ADFS, but once they tried to access Office 365 resources they would receive an error. Below is a screenshot of the error when they tried to log in to OWA.



Troubleshooting SAML can be a bit complicated, so I’ll go through some of the steps that I used to determine what the SAML server was providing to Office 365, and where the breakdown was in this specific case.

First, I downloaded and installed Fiddler on a test machine.

After the install, I needed to configure Fiddler to decrypt my HTTPS traffic.

Go to Tools -> Options -> HTTPS

Ensure that Decrypt HTTPS traffic and Ignore server certificate errors are selected



You will get some warnings about trusting the local certificate; you want to click Accept and Yes to these warnings. What happens is that Fiddler acts as a man-in-the-middle proxy so that it can decrypt your traffic. You are allowing it to install a certificate in the machine's local store so that the machine trusts this certificate. You can view it in Local Computer -> Trusted Root Certification Authorities.



Next, open your web browser (I use Edge for this since it utilizes the local machine’s certificate store) and open it to a blank page. In Fiddler, clear the tracing results that are already present:



Now, browse to your organization's OWA site in Office 365. This should land you on the login page for your SAML provider. Log in as normal:


Next, get to a place where your login breaks, or you get the error, in our case, it is the original error message:



In Fiddler, go to File -> Capture Traffic to stop the capture.


Now, in the results section, find the last entry that has your SAML host name and a URL beginning with /adfs/ls. In our example below we are looking at line 41.


We want line 41 because it is the last entry from our SAML provider before we are pushed back to Office 365, so it has our SAML session ticket in the response.

Select that entry, and on the right hand side, select Inspectors along the top row, and Raw along the bottom row:


Now copy the line underneath Content-Length out to a text editor.


You will have a big blob of text. In the first few lines, find the entry name="wresult" value=


You want to copy everything AFTER the equals sign in value:


Until the string ends with RequestSecurityTokenResponse>”


In our example, that leaves us with the following string (adjusted for formatting reasons):

“<t:RequestSecurityTokenResponse xmlns:t=""><t:Lifetime><wsu:Created …/t:RequestSecurityTokenResponse>”

The string you have isn't easy to read, so let's run it through an HTML decoder.

I used an online HTML decoder.


Copy the decoded result to a text editor. Now we want to format it as XML. I used the formatter from WebToolKitOnline.


Okay, now we have readable XML data that we can paste into our favorite XML viewer.
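If you'd rather not paste a token into an online tool, the same decode-and-format can be done locally in PowerShell (a sketch; the file path is a placeholder for wherever you saved the copied wresult value):

```powershell
# HTML-decode the copied wresult value, then pretty-print it as XML.
$raw = Get-Content "C:\Temp\wresult.txt" -Raw
$decoded = [System.Net.WebUtility]::HtmlDecode($raw)
$xml = [xml]$decoded

# Write the XML back out with indentation so it is readable.
$sw = New-Object System.IO.StringWriter
$xw = New-Object System.Xml.XmlTextWriter($sw)
$xw.Formatting = [System.Xml.Formatting]::Indented
$xml.WriteContentTo($xw)
$xw.Flush()
$sw.ToString()
```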

So now we can see the SAML response we are sending to Office 365. In our case we are interested in the UPN attribute and the ImmutableID values:


We suspected that the ImmutableID we were getting from ADFS was not matching the ImmutableID stored in Office 365. We can see here that the values we are getting from ADFS are:


ImmutableID – xPPblRWKMkqRDhd6jO4O4Q==

So if we check in Microsoft Online, we can get the UPN and ImmutableID through PowerShell:

Get-MsolUser -UserPrincipalName [USERUPN] | Select-Object UserPrincipalName,ImmutableId

This gives us the following output:


We can see that the ImmutableID values do not match between ADFS and Office 365. In this case, the client was syncing a custom attribute that was not converting the value correctly. They fixed their back-end process and the user was then able to log in.
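A side note that can help with this comparison: by default, Azure AD Connect sets the ImmutableID to the Base64-encoded bytes of the user's on-premises objectGUID, so you can convert the values back and forth in PowerShell (the user identity below is a placeholder):

```powershell
# Convert the ImmutableID from the SAML token back to a GUID...
$immutableId = "xPPblRWKMkqRDhd6jO4O4Q=="
[Guid][Convert]::FromBase64String($immutableId)

# ...and convert an AD user's objectGUID into the ImmutableID format.
$user = Get-ADUser -Identity someuser
[Convert]::ToBase64String($user.ObjectGUID.ToByteArray())
```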

I hope this article helps some of you with using Fiddler to troubleshoot SAML login issues with ADFS.

Health Mailboxes Not Being Created in Exchange 2013 / Exchange 2016 Because of Missing Email Address Policy



I had an issue where the monitoring mailboxes were not being created for our Exchange organization. Starting with Exchange 2013 CU1, these mailboxes are created in the Monitoring Mailboxes OU under the Microsoft Exchange System Objects OU:


Previous to that, they were created in the default Users folder in Active Directory.

I tried several things: I restarted the Health Manager service on all my Mailbox servers, re-ran setup.exe /PrepareAD from the installation files, checked that the Exchange servers had permissions to that OU, and deleted the Monitoring Mailboxes OU and re-ran setup.exe /PrepareAD. Nothing worked. Deleting the Monitoring Mailboxes OU and re-running setup.exe /PrepareAD did recreate the OU, but there were still no health mailboxes.

The fix for this was really simple. The default email address policy in this environment was disabled, or scoped to apply only to a specific OU that did not include the Microsoft Exchange System Objects -> Monitoring Mailboxes OU.

We created a new email address policy that did apply to that OU:
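As a sketch of what that looks like in the Exchange Management Shell (the domain and container path are placeholders for your environment):

```powershell
# Create an email address policy scoped to the Monitoring Mailboxes OU.
New-EmailAddressPolicy -Name "Monitoring Mailboxes" `
    -IncludedRecipients AllRecipients `
    -RecipientContainer "contoso.com/Microsoft Exchange System Objects/Monitoring Mailboxes" `
    -EnabledEmailAddressTemplates "SMTP:%m@contoso.com"

# Apply the policy.
Update-EmailAddressPolicy -Identity "Monitoring Mailboxes"
```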


After I created it, I ensured that it was applied:


After that, I restarted the Microsoft Exchange Health Manager service on all Mailbox servers and, lo and behold, all monitoring mailboxes were created!

Essentially, if you do not have an email address policy that applies to the health monitoring mailboxes, they will not be created. The fix is simple: create and apply one to that OU.

Migrating Public Folders to Exchange 2016



In this article we will go through the steps to migrate public folders from Exchange 2010 to Exchange 2016. Keep in mind this same process works for migrating to Exchange 2013 as well; the steps are identical.

Change In Architecture

Let's start with the changes to the public folder setup that began in Exchange 2013, what Microsoft calls "Modern Public Folders". Legacy public folders, meaning public folders from versions of Exchange prior to 2013, went largely unchanged over the years. Public folder databases were stored on mailbox servers. To provide high availability and redundancy, you could create public folder databases on multiple mailbox servers and then, on a per-folder basis, establish replicas of each folder so that copies existed on more than one server.

Anyone who has dealt with legacy public folder replication issues can tell you the process is not fun to troubleshoot. It's also not exact, as the quickest the folders can replicate is every 15 minutes. This can cause issues when a group of collaborating users are each connected to a separate copy of the public folder data; there can be delays or collisions in the data sets. Also, public folder databases could not be protected by a DAG, so they could not benefit from the same availability designs as normal mailboxes.

Exchange 2013 changed the architecture so that public folder data is now stored in mailboxes designated as "public folder mailboxes" instead of in dedicated public folder databases. The benefit of this model is that public folders can now be placed into normal mailbox databases, which means they benefit from the DAG architecture. Also, there is now only one writeable copy of each folder, so data integrity is more accurate.

This means a migration process must occur from legacy public folders to modern public folders. In this article we will highlight how to do that migration, and also how user connections to modern public folders change.

For this article we will leverage a simple two-server architecture.

PHDC-SOAEXC01 – 2010 Multirole Server hosting Legacy Public Folders

PHDC-SOAE16MBX2 – 2016 Server that will host the mailbox databases that will contain the Modern Public Folders

Download the Migration Scripts

First we need to download the pre-built migration scripts from Microsoft. Download them to C:\PFMigration on both your legacy public folder server and your Exchange 2016 Mailbox server.

Run Export Commands for Reference:

On the Legacy Public Folder Server, run the following commands:

Get-PublicFolder -Recurse | Export-CliXML C:\PFMigration\Legacy_PFStructure.xml
Get-PublicFolderStatistics | Export-CliXML C:\PFMigration\Legacy_PFStatistics.xml
Get-PublicFolder -Recurse | Get-PublicFolderClientPermission | Select-Object Identity,User -ExpandProperty AccessRights | Export-CliXML C:\PFMigration\Legacy_PFPerms.xml

This backs up the folder structure, statistics, and client permissions for reference. It will be useful if your migration does not go well and you need to see how things looked before the migration.

Check Folder Names

We need to check whether any folder in the hierarchy has a backslash "\" in its name. If one does, the migration will inadvertently place that folder in the root. For example, if we had a folder named "P\E Assets" that was a subfolder of the top-level folder "Departments", the migration would place the P\E Assets folder in the root alongside Departments instead of under it. To check, run the following command on the legacy public folder server:

Get-PublicFolderStatistics -ResultSize Unlimited | Where {$_.Name -like "*\*"} | Format-List Name, Identity

Manually rename any folder found here to a name without the backslash character.

Generate CSV Files

Next, we need to use the downloaded Microsoft scripts to generate two CSV files. The first one will be PFFolderStatistics.csv, which simply lists all of the folders and their respective sizes.

In the Exchange Management Shell on the Legacy Public Folder server, navigate to the C:\PFMigration folder and run the below command:

.\Export-PublicFolderStatistics.ps1 c:\PFMigration\PFFolderStatistics.csv legacyservername

Replace LegacyServerName with the name of your legacy public folder server, such as:


The script will output the number of folders and their statistics to a CSV file, like below:


It lists the path, and the folder size on the right-hand side.

The second script we run is the mailbox mapping script. It takes the PFFolderStatistics.csv file as input, along with the maximum size you want a public folder mailbox to be, and tells you how many public folder mailboxes you need and what their names should be.

For instance, in our environment, we don’t want any Public Folder mailbox to be over 1 GB in size.  So we run the following command:

.\PublicFolderToMailboxMapGenerator.ps1 1073741824 c:\PFMigration\PFFolderStatistics.csv c:\PFMigration\PFMailboxMapping.csv

The first input is the maximum size of each public folder mailbox in bytes; that is our 1073741824 value (1 GB). Note that we are doing this in a lab; in real deployments you will want your public folder mailbox maximum size to be larger. The next input is the path to the PFFolderStatistics.csv file we generated above. Finally, we provide the path the script should write our PFMailboxMapping.csv file to. Running the command looks like the following:


Note that if your maximum public folder mailbox size is smaller than any single folder, this script will return an error. Simply increase the maximum size until the error stops appearing.

The CSV file will look like the following:


Notice the left-hand side lists the folder paths, and the right-hand side the corresponding public folder mailbox each will be migrated into. The script uses generic names, so we see it lists Mailbox1 through Mailbox10. This means I need at least 10 public folder mailboxes for this migration.
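You can quickly confirm from the shell how many mailboxes the mapping calls for (this assumes the generated CSV uses a TargetMailbox column, as ours did):

```powershell
# Count the distinct target mailboxes in the mapping file.
Import-Csv C:\PFMigration\PFMailboxMapping.csv |
    Select-Object -ExpandProperty TargetMailbox -Unique |
    Measure-Object | Select-Object -ExpandProperty Count
```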

Also note that the names of the public folder mailboxes are important. If we leave the CSV file as is, our public folder mailbox names have to match these. In our case, we want to change our structure to use the following public folder mailboxes: PublicFolder-RootMailbox, and PublicFolder-NY-1 through PublicFolder-NY-9.
Note that the first public folder mailbox created will be known as the root public folder mailbox and will be responsible for keeping the hierarchy. The rest will replicate the hierarchy from this mailbox but are available for data storage. Since we are changing the names, we need to manually edit the CSV file to reflect that. Note that the root public folder mailbox must be assigned to the folder "\".

After editing, our CSV file looks like this:


Create the Public Folder Mailboxes

On the Exchange 2016 server, create the public folder mailboxes, ensuring they have the same name as in the CSV file:

New-Mailbox PublicFolder-RootMailbox -PublicFolder -HoldForMigration:$true
New-Mailbox PublicFolder-NY-1 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-2 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-3 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-4 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-5 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-6 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-7 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-8 -PublicFolder -IsExcludedFromServingHierarchy $true
New-Mailbox PublicFolder-NY-9 -PublicFolder -IsExcludedFromServingHierarchy $true

This is important: ensure that you have an email address policy that applies to the public folder mailboxes. If they do not receive an email address, the migration will fail, and users won't be able to connect to them later on. We'll explain why.

Start the Migration:

Copy the contents of C:\PFMigration to the same location on your Exchange 2016 server.  From an Exchange Management Shell, run the following command:

New-MigrationBatch -Name PFMigration -SourcePublicFolderDatabase (Get-PublicFolderDatabase -Server LegacyServerName) -CSVData (Get-Content C:\PFMigration\PFMailboxMapping.csv -Encoding Byte) -NotificationEmails [EMAILADDRESS] -BadItemLimit 1000

Replace LegacyServerName with the hostname of your 2010 public folder server, and supply the email address of whoever should be notified when the migration is done. The command in our setup looks like this:


Next, we can start it from the command line with:

Start-MigrationBatch PFMigration


Or by logging into the ECP and starting it from there:


Now we wait for the migration to reach the status of "Synced"; then we are ready to do the final cutover. Notice there is a migration for each public folder mailbox, in our case a total of 10.

Also remember that migration batches do incremental syncs every 24 hours, so you can start the initial sync in advance and let the system sync up daily.
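Progress can be checked from the shell as well:

```powershell
# Check the overall batch status and per-mailbox progress.
Get-MigrationBatch -Identity PFMigration
Get-MigrationUser -BatchId PFMigration | Get-MigrationUserStatistics
```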

Lock Down Migration (Downtime Starts!)

From the start of this next section, users will no longer be able to connect to public folders through Outlook. They will receive an error if they try to expand the public folder section in Outlook.

Once the migration reaches a status of Synced, you're ready for the next step:


Run the following command on the legacy Exchange server to lock the public folders for migration:

Set-OrganizationConfig -PublicFoldersLockedForMigration:$true


If you have multiple legacy Exchange servers, this can take some time to complete. During this time, all emails destined for mail-enabled public folders will queue until the migration is completed.

Next, run the following commands to complete the public folder sync. Note this will not yet release the folders to the end users:

Set-OrganizationConfig -PublicFoldersEnabled Remote
Complete-MigrationBatch PFMigration


Or you can complete the migration batch in the GUI:



Testing the Folder Hierarchy

Once the migration completes, you can unlock the hierarchy for a single test user, just to validate that the folder structure is there:

Set-Mailbox -Identity [TESTUSER] -DefaultPublicFolderMailbox PublicFolder-RootMailbox


If the folder structure looks good and you're ready to unlock for everyone, run the below set of commands:

Get-Mailbox -PublicFolder | Set-Mailbox -PublicFolder -IsExcludedFromServingHierarchy $false
Set-OrganizationConfig -PublicFolderMigrationComplete:$true
Set-OrganizationConfig -PublicFoldersEnabled Local


Modern Public Folder Connectivity

This brings us to an important part of the process: the connectivity changes for modern public folders. In previous versions of Exchange, an Outlook client made a direct connection to the public folder server; the Exchange server that accepted the connection for your mailbox informed the client how to connect to public folders. Starting with Exchange 2013, all traffic was forced through either Outlook Anywhere or MAPI/HTTP. With modern public folders, all connections to public folders, since they are essentially mailboxes, are handled just like a user's mailbox: through autodiscover!

Taking our example public folder mailboxes from above: notice how each one has an SMTP address. When a user connects to their mailbox, Outlook will actually perform two autodiscover requests: one against the domain of the user's own SMTP address, and a second against the domain of the public folder mailbox's SMTP address. This is why you need to ensure the public folder mailboxes have a valid SMTP address, and also that you have valid autodiscover records for that domain. Remember, if you set the email address to something internal, external users will not be able to access public folders!

If we use the Test Email AutoConfiguration tool in Outlook and check the XML output, we will see the public folder mailbox that the autodiscover response tells us to connect to:



Cannot Update Exchange 2013 after Installing Exchange 2016 BETA

Exchange 2013, Exchange 2016, Role Based Administration

I recently ran into an issue where I couldn't update my lab Exchange 2013 CU9 servers to Exchange 2013 CU10. I wanted to do so because Exchange 2016 had gone RTM, and one of the requirements for coexistence of Exchange 2016 and Exchange 2013 is for Exchange 2013 to be running CU10. One thing to note is that I had previously installed the Exchange 2016 BETA into my lab setup.

The error I got from the Exchange setup program was:

[11/06/2015 21:40:05.0247] [2] [ERROR] The given key was not present in the dictionary.
[11/06/2015 21:40:05.0247] [2] [WARNING] An unexpected error has occurred and a Watson dump is being generated: The given key was not present in the dictionary.
[11/06/2015 21:40:06.0122] [1] The following 1 error(s) occurred during task execution:
[11/06/2015 21:40:06.0122] [1] 0.  ErrorRecord: The given key was not present in the dictionary.
[11/06/2015 21:40:06.0122] [1] 0.  ErrorRecord: System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at Microsoft.Exchange.Data.Directory.SystemConfiguration.ExchangeRole.StampImplicitScopes()
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.PrepareRoleForUpgradeAndGetOldSortedEntries(ExchangeRole roleToUpgrade, Boolean isDeprecated)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.UpdateCannedRole(ExchangeRole existingRole, ExchangeRole cannedRole, RoleDefinition roleDefinition)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.CreateOrUpdateRole(RoleNameMapping mapping, RoleDefinition definition, List`1 enabledPermissionFeatures, String suffix, String mailboxPlanIndex)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.CreateOrUpdateRole(RoleNameMapping mapping, RoleDefinition definition, List`1 enabledPermissionFeatures)
   at Microsoft.Exchange.Management.Tasks.NonDeprecatedRoleUpgrader.UpdateRole(RoleDefinition definition)
   at Microsoft.Exchange.Management.Tasks.InstallCannedRbacRoles.UpdateRolesInOrg(RoleNameMappingCollection mapping, RoleDefinition[] roleDefinitions, ServicePlan servicePlan)
   at Microsoft.Exchange.Management.Tasks.InstallCannedRbacRoles.InternalProcessRecord()
   at Microsoft.Exchange.Configuration.Tasks.Task.<ProcessRecord>b__b()
   at Microsoft.Exchange.Configuration.Tasks.Task.InvokeRetryableFunc(String funcName, Action func, Boolean terminatePipelineIfFailed)
   at Microsoft.Exchange.Configuration.Tasks.Task.ProcessTaskStage(TaskStage taskStage, Action initFunc, Action mainFunc, Action completeFunc)
   at Microsoft.Exchange.Configuration.Tasks.Task.ProcessRecord()
   at System.Management.Automation.CommandProcessor.ProcessRecord()
[11/06/2015 21:40:06.0169] [1] [ERROR] The following error was generated when “$error.Clear();
          if ($RoleDatacenterFfoEnvironment -eq “True”)
            Install-CannedRbacRoles -InvocationMode $RoleInstallationMode -DomainController $RoleDomainController -IsFfo
            Install-CannedRbacRoles -InvocationMode $RoleInstallationMode -DomainController $RoleDomainController
        ” was run: “System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at Microsoft.Exchange.Data.Directory.SystemConfiguration.ExchangeRole.StampImplicitScopes()
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.PrepareRoleForUpgradeAndGetOldSortedEntries(ExchangeRole roleToUpgrade, Boolean isDeprecated)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.UpdateCannedRole(ExchangeRole existingRole, ExchangeRole cannedRole, RoleDefinition roleDefinition)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.CreateOrUpdateRole(RoleNameMapping mapping, RoleDefinition definition, List`1 enabledPermissionFeatures, String suffix, String mailboxPlanIndex)
   at Microsoft.Exchange.Management.Tasks.RoleUpgrader.CreateOrUpdateRole(RoleNameMapping mapping, RoleDefinition definition, List`1 enabledPermissionFeatures)
   at Microsoft.Exchange.Management.Tasks.NonDeprecatedRoleUpgrader.UpdateRole(RoleDefinition definition)
   at Microsoft.Exchange.Management.Tasks.InstallCannedRbacRoles.UpdateRolesInOrg(RoleNameMappingCollection mapping, RoleDefinition[] roleDefinitions, ServicePlan servicePlan)
   at Microsoft.Exchange.Management.Tasks.InstallCannedRbacRoles.InternalProcessRecord()
   at Microsoft.Exchange.Configuration.Tasks.Task.<ProcessRecord>b__b()
   at Microsoft.Exchange.Configuration.Tasks.Task.InvokeRetryableFunc(String funcName, Action func, Boolean terminatePipelineIfFailed)
   at Microsoft.Exchange.Configuration.Tasks.Task.ProcessTaskStage(TaskStage taskStage, Action initFunc, Action mainFunc, Action completeFunc)
   at Microsoft.Exchange.Configuration.Tasks.Task.ProcessRecord()
   at System.Management.Automation.CommandProcessor.ProcessRecord()”.
[11/06/2015 21:40:06.0169] [1] [ERROR] The given key was not present in the dictionary.
[11/06/2015 21:40:06.0169] [1] [ERROR-REFERENCE] Id=361422192 Component=
[11/06/2015 21:40:06.0169] [1] Setup is stopping now because of one or more critical errors.
[11/06/2015 21:40:06.0169] [1] Finished executing component tasks.
[11/06/2015 21:40:06.0201] [1] Ending processing Install-ExchangeOrganization
[11/06/2015 21:40:06.0201] [0] CurrentResult console.ProcessRunInternal:198: 1
[11/06/2015 21:40:06.0201] [0] CurrentResult launcherbase.maincore:90: 1
[11/06/2015 21:40:06.0201] [0] CurrentResult console.startmain:52: 1
[11/06/2015 21:40:06.0201] [0] CurrentResult SetupLauncherHelper.loadassembly:452: 1
[11/06/2015 21:40:06.0201] [0] The Exchange Server setup operation didn’t complete.  More details can be found in ExchangeSetup.log located in the <SystemDrive>:\ExchangeSetupLogs folder.
[11/06/2015 21:40:06.0216] [0] CurrentResult 1
[11/06/2015 21:40:06.0216] [0] CurrentResult setupbase.maincore:396: 1
[11/06/2015 21:40:06.0216] [0] End of Setup
[11/06/2015 21:40:06.0216] [0] **********************************************


I opened up ADSIEDIT, connected to the Configuration partition, and navigated to Services->Microsoft Exchange->Organization Name->RBAC->Roles->Apps




Under Roles, there will be an existing entry named My ReadWriteMailbox Apps.


Delete this entry, or rename it (I renamed it to My ReadWriteMailbox Apps2).


After that, I re-ran Exchange 2013 CU10 Setup and everything completed as expected.  If you renamed the object above, you’ll see that setup recreated the proper role:




Notice I have my duplicated entry that I renamed alongside my newly created one.
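If you'd rather check from the Exchange Management Shell than ADSIEDIT, something like the following should surface the duplicate (the role name here matches my lab; adjust to whatever setup complained about in your log):

```powershell
# List any copies of the app role that setup choked on.
# After the fix and re-run, you should see the recreated role
# alongside the renamed duplicate (if you renamed rather than deleted).
Get-ManagementRole | Where-Object { $_.Name -like "My ReadWriteMailbox Apps*" } |
    Format-Table Name, RoleType, WhenCreated -AutoSize
```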

Configuring Cross Organization Free Busy Information in Exchange 2013


There are many times when you will have an outside organization with which you wish to share free/busy information.  Starting with Exchange 2010, Microsoft introduced the ability to use the Microsoft Federation Gateway to facilitate sharing free/busy information across multiple organizations.  While that solution works really well, there are times when it might not be the best fit.  For instance, you may have a customer in the middle of a large-scale migration, or a partner company that you have connectivity to.  In these situations, you can use the direct method of establishing and configuring free/busy sharing.  This method does require the two Exchange organizations to have some level of connectivity to each other.  There are two direct methods that you can use. 

You can leverage the forest trust model, which means you have a forest trust between the two Exchange organizations.  This implies a high level of network connectivity and integration between the two organizations.  The benefit of this method is that you can assign per-user permissions, meaning Paul from Forest A can have expanded free/busy rights to Jon from Forest B’s mailbox. 

The second option is the non-trusted forest model.  This means there is no forest trust between the networks; instead, a general service account is leveraged to secure the permissions.  Free/busy access is then assigned organization wide, meaning you lack the granularity to assign different permissions per user.

I’ll walk through and explain both methods.  For our lab, we have the E15.corp forest, running Exchange 2013, and SOA.corp running Exchange 2013.

Trusted Forest Mode:

So the first thing we need to do is grant the mailbox servers in each forest the ms-Exch-EPI-Token-Serialization right on the remote forest’s mailbox servers.  If this is an Exchange 2010 or Exchange 2007 environment, substitute CAS servers for mailbox servers.

In the SOA.corp forest run:

Get-MailboxServer | Add-ADPermission -AccessRights ExtendedRight -ExtendedRights "ms-Exch-EPI-Token-Serialization" -User "E15.corp\Exchange Servers"


In the E15.corp forest run:

Get-MailboxServer | Add-ADPermission -AccessRights ExtendedRight -ExtendedRights "ms-Exch-EPI-Token-Serialization" -User "soa.corp\Exchange Servers"


So now the mailbox servers from each forest have the permissions necessary. 
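To sanity-check that the right was actually stamped, you can read the permissions back; this is a sketch run from the SOA.corp forest, using the group name from the lab above:

```powershell
# Confirm the far-side Exchange Servers group now holds the
# token-serialization right on each local mailbox server.
Get-MailboxServer | Get-ADPermission -User "E15.corp\Exchange Servers" |
    Where-Object { $_.ExtendedRights -like "*Token-Serialization*" } |
    Format-Table Identity, User, ExtendedRights -AutoSize
```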

Next, in each forest, you need to add the availability address space so that Exchange knows to query that namespace for availability.  Here is also where we tell Exchange that we will use the per-user format.  In the SOA.corp domain run:

Add-AvailabilityAddressSpace -ForestName e15.corp -AccessMethod PerUserFB -UseServiceAccount:$true


For the ForestName parameter, mine is e15.corp because it’s a test lab, but this should be the SMTP namespace of the far-side domain.

Then, in the E15.corp domain run the following:

Add-AvailabilityAddressSpace -ForestName soa.corp -AccessMethod PerUserFB -UseServiceAccount:$true
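Either way, you can confirm the address space registered correctly before moving on; a quick check from either forest:

```powershell
# Verify the address space exists, points at the far-side namespace,
# and is set to the per-user access method with the service account.
Get-AvailabilityAddressSpace | Format-List ForestName, AccessMethod, UseServiceAccount
```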



Next, we need to tell the soa.corp domain how to retrieve free/busy information from the e15.corp domain.  By default, it will use DNS and attempt autodiscover.e15.corp, so ensure your source forest can resolve that address and trusts the certificates of the far-side domain.

You can also export the Service Connection Point from one side and import it into the other.

In the E15.corp domain, to export its autodiscover information to the SOA.corp domain, run:

Export-AutodiscoverConfig -TargetForestDomainController "phdc-soadc01.soa.corp" -TargetForestCredential (Get-Credential) -MultipleExchangeDeployments $true

Replace the TargetForestDomainController value with the FQDN of your far-side domain controller, and then enter credentials for the destination forest:



In the SOA.corp domain, run the same command, but target the E15.corp domain:

Export-AutodiscoverConfig -TargetForestDomainController "phdc-e15dc01.e15.corp" -TargetForestCredential (Get-Credential) -MultipleExchangeDeployments $true


You can check the SCP in your local domain by opening ADSIEDIT and connecting to the Configuration Partition, then browsing to Services->Microsoft Exchange Autodiscover

This is a screen shot from soa.corp showing the SCP info for E15.corp:


Untrusted Forest Mode:

If you don’t have a forest trust between the two forests, you can still share free/busy information, but only on an organization-wide basis.

In the E15.corp forest, create a user named FreeBusyE15@e15.corp.  This can be a regular user account with no mailbox.

In E15.corp run the following:

Set-AvailabilityConfig -OrgWideAccount e15.corp\freebusye15

In the SOA.corp forest, create a user named FreeBusySOA@soa.corp.  This can be a regular user account with no mailbox.

In SOA.corp run the following:

Set-AvailabilityConfig -OrgWideAccount soa.corp\freebusySOA

Next, in the E15 forest, run the following command to configure the availability address space for SOA.corp.  This will leverage the account in the SOA.corp domain to do so:

$a = Get-Credential soa.corp\freebusySOA
Add-AvailabilityAddressSpace -ForestName soa.corp -AccessMethod OrgWideFB -Credentials $a

Next, in the SOA forest, run the following command to configure the availability address space for E15.corp.  This will leverage the account in the E15.corp domain to do so:

$a = Get-Credential e15.corp\freebusyE15
Add-AvailabilityAddressSpace -ForestName e15.corp -AccessMethod OrgWideFB -Credentials $a
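Before testing, it's worth confirming both halves of the untrusted-forest config took effect; a quick check in each forest:

```powershell
# The org-wide account the far side will authenticate as:
Get-AvailabilityConfig | Format-List OrgWideAccount

# The address space should show the far-side namespace with OrgWideFB:
Get-AvailabilityAddressSpace | Format-List ForestName, AccessMethod
```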

Syncing the GAL:

So, you need to sync the GAL to make contact objects available across both domains.  Meaning, the mailboxes of E15.corp get created as mail contacts in SOA.corp, and vice versa.  This allows your users to select them from the GAL and perform free/busy lookups.

For the sake of our article, I’ll manually create a mailbox called pponzeka@e15.corp in E15.corp and then manually create a mail contact for pponzeka@e15.corp in SOA.corp.  Then I’ll test with an SOA mailbox to see if I can perform cross-forest free/busy lookups.
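In production you'd use a directory sync tool (FIM/MIM with GALSync, for example) to keep the contacts in step, but for a one-off test the contact can be created from the shell. A sketch, run in the SOA.corp forest; the OU is a hypothetical placeholder:

```powershell
# Represent the E15.corp mailbox as a mail contact in the SOA.corp GAL
# so users can pick it from the address book for free/busy lookups.
New-MailContact -Name "Paul Ponzeka" `
    -ExternalEmailAddress "pponzeka@e15.corp" `
    -OrganizationalUnit "soa.corp/Users"
```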

In the pponzeka@e15.corp mailbox, I’ll create a meeting for tomorrow between 3 PM and 5 PM named E15 Corporate Meeting:



We open Outlook for the user, very imaginatively named Soa-User1, go to create a new appointment, and select the Scheduling Assistant.  We click Add Attendees and see that we have a Paul Ponzeka with the globe icon, indicating he is a mail contact:


And confirmed!  Working cross forest free/busy:



Exchange 2013 Error: This computer is a member of a cluster


Had an issue in the lab today.  I went to uninstall an Exchange 2013 server that I had previously removed from a DAG, yet setup kept failing with the error “this computer is a member of a cluster”, even though it was not.  It was an easy enough fix: I ran cluster.exe node /force and then tried the uninstall again.

Worked like a charm.
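For reference, on Windows Server 2012 and later the same cleanup is available through the FailoverClusters PowerShell module; something like this on the affected node should clear the stale state:

```powershell
# Remove leftover failover-cluster configuration from the local node,
# so Exchange setup no longer believes the server is a cluster member.
# -Force skips the confirmation prompt.
Clear-ClusterNode -Force
```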

Configure an Exchange 2013 DAG on Windows Server 2012 R2 With No Administrative Access Point

DAG, Exchange 2013, High Availability

Exchange 2013 SP1 introduced support for Windows Server 2012 R2, and with it support for a new Windows Server 2012 R2 feature: failover clusters without an administrative access point.  You can now create a DAG that does not need separate IPs on each subnet for the DAG itself.  It also no longer creates the CNO, which is seen as the computer account in Active Directory.  The benefit of this feature is that you reduce complexity, no longer need to manage the computer account for the DAG, and no longer need to assign IP addresses for each subnet on which the cluster operates.  There are some downsides, but they shouldn’t affect Exchange admins much.  Mainly, since there is no IP address and no CNO, you cannot use the Windows Failover Cluster admin tools to connect to it; you need to use PowerShell locally against a cluster node directly.  With Exchange, this shouldn’t be much of a problem, as almost all management of the cluster is handled with Exchange tools through the DAG itself.

In our example, we have two servers in separate AD sites that we are going to configure in our DAG:



We will create a DAG named SOA-DAG-2013.  Previously, this would be the name of the CNO that Exchange would create underneath.  It is now essentially a label stamped on all the nodes for management, and no longer creates a CNO.

If we log in to the EAC and navigate to Servers->Database Availability Groups, we can create the DAG by clicking the plus sign:


Enter the information for the DAG, and remember to specify your witness server.  It should be another Exchange 2013 server in your primary datacenter location that is not also a member of the DAG.  We will specify one IP address of


If we are doing this in PowerShell, the syntax is different:

New-DatabaseAvailabilityGroup -Name SOA-DAG-2013 -DatabaseAvailabilityGroupIPAddresses ([System.Net.IPAddress]::None) -WitnessServer NYDC-SOAE13CAS1.soa.corp -WitnessDirectory c:\WitnessDirectory\SOA-DAG-2013




Now, from here, building the DAG follows the same steps as usual.  Let’s add the mailbox servers to the DAG.  If you don’t already have Windows Failover Clustering installed, these steps will install it for you.

From the EAC, under Database Availability Groups, select the DAG name and click the server-with-gearbox icon:












Add your servers to the DAG and click Save:



From the Exchange Management Shell:

Add-DatabaseAvailabilityGroupServer -Identity SOA-DAG-2013 -MailboxServer SFDC-SOAE13MBX2


And you’re all set.  The DAG has been configured with no administrative access point. 

If we check the properties of the DAG in the EAC, we can see the IP address is listed as



And even though we had that string in the PowerShell command, if we check the IP address in PowerShell, we only have listed as an IP address:
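The check itself is a one-liner; a sketch using the DAG name from this lab:

```powershell
# An IP-less DAG reports for its DAG IP addresses,
# since [System.Net.IPAddress]::None resolves to that value.
Get-DatabaseAvailabilityGroup SOA-DAG-2013 |
    Format-List Name, DatabaseAvailabilityGroupIpAddresses, WitnessServer
```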



Exchange 2010 Archive Mailbox and Retention Policies–Part 2

DAG, Exchange 2010

Well, a really long time in the making, but this is one of my most popular articles.  With 2013 out there, I figured I would finish this off and then add a part 3 that shows a quick rundown of how to do the same thing with 2013. 

So, we have our archive mailbox created.  Now we want to assign a policy so that we perform some automated actions, and give users the ability to also make some changes.  There are a ton of posts out there on the mechanics of how the Exchange archive system works, so I won’t revisit that; instead, I’ll try to cover it with a more real-world example.  For this example, we want to assign our users a policy that performs the following:


  1. Users should have the ability to tag emails to move to archive ASAP
  2. Users should have the ability to tag emails to move to archive if they are older than 30 days
  3. Users should have the ability to tag emails to be deleted older than one week
  4. Users should have the ability to mark emails to never be archived
  5. All Emails in the sent items are deleted after 30 days
  6. All Emails older than 90 days are automatically moved to archive if another policy doesn’t apply

It should be noted that the delete and never-delete actions work on any mailbox, while the archive options require an archive mailbox to be enabled for the user.  If an archive mailbox is not enabled, the archive policies have no effect.

Now, if you look at the above, a common question that pops up is around the Never Archive option.  If users have this ability, won’t they be able to completely override the archive setup and store everything in their mailbox?  The answer is technically yes, but if you combine your archive mailboxes with mailbox limits, then users will hit a point where they can no longer send and/or receive messages and are forced to archive. 

So, next we need to create the archive policy and the archive tags.  Real quick: each email or folder can only have one tag assigned to it.  Emails and folders inherit their parent folder’s tag, but it can be overridden.  The process that handles the tags on items is the Managed Folder Assistant.  The assistant checks each item for tags; if the item doesn’t have a tag explicitly set on it, the assistant checks the parent folder for the appropriate tag.  Once it finds a tag, it takes the action that tag prescribes.  So, let’s create the needed tags for our example above.  Navigate to Organization Configuration->Mailbox->Retention Policy Tags.  Click New Retention Policy Tag and you’ll be presented with the following screen:


So, let’s create the first tag: move items to archive ASAP.  Since there is no ASAP setting, we will set the Age Limit to 1 day and change the action to Move to Archive.  The next thing to change is the Tag Type.  If you are giving users the option to set the tag themselves, it should always be a Personal Tag.  The other tag types are scoped to a specific folder type; we will cover this later.  So our configuration looks like the following:


Create the rest of the tags, which should have the same settings, just a different name and age limit.  The only one that is different is Never Archive.  Here is the config for that:



This sets the tag to never take action. 
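If you prefer the shell to the EMC, the same personal tags can be created with New-RetentionPolicyTag; the tag names below are just my labels for this example:

```powershell
# Personal tags the users can apply themselves (Type Personal).
New-RetentionPolicyTag "Archive ASAP"    -Type Personal -RetentionEnabled $true  -AgeLimitForRetention 1  -RetentionAction MoveToArchive
New-RetentionPolicyTag "Archive 30 Days" -Type Personal -RetentionEnabled $true  -AgeLimitForRetention 30 -RetentionAction MoveToArchive
New-RetentionPolicyTag "One Week Delete" -Type Personal -RetentionEnabled $true  -AgeLimitForRetention 7  -RetentionAction DeleteAndAllowRecovery

# Never Archive: a disabled tag means the assistant takes no action.
New-RetentionPolicyTag "Never Archive"   -Type Personal -RetentionEnabled $false -RetentionAction MoveToArchive
```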

So, next are the specific folder actions, such as the Sent Items delete-after-30-days tag.  The difference here is that we change the Tag Type to Sent Items:



And for the last step, the catch-all: if another policy doesn’t apply and the email is older than ninety days, move it to the archive:


Here, we change the Tag Type to All Other Folders in the Mailbox.

Something to note: there can only ever be one specific-folder tag per folder type within a particular policy.  In the next step, we will create our policy and assign it to the users.  We can only include one tag per specific folder, meaning if we had two tags that both targeted Sent Items, we could not include them in the same policy.

So, let’s create the policy.  Navigate to Organization Configuration->Mailbox->Retention Policies

Create a New Retention Policy, and give it a descriptive name.  Add the tags we just created:


On the next screen, you can select mailboxes to assign this policy to:


Then create the policy.
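The shell equivalent bundles the tags into a policy and assigns it; the policy and tag names below are hypothetical labels matching the example tags above:

```powershell
# Link the tags into one retention policy.
New-RetentionPolicy "Standard Archive Policy" -RetentionPolicyTagLinks `
    "Archive ASAP","Archive 30 Days","One Week Delete","Never Archive",`
    "Sent Items 30 Day Delete","Default 90 Day Archive"

# Assign the policy to a mailbox.
Set-Mailbox pponzeka -RetentionPolicy "Standard Archive Policy"
```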

We can also assign a policy to a specific user by going to Mailbox->Properties->Mailbox Settings->Messaging Records Management, and selecting and applying a retention policy:


So then you wait for the Exchange server to apply the policies.  Remember, Exchange 2010 does this on a work-cycle basis: Exchange is told to complete the task of tagging and moving to archive at least x times in y days.  You can check your server by running the command:

Get-MailboxServer -Identity SERVERNAME | Select *ManagedFolderWork*


This should get you a completed run at least once per day.  You can also run it manually against a mailbox with the command Start-ManagedFolderAssistant usersaccount:


Note that it can take more than one run for this to work, as the assistant needs to go through first and tag the items; the second run will then take action on those items.  Now let’s look at what the client sees.  Keep in mind you can see it both from Outlook 2010 (and later) and OWA:

In Outlook, the user right-clicks on a folder and goes to the Policy tab.  Here the user will see two drop-downs, one for Retention and one for Online Archive:


The default policies, such as the Sent Items tag and the move-to-archive-over-ninety-days tag, the user will never see; they will only see personal tags.  So let’s say I want to set this folder to Never Archive:


I change the Online Archive policy to Never.  If I want the policy to delete everything in the folder and subfolders after one week, I change the Retention Policy to One Week Delete:


Look for my Exchange 2013 one, hopefully in a shorter time frame than it took for Part 2!