Tag Archives: disaster recovery

Witness Server Boot Time, GetDagNetworkConfig and the pain of Exchange 2010 DR Tests

Exchange 2010, High Availability

 

So we recently had a client who wanted to perform a DR test of their Exchange 2010 DAG.  The DAG consisted of a single all-in-one server in production and a single all-in-one server in DR.  The procedure for this test was to disconnect all network connectivity between prod and DR, shut down the Exchange server and the domain controller, snapshot them, and then start them back up.

Now, we can all agree that snapshots and domain controllers are inherently dangerous, so it's up to you to have your ducks in a row and make sure none of this replicates back to production.  That discussion is outside this article.

Now, initially they had trouble bringing up the databases in DR, as well as many components of the DAG.  This article will walk through an example, and try to make sense of what’s causing these issues.

So, here is our setup, we have a two node DAG cluster, stretched across two sites. 

Production

PHDC-SOAEXC01 – Prod all in one Exchange Server

PROD-DC01 – Prod domain controller

PHDC-SOADC01 – Primary witness server

DR

SFDC-SOAEXC01 – DR all in one Exchange Server

DR-DC01 – DR domain controller

SFDC-SOADC01 – Alternate witness server

The DAG name is SOA-DAG-01 and the Active Directory Sites are:

Prod = PH

DR = SF

So in our scenario, we shut down both PHDC-SOAEXC01 and PHDC-SOADC01.  This causes the databases in DR to dismount because the DR server has lost quorum.

Now, in a DR "test", we would shut down the DR Exchange server and the DR domain controller to snapshot them.  I just want to warn you: DO NOT EVER roll a domain controller back to a snapshot in a production environment.  This is a purely hypothetical setup.  Rant over.

Now, in our case, we have snapshotted and rebooted DR-DC01 and SFDC-SOAEXC01.  When we open the Exchange Management Console, we see that the DR server's databases are in a failed state.


Now, let's start running through the DR activation steps.  Here is what the process should normally be:

  1. Stop the mailbox servers in the prod site
  2. Stop the cluster service on all mailbox servers in the DR site
  3. Restore the mailbox servers in the DR site, evicting the prod servers from the cluster

After step 3, the databases should mount, but as you will see, they won't, and I'll try to explain why.

So, step 1, let's mark the prod servers as down:

Stop-DatabaseAvailabilityGroup SOA-DAG-01 -ActiveDirectorySite PH -ConfigurationOnly

You should expect to see some errors; this is completely expected because the prod site is unavailable, hence the -ConfigurationOnly option.


Now, step 2, we will stop the clustering service on SFDC-SOAEXC01 with the PowerShell command:

Stop-Service ClusSvc

Now, step 3, we will restore the DAG with just the servers in DR:

Restore-DatabaseAvailabilityGroup SOA-DAG-01 -ActiveDirectorySite SF

You may get an error stating

Server 'PHDC-SOAEXC01' in database availability group 'SOA-DAG-01' is marked to be stopped, but couldn't be removed from the cluster. Error: A server-side database availability group administrative operation failed. Error: The operation failed. CreateCluster errors may result from incorrectly configured static addresses. Error: An error occurred while attempting a cluster operation. Error: Cluster API 'EvictClusterNodeEx('PHDC-SOAEXC01.SOA.corp') failed with 0x46.'

Simply re-run the command and it should complete.


So now, we should have the databases mounted, and we should be able to see the prod servers as stopped by running the following command:

Get-DatabaseAvailabilityGroup -Status | FL

But behold, we get an error stating "GetDagNetworkConfig failed on the server. Error: The NetworkManager has not yet been initialized."


So, here is the first roadblock.  Since the DR server is a single node, it uses the boot time of the alternate file share witness to determine whether it is allowed to form quorum.  A one-node cluster always has quorum on its own, so this check exists to prevent split brain.  Tim McMichael does a great job of explaining it in his blog post.  Essentially, the boot time is stored in the registry of the Exchange server under:

HKEY_LOCAL_MACHINE\Software\Microsoft\ExchangeServer\v14\Replay\Parameters

If the Exchange server was rebooted more recently than the AFSW, it will not form quorum.  So how do we fix this?  We can start by rebooting the AFSW to see what behavior changes.
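If you want to compare the boot times yourself, here's a minimal sketch run from the DR Exchange server.  The exact value names under the Replay\Parameters key aren't listed here, so the first line just dumps everything stored in that key; the WMI queries show the actual last boot time of the AFSW (SFDC-SOADC01 in our example) and of the Exchange server:

# Dump whatever Exchange has stored under the Replay Parameters key
Get-ItemProperty "HKLM:\Software\Microsoft\ExchangeServer\v14\Replay\Parameters"

# Actual last boot time of the alternate file share witness
Get-WmiObject Win32_OperatingSystem -ComputerName SFDC-SOADC01 | Select CSName,@{n="LastBoot";e={$_.ConvertToDateTime($_.LastBootUpTime)}}

# Actual last boot time of this Exchange server
Get-WmiObject Win32_OperatingSystem | Select CSName,@{n="LastBoot";e={$_.ConvertToDateTime($_.LastBootUpTime)}}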

After we do so, we can re-run:

Get-DatabaseAvailabilityGroup -Status | FL

Now, we get the network and stopped servers info, but some entries are in a broken state, and we get a message that the DAG witness is in a failed state.


Note that the WitnessShareInUse field reports InvalidConfiguration.

We have to re-run our Restore-DatabaseAvailabilityGroup command to resolve this:

Restore-DatabaseAvailabilityGroup SOA-DAG-01 -ActiveDirectorySite SF

Now if we re-run Get-DatabaseAvailabilityGroup -Status | FL, we get the expected output.


Now, we see that the WitnessShareInUse is set to the alternate.

So, are the databases mounted?!  If we check, they are no longer failed, but are "Disconnected and Resyncing".


We need to force the server in DR to start because of the single node quorum issue.  This can be done with the following command:

Start-DatabaseAvailabilityGroup SOA-DAG-01 -ActiveDirectorySite SF

Now the database is mounted:


So, as you can see, the way the test is carried out affects what happens during the DR test, but the single node cluster setup itself can also cause this issue.  The boot time of the alternate file share witness is extremely important to what the node can do when it restarts.

Hopefully you find the info useful!  Happy Holidays to all!

Exchange 2010 Databases Fail to Replicate or Seed

Exchange 2010, High Availability

 

Recently a colleague of mine ran into an issue with an Exchange 2010 migration.  He could fail over the mailbox databases to DR with no issue, but that's where the trouble started.

The production database started to report a high copy queue length that kept increasing as more activity occurred on the DB.  Pure and simple, the production database was not receiving the transaction logs from the newly activated database in DR.

The setup was simple: two nodes, one in production and one in DR, with an FSW.  The nodes were all-in-one 2010 boxes, with one NIC for MAPI and one NIC for replication.

My colleague also informed me that he had some trouble initially seeding the database.  All roads pointed to an issue with replication.  We quickly checked his replication network settings and found the following setup:


His DAG network had both replication networks for the separate sites under one object.  Once we moved them into their own separate networks, everything went back to normal!
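For reference, here's a rough sketch of how the replication networks could be split into their own DAG network objects.  The DAG name, network names and subnets below are made up for illustration; adjust them to your environment:

# Create a dedicated replication network for each site (example names/subnets)
New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG1 -Name "Replication-Prod" -Subnets 10.1.2.0/24 -ReplicationEnabled:$true
New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG1 -Name "Replication-DR" -Subnets 10.2.2.0/24 -ReplicationEnabled:$true

# Verify the resulting layout
Get-DatabaseAvailabilityGroupNetwork DAG1 | FL Name,Subnets,ReplicationEnabled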

Till the next time!

How to Manage a Datacenter Failure or Disaster Recovery Scenario in Exchange 2010 – Part 3

Exchange 2010, High Availability

In Part 3 of this series, we are going to discuss how to fail back to your primary site after whatever condition caused it to go offline in the first place has been resolved.

So when we last left off, we had activated all of our databases in our disaster recovery site. I showed you how we had to stop the mailbox servers in the primary Active Directory site, which then allowed us to activate our disaster recovery site.

So now, let's say the flood that occurred in the New York site has been fixed, all the water's been removed, and thankfully, it did not damage our equipment. We are ready to move everything back to NYC. We begin by powering on the servers in NY. Since our DAG is in DAC mode, the NY servers do not try to mount their copies of the databases. Instead, they begin copying over any log files so as to bring their copies of the databases up to date with the DR site. Wait until all the databases are reporting a status of Healthy.
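A quick way to watch this from the shell (just a sketch, using the server names from our example environment):

Get-MailboxDatabaseCopyStatus -Server NYMB01 | FL Name,Status,CopyQueueLength,ReplayQueueLength
Get-MailboxDatabaseCopyStatus -Server NYMB02 | FL Name,Status,CopyQueueLength,ReplayQueueLength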

Now, remember, we told the DAG that the NY servers were down with the Stop-DatabaseAvailabilityGroup command from Part 2:

Notice how NYMB01 and NYMB02 are both listed as Stopped Mailbox Servers. You will get this error in the console if you try to activate the database on one of those two servers:

What we have to do is start these servers in the DAG, making them viable copies for activation again. We do this with the Start-DatabaseAvailabilityGroup command. Again, we could use the -MailboxServer parameter and specify each server, separating them with a comma, or we can specify the whole site with the -ActiveDirectorySite option as below:

Start-DatabaseAvailabilityGroup DAG1 -ActiveDirectorySite NYC

Now if we run the Get-DatabaseAvailabilityGroup command, all servers are listed as started
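The same check we used in Part 2 confirms it:

Get-DatabaseAvailabilityGroup | Select Name,*server*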

Now, Microsoft recommends dismounting the databases that are going to be moved back to the primary datacenter site. This will mean that you will need to get a maintenance window to perform this action, as the databases will be offline. Keep in mind that you do not NEED to dismount the databases; it is just recommended by Microsoft.
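If you do take that maintenance window, a one-liner along these lines would do it (just a sketch, and note it dismounts every database, so scope it down if that's not what you want):

Get-MailboxDatabase | Dismount-Database -Confirm:$false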

Now, we can activate the database. You can do this either by using the shell with the Move-ActiveMailboxDatabase MDB01 -ActivateOnServer NYMB02 command:

Or through the EMC with the Activate a Database Copy Wizard:

Now, the database should be activated. One issue that I have seen pop up is that the Catalog Index is corrupted. This is the index for the Full Text Index Search. If this is the case, you may have to reseed the Catalog Index with the following command:

Update-MailboxDatabaseCopy MDB02\NYMB02 -SourceServer DRMB01 -CatalogOnly

This command will reseed the content index from the server DRMB01. You should now be able to activate the database copy.

Continue by moving all active database copies to their respective servers. If you dismounted the databases as stated above, now mount these databases.
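As a sketch, for our example environment this could look like the following (assuming each database is activated back on whichever NY server normally hosts it, and with a mount at the end only if you dismounted earlier):

# Repeat for each database, activating it on its usual NY server
Move-ActiveMailboxDatabase MDB01 -ActivateOnServer NYMB01
Move-ActiveMailboxDatabase MDB02 -ActivateOnServer NYMB02

# Only needed if you dismounted the databases for the maintenance window
Get-MailboxDatabase | Mount-Database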

Now, our clients are still connected, but they are still connecting through DRHT01.nygiants.com because it is set as the RpcClientAccessServer:


We simply need to run the command Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer NYHT01.nygiants.com to change this setting over:

If we check the properties of the databases, we see that NYHT01.nygiants.com is again set as the Rpc Client Access server:

Now, our Outlook clients will be connecting through NYHT01 for their access:

Well, that’s it!

In this three part series, I showed you how to activate the backup datacenter in the event that the primary datacenter became unavailable. We discussed the theory behind the process, mainly the Datacenter Activation Coordinator, the actual work needed to do it, as well as how to fail back when the primary site came back online.

If you have any questions, feel free to email me at ponzekap2 at gmail dot com.

How to Manage a Datacenter Failure or Disaster Recovery Scenario in Exchange 2010 – Part 2

Exchange 2010, High Availability

 

In the first article of this series, we went over some of the premises of Exchange 2010, Database Availability Groups or DAGs, and the Datacenter Activation Coordinator.  We discussed our test environment, as well as how, in theory, an Exchange 2010 DAG handles the failure of a datacenter.

In this article, I'll show you how to actually activate your disaster recovery site, should your primary site go down.

One of the first things we need to take into consideration is how the clients, most importantly Outlook, connect to the DAG.  We spoke in the first article about how there is a new service on Exchange 2010 Client Access Servers, called the Microsoft Exchange RPC Client Access Service.  Outlook clients now connect to the Client Access Server, and the Client Access Server connects to the Mailbox Server.  This means Outlook clients don't connect directly to the Mailbox Servers anymore.  This becomes significant in a disaster recovery situation.

Let's look at the output of the command:

Get-MailboxDatabase | Select Name,*rpc*


The RpcClientAccessServer value on a particular Mailbox Database indicates that connections to this database are passed through the Client Access Server listed.  So, for all the databases above, all Outlook connections have to go through NYHT01.nygiants.com.  (If you have more than one Client Access Server per site, which you should for redundancy and load balancing, you can create a cluster using either Windows Network Load Balancing or a hardware load balancer, and change this value to point to the cluster host name.)  Let's look at our Outlook client, and we notice that all connections are being passed to NYHT01.nygiants.com:


Alright, now, we are ready to start failing over servers!  My test user, Paul Ponzeka, is located on a mailbox database named MDB02 which is running on server NYMB02:


So, to simulate a datacenter failure, we’ll just pull the power on ALL NY servers:

NYDC01

NYHT01

NYMB01

NYMB02

NY-XP1 (client machine)

So now, all of our NY machines are off.  Let's check the status of the databases on our DR servers located in Boston:
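(If you prefer the shell over the console, something along these lines gives the same view; a sketch:)

Get-MailboxDatabaseCopyStatus -Server DRMB01 | FL Name,Status,CopyQueueLength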


Well, that's not good, huh?  Notice how the DBs for the two copies in NY are listed as ServiceDown under copy status?  Also note that the copy of the database on DRMB01, the Mailbox Server in Boston, is Disconnected and Healthy.  The reason the DB is not mounted is because of the DAC mode enabled on the DAG, which we discussed in Part 1 of this series. DRMB01 dismounts ALL its mailbox databases because a majority of the DAG members are not available, thus it can't make quorum. It dismounts to prevent a possible split brain scenario, but in this case, we REALLY need to get this activated.  How do we do this?  What we need to do is remove the NY server members as active participating members of the DAG.  We do this through the Exchange Shell.

If we note the current status of the DAG with the command:

Get-DatabaseAvailabilityGroup | select Name,*server*


Notice that the Servers value lists DRMB01, NYMB01 and NYMB02 as servers in the DAG, and lists them again as StartedMailboxServers?  Well, we need to tell the DAG that NYMB01 and NYMB02 are no longer "started" or operational.  We need to use the following command for that:

Stop-DatabaseAvailabilityGroup.

Our command can specify each server that’s down, one by one:

Stop-DatabaseAvailabilityGroup -Identity DAG1 -MailboxServer NYMB01,NYMB02

Or, since we lost the entire NY site, we can specify the whole site instead:

Stop-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite NYC

Now, since our Mailbox Servers in NY are actually unreachable, we want to specify the -ConfigurationOnly option at the end of this command.  Otherwise, the command attempts to actually stop the mailbox services on every mailbox server in the NYC site, which causes it to take an extremely long time to complete.
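Putting that together, the command we actually run is:

Stop-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite NYC -ConfigurationOnly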


Now, if we re-run the command:

Get-DatabaseAvailabilityGroup | select Name,*server*


We notice that the NY mailbox servers, NYMB01 and NYMB02 are both now listed as Stopped Mailbox Servers.

Next, ensure that the clustering service is stopped on DRMB01:
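From an elevated shell on DRMB01, that's simply:

Stop-Service ClusSvc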


Now, we want to tell the DR site that it should restart with the new settings (both the NY servers missing):

Restore-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite DR


You will see a progress bar indicating it's adjusting quorum and the cluster for the new settings.  Don't be alarmed if you see an error about the command not being able to contact the downed mailbox servers.

If we return to the Exchange Management Console, all our databases in the DAG have been mounted on DRMB01!


Great right?  But our clients are still having trouble connecting.  What’s the problem?


The reason is that NYHT01.nygiants.com is still listed as the RPC Client Access Server for this database:


We have two choices.  The first is to change the DNS record for NYHT01.nygiants.com to be a CNAME for DRHT01.nygiants.com.  The second, and faster, method is to change the RPC Client Access Server to DRHT01.nygiants.com with the following command:

Set-MailboxDatabase MDB02 -RpcClientAccessServer DRHT01.nygiants.com


To do it for every DB that you failed over is simple:
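It's the same Set-MailboxDatabase command, just piped across every database:

Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer DRHT01.nygiants.com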


Now, back to the Outlook client and VOILA!


Now all your messaging services are back and running in your Disaster Recovery site, with limited downtime for your end users.

In this article we discussed how to fail over a datacenter to your backup or disaster recovery datacenter, should your primary go offline.

In the next and final part of this series, I'll show you how to fail back to the primary datacenter site, which some admins think is even more terrifying than failing over!

Stay Tuned!

How to Manage a Datacenter Failure or Disaster Recovery Scenario in Exchange 2010 – Part 1

Exchange 2010, High Availability

 

Exchange 2010 introduced several high availability and disaster recovery features; the one that receives the most publicity is the Database Availability Group (or DAG for short).  In short, a DAG allows replication of a mailbox database to other servers in the DAG, and a copy can be activated automatically within 30 seconds, restoring user access to their mailboxes.  For more information see my article series on DAGs here.

The automatic failover is great for High Availability within a datacenter, or even across a datacenter.  For instance, consider the following diagram:


Here, the green copies are the Active Copy; they are the ones users are actually accessing for their mailboxes.  The yellow and the red are copies that can be activated should the Active Copy go offline.  Consider the possibility that MDB01 on server NYMB01 goes offline; the copy on NYMB02 would be activated within 30 seconds automatically.  Next, the drive holding the database MDB01 on server NYMB02 fails, causing THIS copy to go offline.  In this case, the copy of MDB01 on DRMB01 in Boston would be activated within 30 seconds, and users would be able to access their mailboxes across the WAN link to Boston!  This is all part of the design of the DAG, and is great from a High Availability standpoint.

But, as we know, High Availability and Disaster Recovery are COMPLETELY separate.  High Availability means providing your users with high uptime, or access to the application.  Disaster Recovery is the ability of the application to function when a catastrophic event happens, such as destruction of a datacenter or, worse, the building holding the datacenter.  This last part is what we will cover in these articles.  To do so, there is a feature of DAGs that we need to talk about, and that is the Datacenter Activation Coordinator, or DAC.

DAC is a setting for a DAG that has three or more member mailbox servers extended across multiple sites.  So, a higher level view of our Exchange environment is below:


NYMB01, NYMB02 and DRMB01 are all part of the same DAG, let's call it DAG1, and all these servers are located in the NYGIANTS.COM domain.

Now, our DAG fits the criteria for DAC mode, which is three or more member servers, spread across multiple Active Directory Sites.  So, now what is DAC mode?

DAC mode is, quite simply, a mechanism to prevent the possibility of a split brain in your Exchange environment.  Consider the following scenario.  As per the first diagram, you have MDB01 and MDB02, and both active copies run in NY on NYMB01 and NYMB02 respectively.  NYHT01 is running the file share witness.  A file share witness is a server that only participates if the DAG has an even number of servers; it's used to "break" any tie in voting regarding whether a server is down or not.  The NY site is connected via a WAN connection to Boston, where DRMB01 hosts replicas of MDB01 and MDB02.

Say there is a cut to the WAN connection, and for whatever reason, NY and Boston can no longer communicate, but neither side is truly offline.  The Boston side, since it can no longer connect to the NY servers, assumes they are down, mounts its copies of MDB01 and MDB02, and marks them as active.  Since NY is still operational, it STILL has its copies of MDB01 and MDB02 mounted and active.  This is a split brain scenario: both sites believe that they are the rightful owner of the database, and have thus mounted their respective DBs.  This would cause a divergence in data.

For example, if an outside user sends an email to a user at nygiants.com and it is received in NY, it gets delivered to his mailbox in NY.  If another user sends the same user at nygiants.com an email and it gets received by Boston, it gets delivered to that user's mailbox in Boston.  Each mailbox is now different, which is a huge problem.  This is the issue with a split brain scenario, and it is what DAC was built to protect against.

DAC does this by preventing the DR servers from mounting their databases.  DAC requires that a majority of the DAG members be available for the DAG to be able to make an operational decision, in this case the DR servers mounting their databases.  A DAG that has the majority of its member servers available is said to have Quorum.  So, in our previous example where the line was severed, DR would NOT mount its databases.  Why not?  Because the DAG consisted of 3 total members: NYMB01, NYMB02 and DRMB01.  According to DRMB01, it is the only surviving server, which is 1 out of 3 and not a majority, hence it cannot mount its databases.

Now, if you look at the first diagram, you will notice that MDB03 is green on DRMB01, meaning that the active copy of MDB03 is running on DRMB01.  Well, what happens in this scenario, where the WAN connection was cut?  Won't one of the NY servers mount MDB03?  Since DRMB01 has MDB03 already mounted, won't this cause the EXACT split brain scenario we are trying to avoid?  No.  Why not?  Remember how I said that the DAG needs to be able to make Quorum?  Well, in this case, since DRMB01 cannot make Quorum, it is forced to dismount any database that it has running, and you'll see a corresponding message in the event log.


So, DRMB01 dismounts MDB03, which is mounted and activated in NY.  This is how the split brain scenario is avoided. 
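For completeness, DAC mode itself is just a property you set on the DAG; a quick sketch of enabling and verifying it for our example DAG:

Set-DatabaseAvailabilityGroup DAG1 -DatacenterActivationMode DagOnly
Get-DatabaseAvailabilityGroup DAG1 | FL Name,DatacenterActivationMode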

So what does this mean if there really is a need for a datacenter failover?  At one site I work at, there was a broken pipe in the tenant above them, causing a flood that threatened to destroy their datacenter.  If the datacenter had been destroyed, how do we activate DR?  We’ll go over that in Part 2 of this series.

In this article, we discussed mainly the theory and thought process behind DAGs, the Datacenter Activation Coordinator, and the concept of Quorum with regard to the cluster. In the next article, we'll jump in and do an actual datacenter failover.