Binsfeldius Cluster, the chicken or the egg of Microsoft clustering

Interesting stuff. Trying to fully utilize Microsoft's Failover Clustering feature, I ran into an interesting quirk. While you can run a cluster in a workgroup, you'd want to put it into a domain to get all those nice domain features. Let's call this statement A.

I am virtualizing virtually (yes, yes, pun intended) everything, including the DC. Let's call this statement B. Don't worry, I'll get to the Microsoft recommendations on virtualizing DCs, LUNs & CSVs in a sec.

Here's the thing: to be able to join the nodes to a domain, you of course need a Domain Controller, and in my setup that DC would be a VM on the cluster. However, there is no cluster yet, so I haven't created that VM. Building a physical machine to put the Domain Controller on defeats the purpose of virtualizing everything, and would also go against the low-power principle.

Microsoft recommends (link), in a nutshell statements C & D:

  1. for a clustered node to auto-start, authentication requests from that node must be serviced by a DC in the cluster node's domain;
  2. virtualized DCs should be placed on a non-CSV LUN (as a non-CSV LUN can be brought online without authentication);
  3. deploy at least two DCs for a clustered environment, on physical hosts.

So:

A. Put the cluster nodes in a domain
B. Everything virtualized, no physical servers (apart from the nodes ofc)
C. Redundancy of DCs
D. Microsoft's limitation on running a DC on a CSV; use physical DCs.

What I'll do (and this is just the theory; I'll be back to revise if it turns out incorrect):

I will create a non-CSV iSCSI LUN on my QNAP with the target name ttgdc1 and have each node initiate a connection to this target. I'll then install a DC in a VM; call this DC1. The VM will be hosted on the ttgdc1 target and can run on any node. The first node to start will grab a disk lock on the VM's storage and be able to start the DC; the other nodes will fail on this VM. This way, in case of a power failure, each node has the possibility of starting the non-CSV DC, and therefore the cluster can be started.
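
A minimal sketch of that per-node iSCSI connection, run once on each node (the portal IP and the IQN below are made-up placeholders; substitute the QNAP's actual values):

    :: Register the QNAP as a target portal (hypothetical IP)
    iscsicli QAddTargetPortal 172.16.111.4
    :: Log in to the ttgdc1 target (hypothetical IQN)
    iscsicli QLoginTarget iqn.2004-04.com.qnap:ts-439.ttgdc1
    :: Make the login persistent so it is restored on every boot
    iscsicli PersistentLoginTarget iqn.2004-04.com.qnap:ts-439.ttgdc1 T * * * * * * * * * * * * * * * 0

The persistent login matters here: the node must be able to reach the DC1 LUN before anything domain-related is up.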

Note to self: on cluster node start, try delays on starting the services in the following order: attach the iSCSI LUN for the DC, start the Hyper-V services, start the DC1 VM, start the Cluster service. The other two nodes will generate an error event as they can't start the DC VM due to the disk lock.
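
One way that ordering could be wired up, as an untested sketch (the service names ClusSvc and vmms are the real 2008 R2 ones; the delay values and script path are guesses):

    :: Stop the cluster service from racing the DC at boot
    sc config ClusSvc start= demand
    :: Kick off an ordered startup script a couple of minutes after boot
    schtasks /create /tn "DelayedClusterStart" /ru SYSTEM /sc onstart /delay 0002:00 /tr "C:\scripts\clusterboot.cmd"

And clusterboot.cmd itself:

    :: Attach the DC LUN (the persistent login should do this already; belt and braces)
    iscsicli QLoginTarget iqn.2004-04.com.qnap:ts-439.ttgdc1
    :: Hyper-V Virtual Machine Management service; DC1 then starts via its
    :: Automatic Start Action -- on the two losing nodes this fails on the disk lock
    net start vmms
    :: Give the DC time to boot and advertise, then join the cluster
    ping -n 180 127.0.0.1 > nul
    net start ClusSvc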

Once the cluster is started, I'll run a second DC there on the CSV; call this DC2 (even though that's not recommended, because writes to a CSV can be interrupted).
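
Promoting DC2 could then be an unattended dcpromo from inside the new VM; a sketch of the answer file, assuming a hypothetical domain name of ttg.local and a placeholder recovery password:

    [DCInstall]
    ReplicaOrNewDomain=Replica
    ReplicaDomainDNSName=ttg.local
    InstallDNS=Yes
    ConfirmGc=Yes
    SafeModeAdminPassword=SomeRecoveryPassword1!
    RebootOnCompletion=Yes

Saved as, say, C:\dc2.txt and fed to dcpromo /unattend:C:\dc2.txt while logged in with domain admin credentials.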

Apart from (D), using physical DCs, this about covers it all. I don't think I'll need a physical DC.

Hmm, just thinking of something:

My cluster nodes are in the Physical:DMZ network with addresses 172.16.111.1/2/3. The virtual environment (Virtual:TTG LAN), containing DC2, is in the 11-network (172.16.11.x). For the cluster nodes to see DC1, it needs to be in the same network, so let's say I'll give DC1 the IP 172.16.111.5 … but how are DC1 and DC2 going to see each other and sync?
Ah yes, through my yet-to-be-moved TMG and by using AD Sites & Services (post is ready). It is weird though to have a DC on the "outside".
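
Once both DCs are up and TMG routes between the two subnets, replication can be sanity-checked from either DC with repadmin (it ships with the AD DS tooling; the DC names match the ones above):

    :: One-line health summary of replication across all DCs
    repadmin /replsummary
    :: DC1's inbound replication partners and the result of the last sync
    repadmin /showrepl DC1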

Couldn’t resist showing the pretty little cluster for the first time…
