Binsfeldius Cluster, creating the three-node cluster

This entry is part 12 of 20 in the series Binsfeldius Cluster (Original)

Let’s get this cluster started 🙂

Adding the Hyper-V Cluster Storage

On each node, start iscsicpl and connect to the QNAP. If this is the first time iscsicpl is started, you will be prompted to start the Microsoft iSCSI Initiator service first.

Select the iSCSI LUN called HyperStorage and connect. Ticking “Add this connection to the list of Favorite Targets” ensures it is automatically reconnected each time the node restarts.
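For reference, the same connection can be scripted with the iSCSI cmdlets (available on Server 2012 and later). A minimal sketch, assuming the portal address of my QNAP and that it presents a single target — check with Get-IscsiTarget first:

```powershell
# Point the initiator at the QNAP (the portal IP is an assumption for this lab)
New-IscsiTargetPortal -TargetPortalAddress 172.16.111.20

# Connect to the discovered target and persist it across reboots
# (-IsPersistent is the scripted equivalent of "Add to favorite targets")
Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress -IsPersistent $true
```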

Use the Disk Management MMC snap-in on one of the nodes to initialize the HyperStorage disk. This only needs to be done on one node. Create a Simple Volume and do NOT assign it a drive letter. Label the volume according to your storage design, in my case HyperStorage.
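If you prefer to script this step, the Storage cmdlets (Server 2012 and later) can do the same. A sketch, assuming the new iSCSI disk shows up as disk number 1 — verify with Get-Disk before running:

```powershell
# Disk number 1 is an assumption; check with Get-Disk first
Initialize-Disk -Number 1 -PartitionStyle GPT

# One partition spanning the disk, deliberately without a drive letter,
# formatted and labeled per the storage design
New-Partition -DiskNumber 1 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "HyperStorage"
```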

Check the other hosts to verify that this disk is present. Windows has a “shared nothing” rule when it comes to storage. Only one server can bring a disk online at a time, even if more than one server can see the disk.

We’ll be using Cluster Shared Volumes (CSV). This allows many virtual machines, running across the three nodes in the cluster, to reside on a single shared volume. They can Live Migrate between nodes, independently of each other. Despite this sharing, the shared-nothing rule still applies to the CSV volume. One node, the CSV coordinator (a role that can fail over), owns the volume. It delegates read/write access to the VM files to the nodes that are hosting those VMs.

It’s clusterrrrrr time

Two basic ingredients are needed:

  • The name of the cluster: Binsfeldius
  • The IP address of the cluster: 172.16.111.16

On each node, in the sconfig menu select option 11 to enable the Failover Clustering feature. sconfig will confirm that the feature is now enabled.
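The sconfig route works fine per node; if you'd rather script all three at once, something like the following should work on Server 2012 and later (use Add-WindowsFeature on 2008 R2). The node names are assumptions for my lab, and remoting from the management server must be enabled:

```powershell
# Enable the Failover Clustering feature on all three nodes remotely
# (Node1/Node2/Node3 are hypothetical names; substitute your own)
Invoke-Command -ComputerName Node1, Node2, Node3 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering
}
```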

On the management server (DC1)

  • Go to Control Panel -> Programs -> Turn Windows features on/off
  • Under Remote Server Administration Tools -> Feature Administration Tools
  • Put a tickmark in the Failover Clustering Tools box

Open MMC and add the Failover Cluster Manager snap-in (not the Failover Cluster Manager Host!).

Add the cluster nodes to the list.

Run the validation tests; the Validate a Configuration Wizard opens.

Run all the tests

Confirmation screen

The tests are running, this may take some time … get yourself a nice hot Nespresso

WTF?! Failed on Validate Service Pack Levels… oh man, that’s what you get when you go out for groceries during a cluster build: the third node is not on the same update level. Well, let’s run Windows Update on all nodes.

Back, and now it passes all the tests

Follow the wizard and provide the cluster name & cluster IP address. Let it run its course 🙂
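The wizard steps above map onto two cmdlets from the FailoverClusters module, in case you'd rather script the whole thing. A sketch, again assuming hypothetical node names Node1–Node3:

```powershell
Import-Module FailoverClusters

# Run the full validation suite first, same as the wizard does
Test-Cluster -Node Node1, Node2, Node3

# Create the cluster with the name and IP from the ingredients list
New-Cluster -Name Binsfeldius -Node Node1, Node2, Node3 `
    -StaticAddress 172.16.111.16
```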

wh00h00, proud owner of a three-node-cluster!

Cluster types

There are basically four quorum options for a failover cluster: three use a majority of votes (Node Majority, Node and Disk Majority, Node and File Share Majority) and one relies solely on a quorum disk (No Majority: Disk Only). In my case, as I have an odd number of nodes, I’ll use the Node Majority type.

Keep in mind that if you expand the cluster with more nodes, you’d either need to add two nodes (keeping the node count odd) or add one node and change the quorum type to one that uses a disk or file share witness!

A majority-based cluster works on votes: more than half of the votes must be present for the cluster to stay up and running.

Node Majority: only nodes can vote, and more than half of the votes are required for quorum to be maintained. So with two out of three nodes the cluster is considered up, which is perfect as I’ve sized for n+1.
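The quorum configuration can be checked and, if needed, set from PowerShell as well. On a fresh cluster with an odd node count, Node Majority is normally selected automatically, so this is mostly a verification step:

```powershell
# Show the current quorum configuration
Get-ClusterQuorum

# Explicitly set Node Majority (usually already chosen for three nodes)
Set-ClusterQuorum -NodeMajority
```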

Next is tidying up a bit…

