Hyper-V Powershell module not installed during Bare Metal Deployment



Achieving over 1-Million IOPS from Hyper-V VMs in a Scale-Out File Server Cluster using Windows Server 2012 R2


Hi,

I'd like to point you to a whitepaper from our development group that impressively shows how to build high-performance Hyper-V clusters today.

 

https://www.microsoft.com/en-ie/download/details.aspx?id=42960

 

Cheers

Robert


After installing Hyper-V Integration Services, the VM displays BSOD 0x0000007B on the next reboot


Hi,

Recently, I had some customers with VMs they had just P2V'ed, or that were already running on Virtual Server or Hyper-V. They then installed the latest Integration Components that came with the R2 release.

After the required reboot, the VM shows a blue screen 0x0000007B INACCESSIBLE_BOOT_DEVICE.

 

During debugging I found that the storage driver of the ICs requires the Windows Driver Framework (WDF), which was not loaded in this case, so the storage driver fails to load.

Looking into the registry of the VM shows that the WDF driver was already installed previously but had the wrong group relationship, so it is initialized too late.

So here's how to fix this issue:

1. Boot the VM into Last Known Good Configuration (press F8 during boot).

2. Open the Registry and drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wdf01000

3. There is a Group value that should be set to WdfLoadGroup. In my cases it was wrongly set to Base. Change it to WdfLoadGroup (see the sketch after this list).

4. Then remove the Integration Components via Control Panel > Add or Remove Programs.

5. Reboot the VM (now without ICs)

6. Install the ICs once again
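For reference, here is a minimal PowerShell sketch of the check and fix from step 3, assuming the guest OS has PowerShell available (on older guests use regedit or reg.exe instead):

# Inspect the current Group value of the WDF runtime driver (run inside the affected VM)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Wdf01000' -Name Group

# Correct the load group so Wdf01000 is initialized before the IC storage driver
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Wdf01000' -Name Group -Value 'WdfLoadGroup'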

Cheers

Robert


Availability of Hotfix Rollup package for System Center Virtual Machine Manager 2008 R2


Update: In the meantime, a further February rollup has become available:

Description of the System Center Virtual Machine Manager 2008 R2 hotfix rollup package: February 9, 2010

https://support.microsoft.com/default.aspx?scid=kb;EN-US;978560

Please note that after you install the rollup, you need to allow the host agents to be updated, as described here:

https://blogs.technet.com/mbriggs/archive/2010/02/25/host-in-needs-attention-state-after-installation-of-kb978560.aspx

/Update End

 

 

Recently we published a Rollup Fix for System Center Virtual Machine Manager 2008 R2, and an Update for the Management Console on Microsoft Update.

Description of the System Center Virtual Machine Manager 2008 R2 hotfix rollup package: November 10, 2009

https://support.microsoft.com/kb/976244

 

When you remove a virtual hard disk from a virtual machine in System Center Virtual Machine Manager 2008 R2, the .vhd file on the Hyper-V server is deleted without warning

https://support.microsoft.com/kb/976246

 

 

 

If your SCVMM server or admin console machine is not directly connected to the Internet, you may need to download the packages from another machine.

You can get the fixes from https://catalog.update.microsoft.com by specifying article 976244 and/or 976246.

You'll get .cab files containing .msp files, which you can install from an elevated command prompt.
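If you prefer to script the extraction, here is a minimal PowerShell sketch (the .cab file name below is only a placeholder for whatever the catalog download is actually called):

# Create a working folder and extract all files from the downloaded .cab (file name is a placeholder)
New-Item -ItemType Directory -Path C:\Temp\VMMRollup -Force | Out-Null
expand .\vmmServer64update.cab -F:* C:\Temp\VMMRollup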

To install the Server rollup specify:

msiexec /p vmmServer64update.msp BOOTSTRAPPED=1

To install the Client rollup specify:

msiexec /update vmmClient64Update.msp BOOTSTRAPPED=1

or, if your admin machine is 32-bit:

msiexec /update vmmClient32Update.msp BOOTSTRAPPED=1

Cheers

Robert


Availability of Hotfix Rollup package for System Center Virtual Machine Manager 2008


Recently we published a Rollup for System Center Virtual Machine Manager 2008 on Microsoft Update.

https://support.microsoft.com/kb/961983/en-us

If your SCVMM server is not directly connected to the Internet, you may need to download the package from another machine.

You can get the fix from https://catalog.update.microsoft.com by specifying article 961983.

You receive a package containing a file named vmmServer64update.msp.

Transfer this file to the target SCVMM server and install it with the following command:

msiexec /p vmmServer64update.msp BOOTSTRAPPED=1
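A minimal PowerShell sketch of the same install, with verbose MSI logging added so you have something to look at if the patch fails (the log path is just an example):

# Apply the rollup patch and write a verbose MSI log (run from an elevated prompt)
Start-Process msiexec.exe -Wait -ArgumentList '/p vmmServer64update.msp BOOTSTRAPPED=1 /l*v C:\Temp\vmm961983.log'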

Cheers,
Robert


BITS Compact Server cannot process the request for the following URL in the %1 URL group: %2. The request failed in step %3 with the following error: %4.


Hi,

Windows Server 2008 R2 has a new feature: the BITS Compact Server. This is a lightweight server for the Background Intelligent Transfer Service.

SCVMM 2008 R2 makes use of it and enables this feature on the Hyper-V Hosts during Agent installation. 

Recently, I had a customer where all BITS transfers (VM creation, migration, etc.) to one particular host were failing:

ERROR CODE:    0x80072efe
ERROR CONTEXT: 0x00000005 

The following warning is logged in the BITS analytic event log:

Log Name:      Microsoft-Windows-Bits-CompactServer/Analytic
Source:        Microsoft-Windows-Bits-CompactServer
Date:          16.09.2009 0:49:22
Event ID:      50

As it turned out, removing the host (in this case all cluster nodes) from SCVMM, deleting the certificates, and adding the hosts back to SCVMM resolved the issue.

These steps are also described here: https://blogs.technet.com/scvmm/archive/2009/07/20/vm-creation-may-fail-and-stall-with-copying-0kb-of-16gb.aspx
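If you want to look at this yourself before removing and re-adding the hosts, here is a small sketch. The certificate subject filter is an assumption; adjust it to whatever the SCVMM agent certificate on your host is actually named.

# Enable the BITS Compact Server analytic log and read it (analytic logs require -Oldest)
wevtutil sl Microsoft-Windows-Bits-CompactServer/Analytic /e:true
Get-WinEvent -LogName 'Microsoft-Windows-Bits-CompactServer/Analytic' -Oldest |
    Where-Object { $_.Id -eq 50 } | Select-Object TimeCreated, Id, Message

# Inspect the machine certificate store on the host; the SCVMM-related certificate is the
# one to delete before re-adding the host (the subject filter below is an assumption)
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like '*SCVMM*' } |
    Format-List Subject, Thumbprint, NotAfter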

 

Cheers

Robert


Building a teamed virtual switch for Hyper-V from SCVMM


Hi!

A common question today is how to set up your cluster blades with (10 GbE) Ethernet connections. How do I configure them correctly with the following requirements:

  • An LBFO (Load Balancing and Failover) team of the available physical connections.
  • Several networks, controlled by QoS so that no single one uses up the entire channel:

    - Management Network

    - Cluster Network

    - iSCSI Network

    - Live Migration Network

    - (Optional) multiple isolated virtual networks

 

Let's see how this can be built using SCVMM 2012 R2.

I start with two nodes. Each has three Ethernet interfaces. Ethernet1 is already used by a virtual switch (NodeSwitch); that switch uses Ethernet1 as a team with a single NIC.
A virtual NIC, vEthernet (Management), is exposed to give the host network connectivity.


 
This setup makes it easier to build the desired switch with the two remaining NICs, as I will always have connectivity during the creation of the new switch. Later, I can easily destroy the NodeSwitch and add Ethernet1 to the newly built switch as the third NIC.

Let's start building:

1. Define the Logical Networks the new Switch should connect to.

The wizard allows you to create an identical VM network right here. As you always need a VM network mapped to a logical network, let's use this.

First creating the "Management" Logical Network:

 

Add a network site for this logical network. This is the IP address range I use for management; it matches my DHCP scope, so I do not specify it again here.

Do the same steps to create the other networks. To allow separation and QoS of those connections, you have to use dedicated subnets on top of the existing physical network.

I use:

Tenant (172.31.10.0/24), Cluster (172.31.2.0/24), iSCSI (192.168.1.0/24), Live Migration (172.31.3.0/24)

When adding the network sites for those, specify the IP subnet.
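Roughly, the equivalent VMM PowerShell (similar to what the console's "View Script" button produces; the names and host group below are examples, and parameter details may differ slightly between VMM builds) looks like this:

# Create the "Cluster" logical network and a network site scoped to a host group (example names)
$logicalNet = New-SCLogicalNetwork -Name "Cluster"
$subnetVlan = New-SCSubnetVLan -Subnet "172.31.2.0/24" -VLanID 0
New-SCLogicalNetworkDefinition -Name "Cluster_Site" -LogicalNetwork $logicalNet -SubnetVLan $subnetVlan -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")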

 

 

For the Tenant network, enable network virtualization and don't create the VM network yet.

As it uses network virtualization, we can connect several isolated VM networks (Red and Blue) later.

 

That results in the following logical Networks

And corresponding VM Networks:

Back on logical Networks, create IP Pools on each Network so that VMM can give that out statically later. (I just accepted the defaults, after giving it a corresponding name…)

As the Management network uses DHCP in my case, it needs no IP pool.
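A sketch of creating such a pool in PowerShell (the address range is an example; I simply mirrored the console defaults):

# Create a static IP pool on the Cluster network site so VMM can hand out addresses later
$siteDef = Get-SCLogicalNetworkDefinition -Name "Cluster_Site"
New-SCStaticIPAddressPool -Name "Cluster_Pool" -LogicalNetworkDefinition $siteDef -Subnet "172.31.2.0/24" -IPAddressRangeStart "172.31.2.10" -IPAddressRangeEnd "172.31.2.250"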

Now that we have created these networks, we can start building the switch template using them.

(You can create additional networks later and associate them with the template if you need to.)

 

2. Define the Uplink Port Profile for your new switch.

We need to specify which networks are reachable by the switch.

  

As the switch is mainly used for VMs, I use Hyper-V Port load balancing.

Next, add all network sites this switch will be connected to, and enable network virtualization as well.

NodeLN_0 belongs to my other switch, so I am not connecting it here.
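In PowerShell this is roughly the following (the profile name is an example, and the LBFO parameter names are what I would expect from a console-generated script, so treat this as a sketch):

# Uplink port profile: Hyper-V Port load balancing, switch-independent teaming,
# connected to all network sites except NodeLN_0, with network virtualization enabled
$sites = Get-SCLogicalNetworkDefinition | Where-Object { $_.Name -ne "NodeLN_0" }
New-SCNativeUplinkPortProfile -Name "ClusterNodeUplink" -LogicalNetworkDefinition $sites -EnableNetworkVirtualization $true -LBFOLoadBalancingAlgorithm "HyperVPort" -LBFOTeamMode "SwitchIndependent"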

 

3. Create the new virtual switch template (named ClusterNodeSwitch)

The important part here is to select the uplink mode and add the previously created uplink port profile.
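A sketch of the same in PowerShell, with minimal parameters only (the console sets a few more options, such as SR-IOV and the uplink mode, which I leave at their defaults here):

# Create the logical switch and attach the uplink port profile created in step 2
$logicalSwitch = New-SCLogicalSwitch -Name "ClusterNodeSwitch"
$uplink = Get-SCNativeUplinkPortProfile -Name "ClusterNodeUplink"
New-SCUplinkPortProfileSet -Name "ClusterNodeUplink_Set" -LogicalSwitch $logicalSwitch -NativeUplinkPortProfile $uplink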

 

Next, we need to specify which virtual ports, with which classification, should be available when we later place the switch. You could basically include every type here and pick the ones you need later.

 

I did this for the networks I created previously. Marking one as default helps to have the field prepopulated later when I assign VM networks to VMs.

 

 

 

 

4. Finally, placing the switch on the nodes.

Here are the properties of one of my hosts before placing the switch (this configuration was pre-existing, just to show the differences later; there is no need to have it):

 

Start adding a New Logical Switch

 

Select the right switch template and add the physical NICs.

 

 

Now, create the virtual network adapters the host should have available for its own use; otherwise it would have no connectivity.

I chose not to inherit the settings, as I have an already existing management connection. Inheriting is required if you have no other connection yet.

For all networks I chose to pick a static IP from the suggested pool.

 

 

Do the same on the other nodes….

In Network Connections (ncpa.cpl) on one node this now shows as:

 

Ipconfig:


 

All those networks also show up in Failover Cluster Manager. (I've already given them the correct names based on the IPs used.)
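To cross-check the team, the switch, the host vNICs, and their IP configuration from plain PowerShell on a node (all built-in Windows Server 2012 R2 cmdlets):

# The LBFO team VMM created underneath the logical switch
Get-NetLbfoTeam

# The Hyper-V switch and the host-side virtual NICs
Get-VMSwitch
Get-VMNetworkAdapter -ManagementOS

# IP configuration of the host vNICs (equivalent to the ipconfig output above)
Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.InterfaceAlias -like 'vEthernet*' } |
    Format-Table InterfaceAlias, IPAddress, PrefixLength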

 

Now, select the correct network for Live Migration:
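To identify which cluster network maps to which subnet before you pick the Live Migration network, a quick check on one of the nodes:

# List the cluster networks with their roles and subnets (FailoverClusters module)
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask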

 

 

5. (Optional): Create Isolated BLUE and RED VM Networks

To run isolated virtual networks with identical subnets, you create VM networks that map to the Tenant logical network.

Give it a subnet that might seem to collide with your on-premises or other VM networks; since it is isolated, that is fine.

 

And we create an associated IP pool, so that we can later assign IP addresses to the VMs; they will receive them from the DHCP switch extension installed in every Hyper-V switch.

 

Doing the same for Blue gives us two isolated VM networks.
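A sketch of the Blue network in PowerShell (the 10.0.0.0/24 subnet and the pool range are made-up example values, since yours will differ):

# Create an isolated VM network on top of the Tenant logical network using network virtualization
$tenant   = Get-SCLogicalNetwork -Name "Tenant"
$blueNet  = New-SCVMNetwork -Name "Blue" -LogicalNetwork $tenant -IsolationType "WindowsNetworkVirtualization"
$blueVlan = New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 0
$blueSub  = New-SCVMSubnet -Name "Blue_Subnet" -VMNetwork $blueNet -SubnetVLan $blueVlan

# IP pool for the isolated subnet; VMs get these addresses via the DHCP switch extension
New-SCStaticIPAddressPool -Name "Blue_Pool" -VMSubnet $blueSub -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.10" -IPAddressRangeEnd "10.0.0.250"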

 

 

Now, to see this in action, I assign VMs to three different networks:

First, to my Management network; the VM will get a DHCP address from there.

 

Second, to the Blue network; the DHCP extension on the Hyper-V switch will hand out an IP address here.

 

Lastly, to the Red network; the DHCP extension on the Hyper-V switch will hand out an IP address from the Red scope here:

 

I hope that helps you better understand and use this cool feature. It comes in handy when you configure more than just two nodes.

 

Cheers

Robert


Compliance Scan from SCVMM fails with Error (2927)


Hi,

 

Recently, I had some occurrences of the following error while scanning the compliance state of particular hosts or clusters from SCVMM:

 

Error (2927)

A Hardware Management error has occurred trying to contact server <hostname>.

WinRM: URL: [https://<hostname>:5985], Verb: [INVOKE], Method: [ScanForUpdates], Resource: [https://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/UpdateManagement]

 

Unknown error (0x80338043)

 

Recommended Action

Check that WinRM is installed and running on server <hostname>. For more information use the command "winrm helpmsg hresult" and https://support.microsoft.com/kb/2742275

 

This happens because the host's response listing the installed hotfixes grows too large and needs to be fragmented. WinRM therefore needs to be updated to a later version.

To solve this problem, you need to install the Windows Management Framework (WMF) 5.0, at least on your SCVMM server.

 

At the time of this writing the Production Preview of WMF 5.0 is available at https://www.microsoft.com/en-us/download/details.aspx?id=48729
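To check what is currently on the SCVMM server before and after the update, a quick sketch:

# PowerShell / WMF version currently installed (WMF 5.0 reports 5.0.x here)
$PSVersionTable.PSVersion

# Current WinRM configuration, including the envelope/fragmentation-related settings
winrm get winrm/config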

 

Cheers

Robert