
Monday, 5 December 2011

vCenter Appliance extra NIC issue

Currently I’m working on a customer install where we are using the new vSphere 5.0 vCenter Server Appliance. As we were migrating an existing 3.5 environment to a new platform, we had to keep adding and removing servers in and out of vCenter due to the limitations of the appliance.

On one occasion we had to remove the new vCenter appliance from the inventory of one server and add it to the inventory of another. When this happened, the appliance added a new NIC (eth1) and would not function, reporting that eth0 was not connected and that the appliance is only configured to use eth0.

If you ever get this error, there is a simple fix.

Log on to the vCenter appliance console as root (for the vCenter appliance the default username is root and the default password is vmware). Then edit the file /etc/udev/rules.d/70-persistent-net.rules using vi.
This file lists all the network interfaces and their associated MAC addresses. Simply remove the entry for eth0 and edit the line for eth1, renaming it to eth0. Save the file with the command :wq and restart the appliance using shutdown -r now.
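
For illustration, the entries in that file look something like the lines below (the MAC addresses are made up and only the relevant attributes are shown). You would delete the stale eth0 line and change NAME="eth1" to NAME="eth0" on the remaining entry:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:50:56:aa:bb:01", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:50:56:aa:bb:02", KERNEL=="eth*", NAME="eth1"

After editing, only the second entry remains and is renamed:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:50:56:aa:bb:02", KERNEL=="eth*", NAME="eth0"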

Monday, 14 November 2011

Backing up iSCSI targets

One thing you will more than likely need to do when implementing a SAN is back up the content of the iSCSI targets. If you implement something like Backup Exec on Windows, then Windows will try to automount the volumes, which will cause you no end of problems.

To prevent this, run the following from the command prompt before connecting to the iSCSI targets:


diskpart
automount disable
automount scrub
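
If you want this step to be repeatable, for example as part of the backup server build, diskpart can also be driven non-interactively from a script file. A minimal sketch, assuming a script file called automount.txt, would be:

automount disable
automount scrub
exit

Then run it with:

diskpart /s automount.txt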

Wednesday, 9 November 2011

Network IP Error after P2V

Hi All,

Just a quick update to my blog. For those of you that have VMware and have run Converter on a physical server that had a static IP address, you may get an error message saying that the IP address is already in use when you try to assign the old IP address to the new virtual network adapter.

I personally create a batch file which looks like this:

"set devmgr_show_nonpresent_devices=1
DEVMGMT.MSC"

I then run this on the new virtual machine. This will enable non-present devices to be shown in Device Manager, and then it will open Device Manager. If you then click View and Show hidden devices, this will show all the old hardware that was present when it was a physical server. Simply right-click the old network adapter and uninstall it; this will remove the offending static IP address.
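
For reference, a slightly more verbose version of that batch file, with comments, might look like the below. The start command just makes sure Device Manager is launched from the same session so it inherits the variable:

@echo off
rem Allow Device Manager to list devices that are no longer attached
set devmgr_show_nonpresent_devices=1
rem Launch Device Manager from this session so it picks up the variable
start devmgmt.msc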

Hope this helps.

Andy

Friday, 22 July 2011

Installation of New EMC VNXe 3300 - Part 2

Following on from setting up the iSCSI connectivity, I needed to create some CIFS shares to replace a Windows 2003 file server. Connectivity from the VNXe to the switch infrastructure was very simple due to how the VNXe fails over and fails back.

We currently have a pair of Cisco 3750s in a stack for the core connectivity of the data network, so it was very simple to take a single RJ45 cable from SPA into one of the 3750s and a single RJ45 cable from SPB into the other 3750. Again, as described in part 1, if the VNXe detects a cable fault or switch failure it will fail over only the services on the affected network port to the other controller, and back again once connectivity is resumed.

To increase network performance, the CIFS service was configured on SPB as the primary connection, ensuring that both controllers were being used in an active/active scenario (SPA is being used for iSCSI - see part 1).
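
On the switch side these two connections are just plain access ports, one on each 3750 in the stack. A minimal sketch (the interface numbers and VLAN ID below are examples only, not from the actual install) would be:

interface GigabitEthernet1/0/10
 description VNXe SPB - CIFS (primary)
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast
!
interface GigabitEthernet2/0/10
 description VNXe SPA - CIFS (failover)
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast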

That pretty much sums it up. Any questions or comments, please feel free to ask and I'll add them to my blog.

Thanks for reading

Andy

Thursday, 7 July 2011

Installation of New EMC VNXe 3300

We required a SAN for a company which had the resilience, reliability and performance at a reasonable price point to hold a server estate of around 30 servers, including the usual Microsoft suite of domain controllers, SQL and Exchange, which would all be virtualised using VMware, along with approximately 1TB of file shares. I benchmarked the HP P4000 (LeftHand), NetApp's FAS 2040 and the new EMC VNXe 3300, and with the combination of great functionality, performance and a really great price point I opted for the EMC VNXe 3300.

The requirement was for 99.99% availability of the hardware, so the connectivity between the SAN and the VMware estate was to be protected by a pair of Cisco 2960Gs powered via twin APC UPSs and a power transfer switch. This would protect the environment from power, network, server or cable failure.
As the EMC VNXe 3300 is so new, the documentation that came with it, while simple to read, didn't cover a more complex environment, only a simple installation into a single switch. So, after several test scenarios and a learning curve on how the VNXe fails over and fails back (which is the key to the solution described below), I have devised the following example of how to configure a VNXe 3300 for a production environment.

How does the VNXe fail over? The VNXe 3300 is very clever in how it detects and fails over services between its processors (even better than it was sold to me). You set up a service on one network port, or a team of ports, and should a complete loss of connectivity occur on those ports, it will fail over only the service associated with those ports to the other processor module. For example, if you set up iSCSI on port 2 of SPA and CIFS on port 3 of SPA and then pull the network cable out of port 2, it will fail iSCSI over to SPB but keep CIFS on SPA (very clever indeed). Once the cable is plugged back into SPA, the iSCSI service fails back over to SPA.

So the configuration I have used and tested is as follows. Two cables teamed together from SPA ports 2 and 3 go into your first switch, and two cables teamed together from SPB ports 2 and 3 go into your second switch. On each Cisco switch an LACP trunk is created for these connections. I then created an LACP trunk of two ports to connect the two switches together. The VMware hosts are connected to the switches via a single network cable to switch 1 and a single network cable to switch 2. This configuration will allow for any one item to fail and connectivity from the VMware hosts to the SAN will remain operational.
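
As a rough sketch of the switch configuration (the interface and VLAN numbers here are examples, not taken from the actual install), the LACP port-channel for the SPA pair on switch 1 would look something like the below, with the same repeated on switch 2 for the SPB pair and for the two-port link between the switches:

interface Port-channel1
 description VNXe SPA ports 2-3 (iSCSI)
 switchport mode access
 switchport access vlan 100
!
interface range GigabitEthernet0/1 - 2
 description Members of Po1 to VNXe SPA
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode active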

The diagram below shows in more detail how this should be connected.