Friday, 22 July 2011
Following on from setting up the iSCSI connectivity, I needed to create some CIFS shares to replace a Windows 2003 file server. Connectivity from the VNXe to the switch infrastructure was very simple, thanks to how the VNXe fails over and fails back.
We currently have a pair of Cisco 3750s in a stack for the core connectivity of the data network, so it was very simple to run a single RJ45 cable from SPA into one of the 3750s and a single RJ45 cable from SPB into the other. Again, as described in part 1, if the VNXe detects a cable fault or switch failure it will fail over only the services on the affected network port to the other controller, and fail them back once connectivity is restored.
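For reference, the switch side of this is nothing more than a plain access port on each stack member. A minimal sketch, with the interface numbers and VLAN assumed purely for illustration:

! Stack member 1 - single link from SPA (port and VLAN are assumptions)
interface GigabitEthernet1/0/10
 description VNXe SPA - CIFS
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
!
! Stack member 2 - single link from SPB (the CIFS primary)
interface GigabitEthernet2/0/10
 description VNXe SPB - CIFS
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast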
To increase network performance, the CIFS service was configured with SPB as the primary connection, ensuring that both controllers were being used in an active/active scenario (SPA is being used for iSCSI, as covered in part 1).
That pretty much sums it up. Any questions or comments, please feel free to ask and I'll add them to my blog.
Thanks for reading
Andy
Thursday, 7 July 2011
Installation of New EMC VNXe 3300
We required a SAN with the resilience, reliability and performance, at a reasonable price point, to hold a server estate of around 30 servers, including the usual Microsoft suite of domain controllers, SQL and Exchange, all of which would be virtualised using VMware, along with approximately 1TB of file shares. I benchmarked the HP P4000 (LeftHand), NetApp's FAS 2040 and the new EMC VNXe 3300, and on the combination of great functionality, performance and a really great price point, I opted for the EMC VNXe 3300.
The requirement was for 99.99% availability of the hardware, so the connectivity between the SAN and the VMware estate was protected by a pair of Cisco 2960Gs, powered via twin APC UPSes and a power transfer switch. This protects the environment from a power, network, server or cable failure.
As the EMC VNXe 3300 is so new, the documentation that came with it, while simple to read, didn't cover a more complex environment, only a simple installation into a single switch. So, after several test scenarios and a learning curve on how the VNXe fails over and fails back (which is the key to the solution described below), I devised the following example of how to configure a VNXe 3300 for a production environment.
How does the VNXe fail over? The VNXe 3300 is very clever in how it detects faults and fails services over between its processors (even better than it was sold to me). You set up a service on one network port, or a team of network ports, and should a complete loss of connectivity occur on those ports, it will fail the service associated with those ports over to the other processor module. For example, if you set up iSCSI on port 2 of SPA and CIFS on port 3 of SPA and then pull the network cable out of port 2, it will fail iSCSI over to SPB but keep CIFS on SPA (very clever indeed). Once the cable is plugged back into SPA, the iSCSI service fails back to SPA.
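During testing, an easy way to watch this behaviour is a continuous ping to the affected service IP while you pull the cable. From one of the switches (10.0.100.50 here is a hypothetical iSCSI service address):

Switch1# ping 10.0.100.50 repeat 1000

You should see a short run of drops while the service moves across to the other SP, then replies resume; plug the cable back in and the same thing happens during fail-back.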
So the configuration I have used and tested is as follows. Two cables, teamed together, from SPA ports 2 and 3 go into your first switch, and two cables, teamed together, from SPB ports 2 and 3 go into your second switch. On the Cisco switches a LACP trunk is created for each of these connections. I then created a LACP trunk of two ports to connect the two switches together. The VMware hosts are connected via a single network cable to switch 1 and a single network cable to switch 2. This configuration allows any one item to fail while connectivity from the VMware hosts to the SAN remains operational.
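As a rough guide, the switch side of one of those LACP trunks looks something like this on a 2960G. The interface numbers and VLAN are assumptions for illustration; switch 2 gets the equivalent configuration for SPB:

! LACP channel to VNXe SPA ports 2 and 3 (interfaces and VLAN assumed)
interface Port-channel1
 description VNXe-SPA
 switchport mode access
 switchport access vlan 100
!
interface range GigabitEthernet0/1 - 2
 description VNXe SPA ports 2 and 3
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode active
!
! Two-port LACP trunk between the switches
interface Port-channel10
 description Uplink-to-Switch2
 switchport mode trunk
!
interface range GigabitEthernet0/23 - 24
 description Uplink-to-Switch2
 switchport mode trunk
 channel-group 10 mode active

The "channel-group ... mode active" lines are what make the channel LACP; once it's up, "show etherchannel summary" should show both member ports bundled into each channel.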
The diagram below shows in more detail how this should be connected.