Since last week I have seen a number of questions around VASA and how it is configured for EMC arrays. I got a couple while doing the Q&A for EMC’s recent VNX best practices with vSphere 5 live WebEx, and the day after I was asked by Cormac Hogan over at VMware to take a look at a question asked on the VMware blog site. I’ll admit now that I hadn’t really had a chance to look at VASA in depth, shame on me! However, this seemed as good a chance as any to learn, so I thought I would do a post on how to configure it for both EMC’s VNX and VMAX systems. A big thank you to my EMC colleague Garrett Hartney for providing both his time and an environment we could set this up in.
EMC’s VASA implementation
For those not familiar with VASA I strongly suggest reading this article to familiarise yourself with the what and why around this new VMware API. For those who just want the short version, VASA is essentially an API that allows storage vendors to publish array capabilities up to vCenter. This allows VMware admins to see characteristic information about the storage underpinning their datastores, and also allows them to use VMware storage profiles to enforce VM storage placement policy compliance, e.g. this SQL VM will always sit on performance disks.
The table below shows how EMC currently publishes its array capabilities through VASA 1.0 up to vCenter.
An example of how this looks for a datastore when pulled through to vCenter can be seen in the screenshot below.
Core Components and architecture
Regardless of which array you are connecting to, EMC’s implementation of VASA is done using Solution Enabler and something known as the SMI-S provider. Together these two components act as a middle tier between vCenter and the different arrays being queried. It’s worth pointing out that the SE / SMI-S server supports in-band (SAN attached) and out-of-band (network) connectivity for VNX and CLARiiON arrays, and in-band connectivity only for Symmetrix arrays. The architecture of the setup is shown in the diagram below.
SE / SMI-S server deployment
To get started with VASA you will need to download SMI-S version 4.3.1, which comes pre-bundled with Solution Enabler 7.3.1. The software can be downloaded from the link below and is available for 32- and 64-bit Windows as well as Linux. For full details on OS support see the release notes for SMI-S 4.3.1 (PowerLink account required for downloads).
For my own deployment I am using a 64-bit Windows Server 2008 R2 machine to host the core components. The server has been built as standard with no special configuration required.
First we need to deploy Solution Enabler and SMI-S on the designated server: locate the installation media and run the install package.
When presented with the welcome screen click next.
Leave the install location as default and click next.
When prompted select the array provider only and click next.
Review the installation settings and space requirements and click next to install.
Once the install is complete, click finish.
Configure the environment variables on the server to include the SYMCLI path
Locate the following file and open it for editing
Locate the line below and change the value from 100 to 1200, save and exit the file.
Navigate to the services console and restart the ECOM service
Navigate to the location shown below and run testsmiprovider.exe.
The next step is to connect to the SMI-S provider. I used the defaults, which are shown in the square brackets; just hit enter on each line to accept the default.
Once connected you will see the following at the command prompt.
Type the dv command at the prompt to display version information about the SMI-S provider install. This basically proves that everything is working as expected.
That concludes the basic installation and configuration of the SMI-S and Solution Enabler server. Now all we need to do is add the storage arrays we want displayed to vCenter via the VASA API.
CLARiiON and VNX
SUPPORTED - CLARiiON Navisphere Release 28, 29 & 30, VNX Unisphere Release 31
(SMI-S supports many earlier CLARiiON releases but vSphere 5 does not)
Earlier I mentioned that the CLARiiON and VNX arrays could be added to SMI-S in-band or out of band. The most common method and the one I intend to use here is to connect out of band, i.e. across the network. If you do want to connect in-band with direct SAN connection then check out page 39 of the SMI-S v4.3.1 release notes.
One major prerequisite for connecting CLARiiON and VNX is that the user account used to connect to the arrays must be an administrator login with global scope. At this point you should still be connected to the testsmiprovider.exe application used earlier; if you are not, repeat the command line steps shown above to reconnect.
Once connected successfully, type the addsys command to begin adding the array.
Enter the IP address / DNS name for Storage Processor A and hit enter.
Enter the IP address / DNS name for Storage Processor B and hit enter.
You can continue to add additional arrays here or hit enter to move to the next step.
Accept the default for the address type, i.e. IP/Nodename by hitting enter.
Continue answering this question for each storage processor / array added
Enter the global scope administrator account for connecting to the arrays.
Enter the password for the administration account being used.
You will then see the message +++++ EMCAddSystem ++++
After a while you will see the output from the addsys operation; as you can see below, the output is 0, which indicates success. The details of the system added are then listed.
If you now run the dv command the arrays added will be listed as connected.
Within the Storage Providers screen in vCenter, click the Add button as shown below.
Enter a name for the provider and enter the URL shown below, the IP address of the server hosting SE / SMI-S should be entered where it has been blanked out below. The user name is admin and the password is #1Password.
When prompted accept the certificate for the SMI-S provider
Once successfully added you will see the provider displayed
Highlight the provider and you will see the array that was connected to the SMI-S provider server earlier.
To check that VASA is working correctly in vCenter click the VM Storage Profiles icon on the home screen within vCenter.
When setting up a new Storage Profile you should be able to see the storage capabilities presented to vCenter, these are shown below and are marked with system.
Job done, VASA successfully deployed and storage capabilities showing in vCenter!
DMX4, VMAX and VMAXe
SUPPORTED - Enginuity 5875
(SMI-S supports earlier Enginuity releases but vSphere 5 does not)
Now unfortunately I do not have access to a Symmetrix to complete my testing, however the release notes for SMI-S state the following which makes it sound very easy.
When started, the SMI-S Provider automatically discovers all Symmetrix storage arrays connected to the host on which the Array provider is running. No other action is required, such as running a symcfg discover command.
As mentioned earlier, Symmetrix discovery is done in-band through small gatekeeper LUNs presented to the SE / SMI-S server. If it is a virtual server then ensure the LUNs are presented to the VM as physical mode RDMs. The SMI-S release notes have the following to say about best practice.
When using the SMI-S Provider to manage Symmetrix arrays, it is recommended that you configure six gatekeepers for each Symmetrix array accessed by the provider. Only set up these gatekeepers for the host on which the SMI-S Provider is running.
So in theory it should be as simple as completing the following steps:
Present the gatekeeper LUNs to the server (physical or virtual)
Restart the ECOM windows service to restart the SMI-S provider (auto discover arrays)
Use the testsmiprovider.exe tool, run the dv command and verify the Symmetrix array is attached.
Thanks to my colleague Cody Hosterman (who does have a Symm) for the screenshot.
One point to note, if you have SMI-S installed on the same host as the EMC Control Center (ECC) Symmetrix Agent or Symmetrix Management Console (SMC) there are a couple of steps you need to take to avoid some spurious errors. Check out page 37 of the SMI-S v4.3.1 release notes for further information on the changes required to avoid this.
I think the important thing to remember here is that this is version 1.0 of VASA. It may not be the most elegant solution in the world, but it is a start on what I think will become a key feature in years to come. We are fast moving into an age where VMs become objects and we simply check a box to ensure our requirement or service level is delivered. Imagine a scenario where a VM is created and, as part of the creation process, you select the storage based on the VASA information passed up to vCenter from the array. Do I want it on a RAID 5 or RAID 6 protected datastore? Do I want it on a RecoverPoint replicated datastore? Do I want it on a VPLEX distributed datastore? Do I want it on a datastore that is SRM protected? Although it is v1.0, you can see that the potential use cases for this feature are only going to expand.
Some of you may well have seen Chad Sakac’s blog post back in September entitled Help us make VASA (and EMC’s VASA provider) better! It includes a questionnaire about what you, the end customer, want to see from VASA. This is a great chance to have your say and influence how EMC implements VASA going forward; let’s make v2.0 of VASA a feature that delivers on the huge potential v1.0 has shown.
I come into contact with a lot of IT products throughout my day job; some are introduced to me by customers, some by colleagues and some by EMC Partners. Monday was no different, as I got chatting to an EMC partner who was sitting opposite me in the office, and naturally the subject turned to the product his company makes. The company in question is Axxana and the product they make is called Phoenix System RP™, a product designed to deliver zero data loss but in a very different way to the traditional Recovery Point Objective (RPO) = zero infrastructure you’d expect.
Zero Data Loss = Synchronous Replication
Traditionally zero data loss is delivered using synchronous replication technology, and due to the costs involved it tends to be reserved for the most mission-critical systems. With synchronous replication, when an application writes data to the storage, that data has to be written to both storage locations before the application receives a write acknowledgement (see below). As you can imagine, when doing this between two physical sites application latency becomes a key consideration, and as such these setups are usually backed by expensive low latency inter-site fibre connections. Not cheap!
It’s also worth noting that this latency consideration usually restricts the distance between the production and secondary sites. This can still leave you exposed to possible outages, i.e. natural disasters that could impact both sites. To mitigate this, companies often replicate to a third site asynchronously: more leased lines, more storage, more management overhead and generally more expense!
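To put some rough numbers on that distance restriction, here is a quick back-of-envelope calculation. The figures are my own assumptions: light travels through fibre at roughly 200,000 km/s, and I’m ignoring switch and array overheads, which add more latency in practice.

```python
# Back-of-envelope estimate of the minimum round-trip latency synchronous
# replication adds to every write, assuming light travels through fibre at
# roughly two thirds of c (~200,000 km/s = 200 km per millisecond).

FIBRE_SPEED_KM_PER_MS = 200.0

def sync_write_penalty_ms(distance_km: float) -> float:
    """The write travels to the remote site and the acknowledgement travels
    back before the application is released, hence the factor of two."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for km in (10, 100, 500):
    print(f"{km:>4} km -> +{sync_write_penalty_ms(km):.1f} ms per write")
```

At ~100 km that is already around a millisecond added to every single write before any equipment overhead, which is why synchronous replication distances are kept short.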
That extra expense however can often be justified! In my previous role in the finance sector, trading systems or back end pricing warehouses were usually set up in this manner due to the potential cost of service loss or data corruption. Data consistency and RPO was always the key requirement when recovering from an outage, RTO being the obvious runner up. When talking to application owners the message I was often given was “as long as the data is correct I don’t care how long it takes you to get it back”. Obviously they did care about RTO, but recovering a system in 1 hour only to find that the data is inconsistent was not an acceptable outcome post outage.
Zero Data Loss = Asynchronous Replication
So how is Axxana different? What is it they do that allows for zero data loss while using asynchronous replication? Well, first of all it’s important to point out that this product integrates with EMC’s RecoverPoint replication product to provide the asynchronous replication. RecoverPoint works by splitting the write I/O for any protected LUNs, journaling it, then compressing and de-duping it before replicating it with write order fidelity to the target storage location.
Axxana adds another level to this process, which you can see in the diagram below. The first step is to mark a RecoverPoint consistency group as Axxana protected. Once this has been selected, the writes that are usually just synchronously written to the local RecoverPoint appliance (RPA) are also written synchronously to the Black Box via the Axxana collector servers. The collector takes the block stream, adds some consistency-checking metadata, then encrypts and writes the data out to the Black Box for safe keeping. An acknowledgement is only sent back to the application once the write has been committed to both the RPA and the Black Box, in order to guarantee zero data loss. At the same time the RecoverPoint appliance is replicating the data asynchronously across to the DR site as normal. The key point here is that the asynchronously replicated data at the second site and the data held within the Axxana Black Box on the primary site can be merged to create the equivalent of a synchronous data set at DR.
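To make the ordering of those acknowledgements concrete, here is a minimal Python sketch of the dual-commit rule described above. This is purely my own illustration of the concept, not Axxana’s actual code; all class and method names are invented.

```python
# Illustrative sketch (NOT Axxana's implementation) of the dual-commit rule:
# the application only gets its write acknowledgement once the block has been
# committed to BOTH the local RPA and the Black Box, while shipping to the DR
# site continues asynchronously in the background.

class WritePath:
    def __init__(self):
        self.rpa_journal = []   # local RecoverPoint appliance journal
        self.black_box = []     # Axxana Black Box (written via the collector)
        self.dr_queue = []      # blocks pending async replication to DR

    def write(self, block: bytes) -> str:
        self.rpa_journal.append(block)  # synchronous commit #1
        self.black_box.append(block)    # synchronous commit #2 (collector adds
                                        # consistency metadata + encryption here)
        self.dr_queue.append(block)     # drained asynchronously by the RPA
        return "ACK"                    # only now does the application proceed

    def dr_dataset(self):
        """After a disaster, the async DR copy plus the Black Box delta merge
        into the equivalent of a synchronous data set."""
        shipped = self.dr_queue[:-1]           # pretend the last block never left
        delta = self.black_box[len(shipped):]  # recovered from the Black Box
        return shipped + delta
```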
The Phoenix Black Box can contain up to a 300GB SSD, so it is capable of storing a lot of RecoverPoint data. This capacity makes it a perfect solution for protecting against WAN failure scenarios as well as data centre / application / storage failures. While the WAN link is down, the RecoverPoint data is being synchronously played into the Black Box, thus maintaining your zero data loss DR protection.
The disk capacity raised an interesting point for me: how does the Axxana solution know when to expire data from the disk inside the Black Box? I dug a little deeper and spoke to someone at Axxana, who told me the following.
In the initial configuration of an Axxana protected CG, Axxana gets an initial lag size from RP and configures an initial buffer of the same size (+ 10%) on the Black Box SSD. The blocks received from the RP are written cyclically to this buffer. This way we maintain only the last blocks (the delta) in any given moment. The Black Box buffer size is adjusted dynamically according to the changes of the RP lag.
So it decides the space allocation based on the RecoverPoint lag, i.e. the amount of data waiting to be replicated to the secondary site. Dynamically expanding that space allocation allows it to deal effectively with replication lag spikes or WAN link loss. Pretty impressive stuff.
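As a rough sketch of that cyclic buffer behaviour, the following Python models a buffer sized to the RP lag plus 10% that keeps only the newest delta and can be re-sized when the reported lag changes. The names and mechanics are my own guesses, not Axxana’s implementation.

```python
# Rough sketch (my own model, not Axxana's code) of the Black Box buffer:
# sized to the RecoverPoint lag plus 10%, written cyclically so only the
# newest delta is retained, and grown when the reported lag spikes.

from collections import deque

class BlackBoxBuffer:
    def __init__(self, initial_lag_blocks: int):
        # initial buffer = initial RP lag + 10%, as Axxana describe
        self.buffer = deque(maxlen=max(1, int(initial_lag_blocks * 1.1)))

    def write(self, block):
        self.buffer.append(block)  # oldest block falls off: cyclic overwrite

    def adjust(self, current_lag_blocks: int):
        """Re-size when the RP lag changes (lag spike or WAN link loss)."""
        new_len = max(1, int(current_lag_blocks * 1.1))
        if new_len != self.buffer.maxlen:
            self.buffer = deque(self.buffer, maxlen=new_len)
```

A `deque` with `maxlen` gives exactly the “keep only the last N blocks” behaviour described in the quote, which is why I used it for the sketch.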
So that’s how it functions at a high level for the protection, the next question is what happens if my production site is hit by a disaster?
Axxana Black Box Construction
So I’m guessing you’re now thinking: how on earth am I going to recover the data if it’s stored on a piece of infrastructure in the data centre that has just been burnt down / hit by a plane / flooded / insert disaster here? Surely putting it in the primary data centre goes against all data protection logic! Technically you would be right, but you need to understand how this thing is constructed to see why that isn’t going to be a problem.
Axxana describe the Phoenix system as an Enterprise Data Recorder (EDR) based on technology from the aviation industry, i.e. plane black box flight recorders. It’s built as a hardened disaster proof storage device to ensure that the synchronous data held within it remains intact no matter what disaster befalls your data centre.
It’s constructed of three main layers; the protection each provides is listed below, and pictures of the layers can be seen below.
Electronic Box – water protection
Cylinder – shock protection
Fire protection box – Well the name says it all really!
So while the rest of the data centre is a smouldering wreck you can quite happily set about retrieving your data for recovery at the DR site.
So how do you get the data back in the event of a disaster, well that’s where things get interesting and as someone who loves technology I think this next part is pretty cool.
First of all you need to physically locate the system; you do this by tracking the homing signal installed within the Black Box. Once you have found it you can connect a laptop with an Axxana software component installed and extract the data. Now a physical connection is all well and good, but what if the police or fire brigade won’t let you anywhere near the site, let alone dig through the rubble looking for your Black Box? Well, that is where the 3G / 3.5G phone transmitter comes in handy, allowing you to transfer the data from the Black Box using mobile phone technology.
The Black Box obtains an IP address from the nearest mobile phone base station and uses it to communicate over the internet with the Axxana recovery servers, which can be either wired or wirelessly connected to the internet. Every interaction between the Black Box and the recovery server is mutually authenticated using 1024-bit RSA, and all data sent over is encrypted using 128-bit AES with a dynamic key exchange mechanism that automatically changes the key for every 32MB block of data.
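The per-32MB key change can be illustrated with a toy sketch like the one below. To be clear, this is not the real Axxana protocol: I’m deriving stand-in keys with SHA-256 purely to show the rotation boundary, not performing the actual RSA authentication or AES encryption.

```python
# Toy illustration of per-32MB key rotation (NOT the real Axxana protocol):
# each 32 MB window of the data stream is protected under a freshly derived
# key. SHA-256 stands in for the real dynamic key exchange.

import hashlib

BLOCK = 32 * 1024 * 1024  # a fresh key for every 32 MB of data, as described

class RotatingKeyStream:
    def __init__(self, master_secret: bytes):
        self.master = master_secret
        self.sent = 0        # bytes protected so far
        self.epoch = -1      # which 32 MB window the current key belongs to
        self.key = None

    def key_for_next(self, n_bytes: int) -> bytes:
        epoch = self.sent // BLOCK
        if epoch != self.epoch:  # crossed a 32 MB boundary: rotate the key
            self.epoch = epoch
            self.key = hashlib.sha256(
                self.master + epoch.to_bytes(8, "big")).digest()
        self.sent += n_bytes
        return self.key
```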
It’s all very clever stuff, I have to admit I am impressed with both the concept and the end product itself, I would love to speak to someone that has used it in anger.
So this is a product I saw a few years back when I was a customer; it was shown to me by EMC as part of a RecoverPoint sales pitch. I remember at the time thinking it was a pretty cool idea and a full-on, cast-iron way to guarantee the protection of critical data; however, I couldn’t see a use case for it outside large enterprises. After talking about it for a while the other day, I realised that back then I was potentially missing one of the key selling points.
You utilise Axxana so you don’t have to do expensive synchronous replication, so you don’t have to introduce unnecessary application latency, and so you don’t have to have that second site within ~100 km. The reason this product is built to withstand every feasible disaster is so that you can safely use cheaper asynchronous replication over large distances and still guarantee the synchronous replication RPO that the business or application owner demands.
I swear one of those imaginary light bulbs went on above my head while I was discussing it!
If you want to know more about this product check out the Axxana website or please speak to your EMC account manager about the product. Alternatively you can drop me an email and I’ll find someone to talk to you about it.
I’ve recently been looking at the implementation of EMC’s free Virtual Storage Integrator (VSI) with a few of our older Symmetrix customers. Customers using VMAX and VMAXe have the ability to deploy delegated storage provisioning for their VMware admins; however, DMX customers can only use the read-only Storage Viewer functionality, as the DMX is not supported with Storage Pool Manager (SPM), which backs the storage provisioning. Some interesting questions came up recently with a customer about how best to deploy the VSI Storage Viewer with DMX arrays, and I thought it would be worth sharing the findings with a wider audience. Basically I’m looking to cover the different ways the VSI can connect to a Symmetrix array and how some of the options selected affect end-to-end operations.
VSI to Symmetrix Connectivity
So the VSI tool can be used in two ways with Symmetrix arrays: you can utilise the local Solution Enabler (SE) installation that comes with the VSI, or you can use a dedicated Solution Enabler server. It’s important to remember that Symmetrix arrays can only be discovered in-band; basically this means the SE install needs a direct connection to the physical array. This is achieved through the presentation of small LUNs known as gatekeeper LUNs, something existing Symmetrix storage teams will be very familiar with. So let’s look at the two different possible setups.
Local Solution Enabler Deployment
The local deployment model shown above would require a gatekeeper LUN to be presented / zoned through to the machine on which the VI Client, VSI and local SE install have been deployed. Communication with the array in this instance flows directly between the client PC and the array. In the majority of instances this isn’t going to be very practical, for a number of reasons:
- Each VMware admin client with VSI deployed would need a direct array connection.
- Most Symmetrix arrays are FC attached and client PCs are not.
- Arrays live in remote data centres and VMware admin PCs live in the office.
- Increased security risk, i.e. too many direct array connections to secure
Remote Solution Enabler Deployment
The remote deployment model shown above would require gatekeeper LUNs to be zoned through to a dedicated server. VMware admins then connect through this remote SE server when querying Symmetrix arrays for information with the VSI. Communication flow in this instance always goes through the server; however, as you’ll see later, results can be returned from the SE server or the array depending on the VSI configuration. This kind of setup is more practical for a number of reasons:
- Remote SE servers are usually already in place for storage management tasks.
- Available as a virtual appliance for quick deployment if not in place already.
- Supports connectivity by multiple remote VMware admins using VSI.
- Manage multiple Symmetrix arrays through one server.
- Decreases security risk, i.e. single device connection to array.
Mix and Match
The models above are by no means rigid; you can craft a number of solutions out of the principles shown. If your vCenter server sat in the same data centre as the array, you could present gatekeeper LUNs to it and use it as a management point whenever you want to get information from the array. Another possible solution is to put a management virtual machine in the data centre with the VI Client and VSI installed and present a gatekeeper as an RDM; whenever a VMware admin needs information from the array they connect to that management VM to carry out the work. Basically, there is a solution for deploying VSI with Symmetrix arrays no matter what your setup looks like.
VSI Discovery Process Flow
One question that did come up recently was what happens when you select the AutoSync option for a Symmetrix array while using the remote SE server solution: how often does it poll the array? Well, the answer is that it doesn’t, which is strange, as the term AutoSync gives the impression that it syncs with the array on a regular basis. So how does it work?
When AutoSync is enabled, each time you request array data (e.g. by clicking on the EMC VSI tab for a datastore) the request forces the SYMAPI database on the remote SE server to be updated from the array, and the up-to-date array information is then returned to the VSI. There is obviously a slight cost involved in doing this, as the remote SE server needs to make the calls to the array in order to update its local database before responding. Typically this introduces a 10-20 second delay, but that cost means you guarantee the information received is up to date and valid.
When AutoSync is disabled, each time you request array data the request is answered from the cached information in the local SYMAPI database on the remote SE server. This is obviously the fastest method, as you don’t have the cost of querying the array directly for an update, but the information may be out of date.
With AutoSync disabled it’s up to the VMware administrator to initiate the sync of the array from within the VSI. Alternatively, the storage team can initiate a sync with the array directly through the SE server using SYMCLI. To initiate a sync manually, go into the VSI tool and select Symmetrix arrays from the list of features, highlight the array and click on Sync Array.
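The difference between the two query paths can be summed up in a few lines of Python. This is my own model of the behaviour, not EMC’s API; a dictionary stands in for the SYMAPI database on the remote SE server.

```python
# My own model (not EMC's API) of the two VSI query paths: AutoSync on
# refreshes the SE server's SYMAPI database from the array before answering;
# AutoSync off answers from the cache until someone triggers a manual sync.

class RemoteSEServer:
    def __init__(self, array: dict):
        self.array = array             # the live array: source of truth
        self.symapi_db = dict(array)   # local cached copy (SYMAPI database)

    def sync(self):
        """A manual sync, as triggered from VSI or via SYMCLI on the server."""
        self.symapi_db = dict(self.array)

    def query(self, key, autosync: bool):
        if autosync:
            self.sync()  # refresh first: slower (~10-20s in practice) but current
        return self.symapi_db.get(key)  # may be stale when AutoSync is off
```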
The free EMC VSI Storage Viewer tool can be of great benefit to Symmetrix customers, allowing VMware admins improved visibility of the underlying storage layers. In larger environments where Symmetrix arrays are traditionally used, you tend to find VMware and storage are managed by separate teams. Anything that improves the information flow between the two teams during troubleshooting has to be a must-have tool. As shown above, some thought needs to be given to how you set it up. My personal preference would always be the remote SE server solution. Enable AutoSync if your underlying VMware storage environment changes often; if it doesn’t, a manual sync every now and again should suffice.
Additional notes and links
It’s worth noting that the SPC-2 flags need to be set on the FA port or on the initiator of the ESX host for the VSI to work correctly; in fact this is a required setting for ESX generally. This has come up a couple of times recently, so I thought it worth mentioning to ensure people have it set up correctly. The following whitepaper gives you more information.
VSI installation Media
Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)
Solution Enabler Media
Home > Support > Software Downloads and Licensing > Downloads S > Solutions Enabler(Please Note: PowerLink account required)
Solution Enabler Documentation
Home > Support > Technical Documentation and Advisories > Software > S > Documentation > Solutions Enabler – (Please Note: PowerLink account required)
At EMC the vSpecialist team often end up talking to a lot of customers about EMC’s FREE Virtual Storage Integrator (VSI) Plug-ins for vCenter Server. Not only do customers love the fact that it is FREE they also love the features delivered. The ability to accurately view, provision and manipulate EMC storage directly within vCenter empowers VI admins and makes everyone’s life that little bit easier.
When I started writing this article we were on version 4.2 of the VSI plug-ins; following VMworld 2011 we are now up to version 5.0, the fifth generation of this excellent VMware / EMC toolkit. The plug-ins that make up the VSI are listed below; to download them, use the link below or the cookie trail to navigate to the page on EMC PowerLink.
VSI Storage Viewer Plug-in 5.0
VSI Unified Storage Management Plug-in 5.0
VSI Storage Pool Management Plug-in 5.0
VSI Path Management Plug-in 5.0
Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)
One of the great features that people are drawn to is the ability to allow VI admins to provision storage directly from within vCenter. This is done with the VSI Unified Plug-in for Celerra, CLARiiON and VNX(e), and with the VSI Storage Pool Management plug-in for the VMAX. One of the first questions I get asked is how this is secured: how does the storage team ensure that only the right VMware admins are manipulating the underlying storage?
The answer previously was… well, to be honest we didn’t really have an answer to this one. Technically, if you allowed the VMware admins to provision storage you needed to trust them not to go provisioning crazy and fill up your storage array. Obviously that response was not really acceptable for any environment, and EMC have been working to rectify it.
The Access Control Utility (ACU) is a new part of the VSI framework which allows storage administrators to granularly control the availability of storage platforms and the storage pools on those platforms. Once created, these security profiles can be exported, passed to the VMware administrators and imported into the VSI Unified Storage Management plug-in. The following blog post details the steps involved in completing this process for a VNX array in vSphere 4.1.
So we start by double clicking on the shiny padlock icon that was added to your desktop when you installed the VSI Unified Storage Management plug-in. When the ACU starts we are presented with the profile management screen. This will of course be blank the first time you start the utility; in the screenshot below, however, you can see a couple of existing access profiles I have created for some VNX arrays in the lab.
To create a new profile you simply click the Add button; you are then presented with the details screen for the new access profile. Here you enter the name of the profile and a suitable description, and click next when finished.
The next step in the wizard is where you define the storage system that will be permissioned as part of the security profile. Click Add and then select the system you are going to permission; as you can see, the VSI ACU supports Celerra, CLARiiON, VNX and VNXe arrays. For VMAX you need to look at Storage Pool Manager (SPM) to control access; I’ll look to blog about that one at a later date.
The next screen presented very much depends on the storage system you select. If you chose the Celerra option you’re prompted for the details of the control station, username and password. Select the CLARiiON and you’re prompted for the Storage Processor details and login credentials. If you select the VNXe then you’re prompted for the management IP and the login credentials. I’m sure you can see the pattern developing here!
In this example we are dealing with a VNX array and as such the option is whether you want to give access to block storage, file storage or both. As both are controlled differently within the VNX, if you select both you will need to enter the IP and credentials for the Storage Processor (Block) and the VNX Control Station. For the purposes of this example I’m going to use Block only as you can see in the screenshot below.
Once you are authenticated you get to select the granularity of access you want to provide. It’s important to note that when the ACU refers to storage pools it means both storage pools and any traditional RAID groups that may have been created on the VNX array. There are three options available, as you can see in the screenshot below.
All storage pools
This option basically gives a VMware admin free rein to provision LUNs with the VSI all over the array. A potential use case for this may be a dedicated development VMware environment with its own dedicated array, where the storage team don’t care too much about usage.
No Storage Pools
This option is a complete lockdown and acts as an explicit deny to prevent any accidental provisioning on an array, i.e. the VSI Unified Storage Management feature cannot talk to the array, full stop; it won’t even show up as an option.
Selected storage pools
As the name indicates, this option allows the selection of certain storage pools for VSI provisioning. A potential use case here would be a mixed environment where the array is shared between VMware and physical workloads. As a storage administrator you would grant permission to the VMware storage pools only, thus preventing any potential mis-provisioning (not sure that is actually a word, but it certainly has its place when we talk about VSI provisioning).
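The three access levels boil down to a simple check, sketched below in Python. This is my own model of the ACU logic, not EMC’s code; the pool names in the test are simply the ones from my lab profile.

```python
# My own model (not EMC's implementation) of the three ACU access levels:
# "all" permits any pool, "none" is an explicit deny, and "selected" permits
# only the pools / RAID groups named in the profile.

def can_provision(profile: dict, pool: str) -> bool:
    mode = profile["mode"]
    if mode == "all":        # free rein across the whole array
        return True
    if mode == "none":       # explicit deny: the array won't even show up
        return False
    return pool in profile["pools"]  # "selected": only the named pools/groups
```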
In this example I’ve chosen selected storage pools, as I think this is the scenario most people will be looking for the ACU to help them with. In the next screen you are presented with a list of all storage pools / RAID groups on the array. Here you select the storage pools / RAID groups you want to give the VMware admin access to; when you’re happy with your selection, simply click finish. Note in the screenshot below that I have selected two individual storage pools (one is a RAID group) to be part of this particular storage profile.
Once you’ve completed storage pool selection you are returned to the profile screen. You can finish your profile creation right here by clicking finish, or you can add additional storage systems if your VMware environment consists of multiple arrays.
Once you have completed the creation of your security profile the next step is to export it so you can pass it over to your VMware admins. To do this simply highlight the security profile, click on export and save the file.
Choose a location to save the file and don't forget to add a passphrase to the file so that it cannot be misused.
It’s important to remember that the login credentials provided by the storage admin during the ACU profile setup are the ones used when the profile is imported into the VSI. The VMware admin will see the connection details and username being used but will not see the password. For audit purposes on the array it may be best to setup a dedicated account for use with the VSI and storage profiles. It should also be noted that the full details of the storage profile are encrypted within the profile export file as you can see below.
So now that you’ve finished creating your storage profile you can pass it on to the VMware administrators to import into the VSI. To do this you go into vCenter and open up the EMC VSI screen from the home screen. Click on the Unified Storage Management feature, then click on add and select Import Access Profile before clicking next.
You now select the XML file created by exporting the ACU storage profile, you enter the passphrase you selected and click next.
As you can see below the VNX array has been added to the VSI and provisioning access is marked as Restricted. This is as expected as we configured the profile to give access to only two storage pools, FAST_Pool_3_Tier and RAID Group 10.
When you use the EMC VSI to provision storage you will be presented with the VNX array that was part of the imported profile. You select the storage array and as you can see in the screenshot below you can only create storage on the two storage pools that were added to the ACU storage profile.
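Purely as an illustration (the VSI's internal logic isn't published, and the function and pool names below are hypothetical, borrowed from the walkthrough above), the restricted provisioning behaviour boils down to a simple allow-list check against the pools carried in the imported profile:

```python
# Hypothetical sketch of the check the VSI effectively performs after an
# ACU profile import: provisioning is only offered against pools that were
# named in the profile by the storage admin.
ALLOWED_POOLS = {"FAST_Pool_3_Tier", "RAID Group 10"}

def can_provision(pool_name, allowed=ALLOWED_POOLS):
    """Return True only if the requested pool is part of the imported profile."""
    return pool_name in allowed

print(can_provision("FAST_Pool_3_Tier"))  # True
print(can_provision("FAST_Pool_2_Tier"))  # False, not part of the profile
```

Anything outside the allow-list simply never appears as a provisioning target, which is exactly the "Restricted" behaviour shown in the screenshot.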
The EMC Access Control Utility was something I have been looking to write about for a while. Since its release I've often wondered how exactly it worked, what it could and could not do and how it could better meet customer needs. The steps above show that it is possible for a storage team to delegate control of storage pools so VMware admins can quickly provision the storage that they need. Becoming more efficient is something we as vSpecialists talk about on a daily basis, and this tool is one of those first steps that you can take to make life easier. If you are a VMware admin who is working with EMC storage then I suggest you speak to your storage team about this. Likewise if you are a storage admin, reach out to your VMware counterparts and discuss how this could save you both time in the long term.
My boss Chad Sakac put a video together for VMworld 2011 which maybe explains it better (certainly quicker) than I have in this blog post. I left it to the end though so that you'd read the article before discovering it. My step-by-step approach is simply so I can fully understand how it fits together and, as I go, deal with the many "what if" or "how does that work" kind of questions. Hope you find it useful in some way, feel free to comment or ask questions.
Since I started at EMC just over 2 months ago I've been spending a lot of time getting to grips with the large range of products that EMC has in its portfolio. One key product I've been lucky to spend some time learning about is VMAX / Symmetrix, a product range I knew a little about but had never used as a customer or had the chance to deep dive into technically. Luckily for me I got the chance to do the deep dive with 3 days of VMAX training with some of the Symm engineering team from the US.
During this VMAX training some of my more “Symmetrix savvy” colleagues (David Robertson and Cody Hosterman) were telling us about something called EMC StorReclaim. At the time I couldn’t say anything about it as it wasn’t due for unveiling until EMC world but I did take notes with the aim of following up after EMC world. I only found those notes today hence the delay folks!
First things first, this is an EMC internal tool for EMC Global Services usage. I am publishing this in order to bring it to your attention. If you feel you have a need for using this EMC product then please speak to your EMC Rep / TC for more details.
So what is EMC StorReclaim? I could explain it myself but this extract from the release document explains it perfectly.
StorReclaim is a Windows command line utility designed to free allocated but unused storage space as part of EMC’s Virtual Provisioning solution.
StorReclaim determines exactly where the allocated but unused space is and then passes that information on to the Symmetrix for space reclamation. Once the storage capacity is reclaimed, it is put back into the device storage pool for consumption. The process is performed in real time and does not require any running application to shut down.
So what does it support / not support and what can I run it on?
Host operating system requirements
StorReclaim is fully supported on the following operating systems with various Service Packs.
◆ Windows Server 2003 x86/x64
◆ Windows Server 2008 x86/x64
◆ Windows Server 2008 R2
◆ Windows Server 2008 R2 with SP1
StorReclaim also supports Windows Guest OS running on:
◆ Hyper-V Server 2008 or 2008 R2
◆ VMware ESX Server 3.5 or 4.0
◆ VMware vSphere client 4.1.1
Note: For VMware ESX, the physical disks in a virtual environment must be attached to virtual machines using RDM (Raw Device Mapping). For Microsoft Hyper-V, the physical disks must be configured as pass-through disks.
File system requirement
Microsoft Windows NTFS
MBR & GPT
Dynamic Concatenated, Mirrored (excludes striped / RAID 5 Dynamic disks)
Logical volume managers requirement
Microsoft Windows LDM
Storage environment support
StorReclaim supports Symmetrix arrays running Enginuity 5875 and higher.
Supported with EMC Clone and SNAP but only de-allocates storage on the source
Just to clarify one of the points around virtualisation support. This tool does support both physical and virtual Windows server workloads. The key point here is that the virtual machine in question must have RDM-attached disks served from a thin pool.
One other key point worth mentioning is that this tool does not require you to install Solution Enabler or any other EMC host-based software. The tool works via a Windows filter driver and sends SCSI UNMAP commands directly to the VMAX array in order to return the blocks to the thin pool.
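The internals of that filter driver aren't public, but conceptually the job is to map the file system's allocated-but-unused clusters into the contiguous (start LBA, length) block descriptors that a SCSI UNMAP command carries. A rough, purely illustrative sketch (the function name and bitmap representation are my own invention):

```python
def free_runs(bitmap):
    """Collapse a free-cluster bitmap (True = allocated but unused) into
    (start, length) runs, the shape of the block descriptors that a SCSI
    UNMAP command sends down to the array."""
    runs, start = [], None
    for i, free in enumerate(bitmap):
        if free and start is None:
            start = i                      # a new free run begins
        elif not free and start is not None:
            runs.append((start, i - start))  # run ended at i - 1
            start = None
    if start is not None:
        runs.append((start, len(bitmap) - start))
    return runs

# Unused clusters 2-4 and 7 become two UNMAP descriptors
print(free_runs([False, False, True, True, True, False, False, True]))
# [(2, 3), (7, 1)]
```

Once the array receives those descriptors it can return the corresponding thin-pool extents, which is the reclamation the release document describes.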
As I mentioned earlier if you want access to this tool you will need to obtain it from EMC Global Services. I hope that this will be released as a customer tool at some point in the future, this decision may well be based on demand so please ask if it is something you could use.
A common theme that I see coming up time and time again with customers is VDI using Citrix XenDesktop and vSphere 4.1. Its popularity generally stems from the previous success companies have had with the more traditional Citrix products such as Presentation Server / XenApp. I know when I was looking at VDI solutions I was very much in favour of Citrix due to one thing, the ICA protocol. It works and it works well over long distances; in a lot of companies it has proven itself over a long period of time, and it is a protocol they trust to deliver.
Following a customer meeting recently I was desperately searching for an EMC reference architecture (RA) for a XenDesktop / vSphere deployment. At the time it turned out we didn't have a completed one; we did however have one in draft format that was going through the final stages of review. That RA has now been completed and released for public consumption, and an overview of the document's purpose is below.
The purpose of this reference architecture is to build and demonstrate the functionality, performance and scalability of virtual desktops enabled by the EMC VNX series, VMware vSphere 4.1 and Citrix XenDesktop 5. This solution is built on Machine Creation Services (MCS) in XenDesktop 5 and a VNX5300 platform with multiprotocol support, which enabled FC block-based storage for the VMware vStorage Virtual Machine File System (VMFS) and CIFS-based storage for user data.
The RA covers the technologies listed below and details why the VNX array with FAST Cache enabled is a perfect match for your Citrix VDI deployment. One other interesting area that is discussed is the use of Citrix Machine Creation Services (MCS), a new feature in XenDesktop 5 that provides an alternative to Citrix Provisioning Server (PVS). For those new to MCS I suggest you have a read through the following Citrix blog post as there are some additional design considerations around IOPS that need to be taken into account.
Citrix XenDesktop 5
Microsoft Windows 7 enterprise (32-bit)
Citrix Machine Creation Services (MCS)
VMware vSphere 4.1
EMC VNX 5300 – (Fast Cache & VAAI enabled)
EMC virtual storage integrator (VSI) – Free on EMC PowerLink
If you are considering XenDesktop 5 and vSphere 4.1 then I suggest you download and have a read through the RA linked below.
For VMworld 2011 this year you, the public, can have your say on what sessions you would like to see at the show. Session voting has opened today and will run up until the 18th of May. This is the first time I have submitted a topic for VMworld, and I'm actually really excited that the submission has made it to the public vote. I'm extremely lucky to be joined by an ex-colleague of mine whose passion for virtualisation often even exceeds my own. I have no doubts that this session will prove useful to a lot of people out there on their personal journey to the cloud.
To vote simply locate the session, click on the “Thumbs Up” symbol next to the Session and you will receive confirmation that your vote has been registered. Thanks for taking the time to read this and of course for voting.
2805 Virtualizing Mission-Critical Tier 1 Microsoft SQL Applications.
VMware vSphere has proven to be the most robust platform for the virtualization of business applications, yet some organizations remain reluctant to virtualize their mission-critical Tier 1 Microsoft SQL applications. Come to this session and learn best practices for virtualizing MS SQL with vSphere developed through real-world customer implementations. This session will cover analyzing current workloads, benchmarking virtual infrastructure, choosing appropriate hardware, and making the right software configuration choices to ensure the success of your key business applications on vSphere.
I hope that by now all of you have heard of EMC's Virtual Storage Integrator (VSI). This is EMC's FREE vCenter plugin that offers the VMware administrator the ability to interact with EMC storage directly from vCenter. If you haven't heard about it then check out this blog post by my boss Chad Sakac to learn more.
So back to my original topic. This morning I've been reading about a great new product that is currently being worked on by EMC to help simplify storage provisioning for Windows administrators. It's simply called the EMC Storage Integrator (ESI). Right now you would probably have to contact multiple teams or use the following tools to provision and configure storage for a Windows server.
Array Management Tools (Unisphere / Navisphere)
Windows Server Manager
Failover Cluster Manager
ISCSI initiator / FC Switch Management Tools
With ESI you can now provision storage as well as conduct some additional application specific functions from one MMC console, simple, easy, delegated self service storage management. Current supported capabilities include the following.
- Provision, format and present drives to a Windows server
- Provision new cluster disks and add them to the cluster automatically
- Provision shared CIFS storage and mount it on a Windows server
- Provision SharePoint storage, sites and databases in a single wizard
This tool is currently in beta testing so isn't widely available just yet. When it does become generally available it will support all EMC storage (VNX, VNXe, VMAX, CX and NS) and it will be FREE in exactly the same way as the EMC VSI tool.
Check out the demo put together by the EMC team responsible, ahhh integration goodness!
I got an email last night from my very, very busy EMC colleague Simon Seagrave. He’s been working hard with the rest of the EMC tech enablement team to prepare the vSpecialist vLab sessions for EMC World in Las Vegas.
Let me just say, they have worked wonders and have a superb floor show prepared for all of you who will be attending from May 9th onwards. They have created a 200-seat hands-on lab covering the EMC products shown below; something for everyone I'm sure you'd agree.
To attend one of the vLabs simply register at the console just outside the vLab room. Sign up for a lab and when it's your turn your name will flash up on the big screen and a vSpecialist will take you to your seat. It's a nice simple process, and you may well find that it'll be me escorting you to your seat. C'mon people, get involved!
Some time ago I wrote a blog post about Microsoft Virtual Desktop Access (VDA) licensing that was introduced back in July 2010. For those that don’t want to read the whole article the summary of VDA was as follows.
- You need to licence the endpoint accessing a Windows VDI desktop.
- It's £100 per year per endpoint.
- Multiple endpoints each need a licence, i.e. home PC, office thin client, iPad
- VDA is included if the endpoint is Windows and is Software Assured
I remember at the time thinking that this was going to hinder VDI deployment projects. The additional on-going cost of licensing every potential endpoint a user may use was going to push TCO up, increase the time for ROI to be realised and generally make VDI a very unappealing prospect. Don’t even get me started on how difficult this makes it for service providers to create a Windows Desktop as a Service offering.
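To make the on-going cost concrete, here is a back-of-the-envelope calculation using the £100 per endpoint per year figure from the summary above. The user and endpoint counts are made-up illustrative numbers, not from any real deployment:

```python
# Why per-endpoint VDA subscription pushes VDI TCO up: every device a user
# connects from needs its own annual licence.
VDA_PER_ENDPOINT = 100  # £ per endpoint per year (figure from the post)

users = 500                 # hypothetical deployment size
endpoints_per_user = 2      # e.g. office thin client plus home PC
years = 3                   # a typical project payback window

total = users * endpoints_per_user * VDA_PER_ENDPOINT * years
print(f"£{total:,}")  # £300,000 of licensing on top of the VDI build cost
```

An extra six-figure recurring cost of that shape is exactly the kind of thing that stretches the time to ROI mentioned above.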
Recently one of my esteemed colleagues at EMC (another vSpecialist by the name of Itzik Reich, whose blog you can find here) sent out an email about Microsoft releasing a customer technology preview (CTP) of a product called Windows Thin PC (WinTPC). In summary this is a slimmed down version of Windows 7 and is designed for the re-purposing of old PC equipment as thin client devices.
It has a couple of features worth mentioning for those technically minded people out there.
- RemoteFX support for a richer, higher fidelity hosted desktop experience.
- Support for System Center Configuration Manager, to help deploy and manage.
- Write filter support helps prevent writes to disk, improving end point security.
WinTPC and / or VDA
So how does this new product fit in with the rather expensive VDA licensing? Well the good news is that WinTPC can be used to access a VDI desktop without the need for a VDA licence. On the downside WinTPC will only be available as a benefit of Software Assurance for volume licensees. Now seeing as the VDA licence doesn't apply to an endpoint that is Windows-based and covered by Software Assurance, it makes no real difference from a licensing point of view which option you go for. So if you have Software Assurance the choice is yours; if you don't, well, coughing up for VDA licences each year is your only option I'm afraid.
What WinTPC does allow companies to do is maximise existing PC hardware investments. This should allow companies to offset some of that initial upfront cost often associated with VDI projects. Microsoft's idea is that companies can try out VDI using WinTPC and existing PC assets; when these PCs become end of life they can swap over to using Windows Embedded devices without needing to change the management tools. Now VDI is not cheap: capital costs can be high, and savings are usually made in operational and management costs later down the VDI journey. As I mentioned at the start of this post, the VDA licence has not helped VDI adoption as it increases both capital and operational costs due to its annual subscription cost model. Will this new release from Microsoft help reduce costs?
My opinion? I personally think Microsoft are in a tricky position. They're somewhat behind the curve on the VDI front and I always felt the VDA licence was designed to slow VDI adoption while they gained some ground on the competition. If anyone chose to forge ahead regardless, well, Microsoft would generate some nice consistent revenue through the VDA licence. So the prospect of a WinTPC release is a nice touch by Microsoft during these hard economic times, but not everyone can benefit. What I would like to see is Microsoft offer this outwith Software Assurance, selling it as a single one-off licence cost as an alternative to the annual subscription model used with VDA. Give your customers the choice and let them get on with their VDI journey; be part of it as opposed to being the road block!
If you are interested in learning more, check out the links below. To download the CTP version of WinTPC, go to Microsoft Connect and sign up. I would love to hear what you think.
So 2010 is already almost over and now that I have a little spare festive holiday time I thought it would be a good chance to reflect on my year of blogging. I have to say I was also inspired by Eric Gray over at vCritical who had written a top 10 posts article, I thought that might be quite interesting to add in as well.
It’s been a great year for me this year, I maybe haven’t blogged as much as I would have liked to due to work commitments and the small matter of a month long trip to Alaska to indulge in my love of snowboarding. However I have had some tremendous experiences along the way this year while blogging when I could.
Highlights for me included my invite to the GestaltIT Tech Field Day in Seattle, where I got to meet a number of other bloggers from across the world and was given the chance to see and comment on some very interesting existing and new technologies.
My trip to VMworld 2010 in San Francisco was also another great chance to finally meet up with other bloggers. The tweet-up was a superb chance to meet the likes of John Troyer, Simon Seagrave, Eric Gray and many others. VMworld itself was once again a superb educational event; I took part in two great group sessions, one with Chad Sakac and one with Scott Drummonds. It was a great new format for 2010 and one that I will definitely be revisiting if I get the chance to attend again.
On the back of my VMworld trip I was also invited to talk at the Scotland VMUG, quite a daunting experience if I’m honest. However it turned out to be something I really enjoyed and would be happy to do again. I even enjoyed the difficult questions thrown at me by Mike Laverick, all of them great conversation starters.
Top 10 posts – 2010
1. vSphere ESX4 on a USB key / Pen Drive (6,524 views)
2. Virtualisation Visio Stencils – Microsoft, VMware, Citrix (4,707 views)
3. EMC Navisphere Simulator Download (4,689 views)
4. VMware Remote Console – VI3, VMware Server 2 (3,702 views)
5. Virtualisation Visio stencils (3,395 views)
6. How to run Citrix XenServer 5.5 on VMware vSphere (3,379 views)
7. vSphere 4.0 – What's new in vSphere Storage (1,854 views)
8. Where to start with your VMware ESX Whitebox (1,720 views)
9. VMware Visio Pack available – VIOPS (1,672 views)
10. vSphere vMotion Processor Compatibility and EVC Clusters (1,647 views)
It's interesting that people prefer the practical how-to or information-linking posts as opposed to the review or opinion pieces. I will keep that in mind for my 2011 blog postings.
A massive thank you to everyone who has been a visitor to this site over 2010. All that remains to be said is have a great New Year and all the best for 2011.
EmcpEsxLogEvent: Error :emcp :MpxEsxPathClaim :Could not claim the path. Status : Failure
ScsiPath: 3815: Plugin 'PowerPath' had an error (Failure) while claiming path
ScsiDevice: 1726: Full GetDeviceAttributes during registration of device 'naa.500…' : failed with I/O error
ScsiDeviceIO: 5172: READ CAPACITY on device 'naa.50…' from Plugin "NMP" failed. I/O error
While perusing Twitter this evening I stumbled across a few tweets from people I follow (@sakacc, @scott_lowe and @emccorp) about a new EMC PowerShell toolkit. I'm actually a little surprised that it's taken EMC this long, especially given the success of the VMware PowerCLI. It's worth noting that EMC's competitors have had offerings in this space for some time now: Compellent's PowerShell toolkit has been available since late 2008 and NetApp's PowerShell offering was announced earlier this year. I'm not going to hold it against EMC though; they are the current kings of innovation and who can blame them for dropping the ball slightly on the PowerShell front.
So the story goes that the EMC Storio PowerShell Toolkit (PSToolkit) has been available internally within EMC for a while now. EMC are now looking to increase its exposure by releasing a pre-release version of the EMC PSToolkit for testing and feedback. At present it only consists of a small subset of commands, which you can see in the screenshots below.
There are a few caveats that you need to be aware of for this pre-release version. The requirements below are taken directly from the EMC community post; I have included links for the downloads to make life a little easier.
- SMI-S Provider 4.1.2 or later versions – EMC PowerLink logon required
- PowerShell 2.0 CTP3 – CTP3 is quite old and not available anymore, link to 2.0 provided
- .Net Framework 3.5 – Advice is to utilise Windows Updates to update .NET Framework
- Windows XP, Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 for Array management commands. It is known that the 32-bit cmdlet set does not execute properly on Windows 7 and may not execute properly on Windows Vista.
Of course you will also need to download the EMC PSToolkit itself. You can find this over on the EMC Community site: https://community.emc.com/docs/DOC-8561. Remember to provide your feedback to EMC, you have their attention! What do you want to see added here, what do you want to script, what are your use cases?
I've been building a few new HP DL380 G7 servers as ESX 4.1 hosts and was having the usual nightmare finding anything on HP's website. I was specifically looking for HP Insight Management Agent downloads and had been searching for ages when I came across a page called VMware from HP for ProLiant. I've still no idea how I stumbled across it, but I'm writing this down now so I never lose it.
The screenshot below, which is from the bottom of the web page, lists all the HP Insight Management agents for ESX 4.x that I could possibly need.
As you can see from the next screenshot the web page covers all versions of ESX. It also has an extensive list of servers covering the BL, DL, ML and SL ProLiant ranges. Click on the tick and it will take you straight to the appropriate page for that server so you can download everything you may need regardless of end operating system. So it acts as a support matrix and also as a collection of links to the correct product download pages; can't ask for much more than that. Good work HP, not often I say that about your website.
Something has been bugging me for some time now about vCenter disk performance statistics. Basically vCenter shows each SCSI LUN with a unique ID as per the following screenshot. When viewed through the disk performance view it’s impossible to tell what is what unless of course you know the NAA ID off by heart!?
I was working on a project this weekend putting a Tier 1 SQL server onto our vSphere 4.0 infrastructure, therefore insight into disk performance statistics was key. So I decided I needed to sort this out and set about identifying each datastore and amending the SCSI LUN ID name, here is how I did it.
Identify the LUN
First of all navigate to the datastore view from the home screen within vCenter
Click on the datastore you want to identify and then select the configuration tab
Click on the datastore properties and then select manage paths
Note down the LUN ID (in this case 2) and also note down the capacity
Change the SCSI LUN ID
Now navigate to the home screen and select Hosts and Cluster
Select a host, change to the configuration tab and then select the connecting HBA
At the bottom, identify the LUN using the ID and capacity noted earlier and rename the start of the ID. I chose to leave the unique identifier in there in case it is needed in the future.
Now when you look at the vCenter disk performance charts you will see the updated SCSI LUN ID, making it much more meaningful and usable.
Raw Device Mappings
If you have Raw Device Mappings (RDMs) attached to your virtual machines then these too can show up in the vCenter disk performance stats. It's the same process to change the name of the SCSI LUN ID, however identifying them is slightly different. To do so carry out the following.
Edit the settings of the VM, select the RDM file, select Manage Paths and then note down the LUN ID for the RDM. Use this to identify the LUN under the Storage Adapter configuration and change it accordingly.
Having made these changes I can now use the vCenter disk counters to complement ESXTOP and my SAN monitoring tools. Now I have a full end-to-end view of exactly what is happening on the storage front, invaluable when virtualising Tier 1 applications like SQL 2008.
There is a plethora of metrics you can look at within vCenter; if you would like to understand what they all mean then check out the following VMware documentation.
So I haven't done a lot of real-time blogging at VMworld this year as I've been busy trying to see and soak up as much as possible. It's not every day that you get access to the likes of Chad Sakac (VP, EMC / VMware alliance), Scott Drummonds (EMC, ex-VMware performance team) and a whole host of other technology movers and shakers. As you can imagine I took full advantage of these opportunities and blogging became a bit of a secondary activity this week.
However, I've now had time to reflect, and one of the most interesting areas I covered this week was the new storage features in vSphere 4.1. I had the chance to cover these in multiple sessions, see various demos and talk about them with the VMware developers and engineers responsible. There are two main features I want to cover in depth as I feel they are important indicators of the direction that storage for VMware is heading.
SIOC – Storage I/O Control
SIOC had been in the pipeline since VMworld 2009, I wrote an article on it previously called VMware DRS for Storage, slightly presumptuous of me at the time but I was only slightly off the mark. For those of you who are not aware of SIOC, to sum it up again at a very high level let’s start with the following statement from VMware themselves.
SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts
Though you have always been able to assign disk shares to VMs on an ESX host, this only applied to that host; it was incapable of taking account of the I/O behaviour of VMs on other hosts. Storage I/O Control is different in that it is enabled on the datastore object itself, and disk shares can then be assigned per VM within that datastore. When a pre-defined latency threshold is exceeded on the datastore, SIOC begins to throttle I/O based on the shares assigned to each VM.
How does it do this, what is happening in the background here? Well SIOC is aware of the storage array device level queue slots as well as the latency of workloads. During periods of contention it decides how it can best keep machines below the predefined latency tolerance by manipulating all the ESX Host I/O Queues that affect that datastore.
In the example below you can see that, based on disk share values, all VMs should ideally be making the same demands on the storage array device-level queue slots. Without SIOC enabled that does not happen. With SIOC enabled it begins throttling back the use of the second ESX host's I/O queue from 24 slots to 12 slots, thus equalising the I/O across the hosts.
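The proportional mechanism can be sketched in a few lines. This is a simplified model of the idea, not VMware's actual algorithm, and the function and host names are my own invention: the device queue slots get divided across hosts in proportion to the aggregate disk shares of the VMs running on each one.

```python
def sioc_slots(total_slots, shares_per_host):
    """Split an array device queue across hosts in proportion to the
    aggregate VM disk shares on each host (simplified model of SIOC)."""
    total_shares = sum(shares_per_host.values())
    return {host: round(total_slots * shares / total_shares)
            for host, shares in shares_per_host.items()}

# Two hosts with equal aggregate shares: a 36-slot device queue ends up
# split evenly instead of one busy host monopolising most of the slots.
print(sioc_slots(36, {"esx01": 1000, "esx02": 1000}))
# {'esx01': 18, 'esx02': 18}
```

Skew the shares (say 3000 against 1000) and the split follows suit, which is the "proportional allocation of shared storage resources" from the VMware statement above.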
Paul Manning (Storage Architect, VMware product marketing) indicated during his session that there is a benefit to turning SIOC on without even amending the default share values. This configuration immediately introduces an element of I/O fairness across a datastore, as shown in the example described above and shown below.
So this functionality is now available in vSphere 4.1 for Enterprise Plus licence holders only. There are a few immediate caveats to be aware of: it's only supported with block-level storage (FC or iSCSI), so NFS datastores are not supported. It also does not support RDMs or datastores constructed of extents; it only supports a 1:1 LUN-to-datastore mapping. I was told that extents can cause issues with how the latency and throughput values are calculated, which could in turn lead to false-positive I/O throttling; as a result they are not supported yet.
It's a powerful feature which I really like the look of. I personally worry about I/O contention and the lack of control I have over what happens to those important mission-critical VMs when that scenario occurs. The "noisy neighbour" element can be dealt with at the CPU and memory level with shares, but until now you couldn't do so at the storage level. I have previously resorted to purchasing EMC PowerPath/VE to double the downstream I/O available from each host and thus reduce the chances of contention. I may just rethink that one in future because of SIOC!
Further detailed information can be found in the following VMware technical documents
VAAI - vStorage API for Array Integration
Shortly before the vSphere 4.1 announcement I listened to an EMC webcast run by Chad Sakac. In this webcast he described EMC's integration with the new vStorage API, specifically around offloading tasks to the array. So what does all this mean, and what exactly is being offloaded?
Hardware-assisted locking as described above provides improved LUN metadata locking. This is very important for increasing VM-to-datastore density. If we use the example of VDI boot storms: if only the blocks relevant to the VM being powered on are locked, then you can have more VMs starting per datastore. The same applies in a dynamic VDI environment where images are being cloned and then spun up; the impact of busy cloning periods, i.e. first thing in the morning, is mitigated.
The full copy feature would also have an impact in the dynamic VDI space, with cloning of machines taking a fraction of the time as the ESX host is not involved. What I mean by that is that without VAAI, when a clone is taken, the data has to be copied up to the ESX server and then pushed back down to the new VM storage location. The same occurs when you do a Storage vMotion; doing it without VAAI takes up valuable I/O bandwidth and ESX CPU clock cycles. Offloading this to the array prevents this use of host resources and in tests has resulted in a saving of 99% on I/O traffic and 50% on CPU load.
In the EMC labs a test of Storage vMotion was carried out with VAAI turned off; it took 2 minutes 21 seconds. The same test was run with VAAI enabled, and this time the Storage vMotion took 27 seconds to complete. That is a 5x improvement, and EMC have indicated that they have seen a 10x improvement in some cases. Check out this great video which shows a Storage vMotion and the impact on the ESX host and the underlying array.
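As a quick sanity check on the claimed 5x figure, here are the lab numbers from above as arithmetic:

```python
# The EMC lab Storage vMotion timings quoted above
without_vaai = 2 * 60 + 21   # 2 min 21 s = 141 s, VAAI off
with_vaai = 27               # seconds, VAAI on

speedup = without_vaai / with_vaai
print(f"{speedup:.1f}x")  # 5.2x, in line with the quoted 5x improvement
```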
There is also a fourth VAAI feature which has been left in the vStorage API but is currently unavailable; Mike Laverick wrote about it here. It's a thin provisioning API, and Chad Sakac explained during the group session that its main use is for thin-on-thin storage scenarios. The vStorage API will in future give vCenter insight into array-level over-provisioning as well as VMware over-provisioning. It will also be used to proactively stun VMs as opposed to letting them crash as currently happens.
As far as I knew, EMC was the only storage vendor offering array compatibility with VAAI. Chad indicated that they are already working on VAAI v2, looking to add additional hardware offload support as well as NFS support. It would appear that 3PAR offer support too, so that kind of means HP do too, right? Vaughn Stewart over at NetApp also blogged about their upcoming support of the VAAI; I'm sure all storage vendors will be rushing to make use of this functionality.
Further detailed information can be found at the following locations.
Storage DRS – the future
If you’ve made it this far through the blog post then the fact we are taking about Storage DRS should come as no great surprise. We’ve talked about managing I/O performance through disk latency monitoring and talked about array offloaded features such as storage vMotion and hardware assisted locking. These features in unison make Storage DRS an achievable reality.
SIOC brings the ability to measure VM latency, giving a set of metrics that can be used for Storage DRS. VMware are planning to add a capacity metric to the Storage DRS algorithm and then aggregate the two metrics for placement decisions. This will ensure a storage vMotion of an underperforming VM does not lead to capacity issues, and vice versa.
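As a thought experiment, aggregating the two metrics might look something like the sketch below. To be clear, this is my own illustrative heuristic, not VMware's actual Storage DRS algorithm; the field names and weighting are invented for the example.

```python
def rank_datastores(datastores, latency_weight=0.5):
    """Rank candidate datastores by blending latency headroom with free
    capacity, so a fast but nearly-full datastore scores poorly, and so
    does a roomy but overloaded one. Illustrative heuristic only."""
    def score(ds):
        # 1.0 when idle, 0.0 at (or beyond) the configured latency threshold
        latency_headroom = max(0.0, 1.0 - ds["latency_ms"] / ds["threshold_ms"])
        capacity_headroom = ds["free_gb"] / ds["total_gb"]
        return latency_weight * latency_headroom + (1.0 - latency_weight) * capacity_headroom
    return sorted(datastores, key=score, reverse=True)
```

The point of blending rather than ranking on latency alone is exactly the failure mode described above: moving an underperforming VM to the lowest-latency datastore is no good if that datastore is about to run out of space.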
Hardware Assisted Locking in VAAI means we don't have to be as concerned about the number of VMs in a datastore, something you have to manage manually at the moment. This removal of a limitation means we can automate better; a Storage DRS enabler, if you will.
Improved storage vMotion response due to VAAI hardware offloading means that the impact of Storage DRS is minimised at the host level. This is one less thing for the VMware administrator to worry about and hence smooths the path for Storage DRS adoption. As you may have seen in the storage vMotion video above, the overhead on the backend array also appears to be reduced, so you're not just shifting the problem somewhere else.
For more information I suggest checking out the following (VMworld 2010 account needed)
There is so much content to take in across all three of these subjects that I feel I have merely scratched the surface. What was abundantly clear from the meetings and sessions I attended at VMworld is that VMware and EMC are working closely to bring us easy storage tiering at the VMware level. Storage DRS will be used to create graded / tiered data pools at the vCenter level: pools of similar-type datastores (RAID, disk type). Virtual machines will be created in these pools, auto-placed and then moved about within that pool of datastores to ensure capacity and performance.
In my opinion it's an exciting technology, one I think simplifies life for the VMware administrator but complicates life for the VMware designer. It's another performance variable to concern yourself with, and as I heard someone in the VMworld labs comment, "it's a loaded shotgun for those that don't know what they're doing". Myself, I'd be happy to use it now that I have taken the time to understand it; hopefully this post has made it a little clearer for you too.
I currently find myself sitting rather bored on the flight across to San Francisco for VMworld. Just to clarify, it's certainly not the thought of VMworld that's boring me, far from it in fact! Instead it's the rather poor choice of in-flight movies that's got me thinking about what this VMworld is going to bring us and what I am going to take away from it at the end of the week.
It’s now been 3 years since my last VMworld visit and in that time things have moved on significantly in the industry. Back in 2007 VMware really had no competitors, Hyper-V hadn’t even been released yet. The phrase Cloud computing wasn’t being mentioned at every opportunity and the likes of VMware Fault Tolerance and Storage vMotion were still confined to the stage as keynote demo technologies. It’s mind boggling to think how much has changed in those 3 short years, what a very different landscape it now is!
So I’m wondering what can we expect from VMworld 2010? Well there has been plenty of speculation that VMware will announce View 4.5, I’m now taking that one as a given as it has been so widely commented on already. I’m also expecting lots on private and public cloud infrastructures and the transformation steps required to take it from concept to the real deal. All the big IT companies in the world are working towards this model, the concept is out there and now very much on the loose. I’m hearing more chatter about it from people who don’t even work in IT! On the Cloud front I am expecting to hear lots about how to overcome the fears, the risks involved, the security concerns and all those other things that have the sceptics worried and concerned. Oh and you just know that VMware will have a few little extra special announcements hidden up their sleeve, it wouldn’t be a VMworld without them.
From my own perspective I have a few angles to cover this year. My employer is funding my attendance and therefore I have a number of specific topics I need to cover as part of my day job. Deploying Exchange 2010 in a vSphere environment is one key area, and another is looking at the transition from ESX to ESXi in advance of the service console's retirement. On top of this I will be looking at the longer-term strategic view: where is the industry going, and are we pulling in the same direction or do we need to change tack?
As an aside, I have also been asked to present to the Scotland VMware User Group upon my return to the UK. The theme for the VMUG being “VMworld looking back, looking forward” which is meant to cover VMworld US and my experience of it, while also encouraging people to attend the VMworld Europe event in Copenhagen. Hopefully there will be plenty of good content for me to report back on but enough announcements held over to make VMworld Europe in Copenhagen appealing.
It’s shaping up to be a great week, I’m looking forward to getting stuck in tomorrow first thing. I Just hope the jet lag doesn’t hit me to hard as I have a feeling the various vendor parties and ad-hoc beers just might!
I recently purchased three new DL380 G7s for a new ESX deployment, and as part of that purchase I had multiple options when it came to choosing power supplies. I tend to default to the larger power supplies when purchasing servers, as this usually means the server can support its maximum configuration, i.e. built-in scalability.
Recently we had a service provider change their cost model from a flat fee to a metered power cost model. As a result I decided to take a closer look at the actual power consumption of the DL380 G7’s I was buying. That’s when I discovered another really useful HP online tool called the HP Power Advisor Tool.
This tool does still appear to be a work in progress, as some features are not available when you click on them, but in general there is a lot of very useful functionality in there. You can build a single server, configure it and get your power specifications based on the components within. This is what I did with my DL380 G7s, as you can see by clicking the thumbnail below.
If your configuration is a little more advanced, you can drop in a rack and then configure it with all manner of HP Server goodness. You can even drop in blade enclosures and then configure the interconnects and individual blades as you can see by clicking the thumbnails below.
I’m quite impressed with some of the extra tools I’ve been finding on HP’s website, I blogged recently about the HP Server DDR3 Memory Configuration Tool which helped me out when a reseller was trying to give me the wrong memory configuration. I’m just wondering what I’m going to find next!!
So it’s that time again, another Scottish VMware User Group meeting is upon us. This time there is a slight difference, I’m actually presenting which I have to say is not daunting in the slightest?!?!
I’ve been asked to present on my experience at VMworld 2010 in San Francisco, an event I am looking forward to immensely. I’ll be aiming to tell you all about VMworld, it’s format, what I did there and why you should invest both the time and the money to attend future events.
If you are interested in registering for this VMUG meeting please register soon, I have it on good authority that 50% of the places were snapped up within 24 hours of the invite going out. Hopefully I’ll see you there.
Please join us for the upcoming Scottish VMware User Group meeting on Thursday, September 23rd.
This is a great opportunity to meet with your peers to discuss virtualization trends, best practices and the latest technology.
Introduction (Scott Walkingshaw)
VNews (Alistair Sutherland)
VMworld 2010 Customer Presentation (Craig Stewart/Martin Currie)
The theme of the meeting will be "VMworld 2010 – Looking Back/Looking Forward." Come to network and share ideas with an innovative group of VMware users as you learn how to get the most out of your attendance at VMworld EMEA, and to get a brief overview of the highlights that will by then have been covered at VMworld US.
I was lucky enough last week to be involved in a Gestalt IT conference call with Symantec. The conference call was designed to give us all a sneak preview of what Symantec were planning to announce at VMworld 2010 in a couple of weeks. Unfortunately it was under embargo, that is until today!
There were a couple of announcements being made, Symantec introduced a new NFS storage product called VirtualStore and made some further announcements about NetBackup 7 and new VMware specific features. However the most interesting announcement on the call for me was the release of Symantec Application HA for VMware.
Symantec have been looking at why customers are not going "the last mile" with virtualisation: why are customers not deploying their Tier 1 applications on their virtual platforms? Symantec's view is that customers still have issues with application-level failure within guest VMs. This product has been designed to fill that void, and at present it has no real competitors.
As the call progressed the current HA options were described by Symantec and discussed by the group. The obvious one is VMware HA which covers a physical host failure event. Within the VMware HA product there is also VM monitoring which covers you in the event of an OS level failure event, such as a blue screen. Then you can of course employ other technologies such as OS level clustering, however you then have to take heed of caveats that hinder the ability to use features such as vMotion and DRS.
I’m always sceptical when I see new virtualisation products, one of my fears is that companies are attempting to just jump on the crest of the wave that is virtualisation. Symantec are obviously a bit more established than your average company, but as always the jury is out until we see a final product doing the business for real. It transpired during the call that the product is actually based on Symantec Veritas Cluster Server, a product with a long history in application availability.
Veritas Cluster Server has a lot of built-in trigger scenarios for common products such as Microsoft SQL Server, Exchange Server and IIS. On top of this out-of-the-box support, Symantec also have a VCS development kit allowing custom scenarios to be written. I like this approach; it reminds me of F5 Networks' use of the customer community to support the writing of custom rules and features for their products. If a custom rule or feature has enough demand, they spend the time developing it into their product range. Perhaps Symantec could look at leveraging their customer base and community in this way to improve the support around VCS trigger scenarios. One other potential use of the VCS SDK that springs to mind is for application vendors making specialist software: CRM, ERP, finance systems, etc. They could build Application HA into pre-configured virtual appliances, which would be a great selling point for any software vendor.
The product deploys as an in-guest agent. Technical deep-dive information on the exact integration between the Symantec product and VMware was thin on the ground. However, there was mention of Symantec's integration with the VMware HA API, something I don't think has been announced by VMware just yet. The description given during the call was that if Symantec Application HA failed to restart the application, it could send a downstream API call to VMware HA and ask it to restart the VM's operating system. An interesting concept, and something I am sure we'll hear more about at VMworld.
Licensing for this new product is quite competitive at $350 per virtual machine, a small price to pay for ensuring your Tier 1 application recovery is automated. Symantec have promised full integration with vCenter Server, and the screenshot below shows Symantec Application HA in action monitoring a SQL 2008 server; click on the thumbnail to see a full-size image.
If you would like to learn more about Application HA, then get along to VMware and Symantec’s break out session at VMworld. - http://www.vmworld.com/docs/DOC-4658
Alternatively you can listen to a Podcast from Symantec’s Niraj Zaveri discussing the new product. - http://www.symantec.com/podcasts/detail.jsp?podid=ent_application_ha