A customer had all of their VMware databases for vCenter Server, Update Manager, Single Sign On (SSO), vCloud Director and vCenter Chargeback running on one big SQL Server, where they shared resources with other databases. They asked me to move the databases to a new MS SQL Server because the load of the VMware databases was higher than expected. I prepared the move by reading a few VMware KBs that describe how to move the databases, but I still ran into some issues. That is why I wrote this blog post: to have a good manual for the next time I have to move these databases.
This first post shows how to move the SSO database.
Move the SSO database
Before the actual move, make sure that you’ve read the following KB: Updating the vCenter Single Sign On server database configuration (2045528). Since we also have to move the database, which is not described in the mentioned KB article, there are some extra steps to perform. This post only works for MS SQL Server.
- Make a connection to the old database server.
- On the SSO server, stop the Single Sign On service.
- On the old database server, make a backup of the SSO database (default name is RSA).
- After a successful backup, set the SSO database to Offline.
- Copy the backup to the new database server.
- On the new database server, import the SSO database.
- In the Security section of the SSO database, check that the user RSA_USER is present.
After this, the SSO database is available, but there is no login user connected. Usually when you create a SQL user, you also make a mapping to a database which then automatically creates the database user. In our case the database user is already present but is not yet mapped to a SQL user. That’s what we’ll do now. First let’s make sure the RSA_USER is not yet mapped:
- Run the following SQL query against the SSO database to show all unmapped (orphaned) users of the database: EXEC sp_change_users_login 'report'
- Now create a new SQL user (SQL authentication) at the SQL Server level, not at the database level. Name this user RSA_USER and use the same password the database user RSA_USER has. Set the default database to RSA (the SSO database).
- Run the following SQL query against the SSO database to map the server-level RSA_USER login to the database-level RSA_USER user: EXEC sp_change_users_login 'update_one', 'RSA_USER', 'RSA_USER'
- To check that it worked, rerun the report query against the SSO database. RSA_USER should no longer show up: EXEC sp_change_users_login 'report'
When running the queries, make sure you run them against the correct database. See image.
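The queries above can be wrapped in a small sqlcmd script so you hit the right database every time. This is a minimal sketch: the server name (newsqlserver) and the use of Windows authentication (-E) are assumptions for illustration, and with DRY_RUN=1 (the default here) the script only prints the sqlcmd invocations instead of executing them.

```shell
#!/bin/sh
# Remap the orphaned RSA_USER database user to the new server-level login.
# SERVER is a hypothetical host name -- adjust for your environment.
SERVER="newsqlserver"
DB="RSA"                # default SSO database name
DRY_RUN="${DRY_RUN:-1}" # set to 0 to actually run sqlcmd

run_sql() {
    # Compose the sqlcmd invocation for one query; print or execute it.
    cmd="sqlcmd -S $SERVER -d $DB -E -Q \"$1\""
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        eval "$cmd"
    fi
}

run_sql "EXEC sp_change_users_login 'report'"                           # list orphaned users
run_sql "EXEC sp_change_users_login 'update_one','RSA_USER','RSA_USER'" # map db user to login
run_sql "EXEC sp_change_users_login 'report'"                           # RSA_USER should be gone
```

Set DRY_RUN=0 to actually execute the queries against your new SQL Server.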
The next step is to create the RSA_DBA user, which is only a SQL Server login and not a database user. This user should become the owner of the SSO database.
- At the SQL Server level, create the SQL user RSA_DBA and be sure to use the same password you used before. (Well, you can always reset it later on.)
- After the RSA_DBA user has been created, open the properties of the user and set this user as the owner of the SSO database.
Your database is now ready for use. We just have to tell the SSO Server that it should look somewhere else from now on. To do this run the following on the SSO Server:
- Go to the utils folder of the SSO server (usually C:\Program Files\VMware\Infrastructure\SSOServer\utils). Run the command: ssocli configure-riat -a configure-db --database-host new_host_name
- You will be prompted for a password. This is the master password you also use to log in to SSO with the user admin@system-domain or root@system-domain.
- Check the jndi.properties file for the correct database reference. You can find it in C:\Program Files\VMware\Infrastructure\SSOServer\webapps\ims\WEB-INF\classes\
- Edit the file C:\Program Files\VMware\Infrastructure\SSOServer\webapps\lookupservice\WEB-INF\classes\config.properties and modify all values that need to be updated.
- To update the SSO DB RSA_USER password (for example when the RSA_USER password has expired or the database has been moved to another SQL instance), run the command: ssocli.cmd configure-riat -a configure-db --rsa-user-password new_db_password --rsa-user New_RSA_USER
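After running ssocli, a quick grep can confirm that no stale references to the old database host remain in jndi.properties or config.properties. This is a minimal sketch; the file path and host names in the example are just placeholders.

```shell
#!/bin/sh
# Check an SSO config file for lingering references to the old DB host.
check_db_ref() {
    # usage: check_db_ref <properties-file> <old-db-host>
    # Prints the offending lines and returns 1 if the old host is still referenced.
    if grep -i "$2" "$1"; then
        echo "WARNING: $1 still references $2"
        return 1
    fi
    echo "OK: no reference to $2 in $1"
    return 0
}

# Example (path from the SSO install, old_sql_host is hypothetical):
# check_db_ref "C:/Program Files/VMware/Infrastructure/SSOServer/webapps/ims/WEB-INF/classes/jndi.properties" old_sql_host
```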
Now cross your fingers and start the SSO service. Check if you can log on to the SSO server: https://<your SSO server>:9443/vsphere-client/ and log in with admin@system-domain.
See full post at: How to move VMware Single Sign On (SSO) database
When working on a design with two physical datacenters, I started to rethink why I would create multiple datacenters inside vCenter Server. I noticed that almost all example designs I looked at that cover two physical datacenters also have two datacenters in the vCenter Server design. But why?
When searching for an answer in the vSphere 5.1 documentation, the only reason mentioned to create a datacenter is: “You can create multiple datacenters to organize sets of environments.” But it also states that creating datacenters creates limits: “Inventory objects can interact within a datacenter, but interaction across datacenters is limited. For example, you can hot migrate virtual machines from one host to another host in the same datacenter, but not from a host in one datacenter to a host in a different datacenter.”
For organising objects in vCenter Server, I can also use folders. I can put a datacenter in a folder, I can put a cluster in a folder and I can set permissions at folder level. In other words, creating the logical design in vCenter Server to match the physical design, can also be done using folders, but without the limitations.
My basic statement is: “When I have two physical datacenters, I should not by default create two logical datacenters in vCenter Server”.
Yes, there are reasons to create two datacenters, but they are very limited. When I asked the question on Twitter I got quite a few responses, and only one really demanded two logical datacenters: a DMZ, where you want to make sure VMs can't travel between the two datacenters.
Most other cases in which people created two datacenters could also be matched by using folders.
I'm not saying that you shouldn't be using datacenters, but I just want to see if my statement is correct and that you shouldn't blindly create multiple datacenters and thereby limit functionality like shared-nothing Storage vMotion.
Would love to see your comments on that.
Update: Forbes Guthrie wrote a blogpost in response to my question. In his post he shows the do’s and don’ts for the vCenter Server datacenter: Why use the Datacenter object in vCenter?
See full post at: Design question: Why vCenter Server Datacenter?
Hope you enjoyed the previous series on VMware vCloud Director networking. In this post I will show you how to make the vCloud cells available through a load balancer using the vShield Edge. By publishing two or more vCloud cells behind a load balancer, it is easier to separate the management network from the user network, and you make the vCloud more redundant because you can shut down a cell without bringing down the whole vCloud. Do keep in mind that a user session will get disconnected when you shut down a cell, but the user can immediately reconnect and will be redirected.
To better understand the configuration we need a design. Below you’ll see the design in which the ESXi hosts and the vCenter Server are in the management network (172.10.0.x range). The user network is in the 192.168.0.x range and all traffic goes over the user network to the internet. The real situation this design is based upon is a little different, since the management network is routed to the internet over another network, but that would only complicate the example.
Preparing the vCloud cells
Before we start configuring the load balancer, some things have to be prepared on the vCloud cells.
- The first step is to make sure the certificates are the SAME on both cells. If you generated a self-signed certificate on each cell during install, you will have to change this.
- The second step is to make sure that the vCloud cells are properly syncing time.
I have been fighting an issue where the console proxy session (VMware Remote Console) would disconnect when two vCloud cells were behind the load balancer. If only one vCloud cell was running, everything worked fine. Starting the second vCloud cell behind the load balancer would work well for the HTTP sessions, but the VMRC (VMware Remote Console) session would always disconnect. It didn't matter which vCloud cell I shut down; as long as just one was running, the VMRC would work fine. This turned out to be caused by the cells running with a two-minute time difference.
Installing same certificate on both vCloud cells
During installation of the vCloud cells you had to generate a certificate. In my case I put my certificates in a file named /etc/certificates.ks. Assuming the certificate on the first vCloud cell is OK, we're now going to copy it to the second vCloud cell. Let's first check the certificates currently on both vCloud cells. Open a browser and go to both vCloud cells' websites. You'll see the security certificate icon telling you that the website uses a certificate. In my case I'm using Firefox, and when opening the certificate information, I see the following. Notice how the thumbprints are different.
- Log on through SSH on the first vCloud cell and make sure you are root (or, to play it really safe, use sudo).
- Copy the certificates from this vCloud cell to a location that you can access from the second vCloud cell. In my case I copy it to the data transfer directory, where I created a special directory that holds the installation bin files of vCloud Director, the responses file and the certificates. Copy:
cp /etc/certificates.ks /opt/vmware/vcloud-director/data/transfer/install/
- Make sure you have the responses file from the first installation in a location that you can access from the second vCloud cell. Actually, you probably already have this, since you already installed a second cell. Just to be sure:
cp /opt/vmware/vcloud-director/etc/responses.properties /opt/vmware/vcloud-director/data/transfer/install/
- Logon to the second vCloud cell and switch to root (leave the session on the first open because we’ll return here later on for the time sync)
- Stop the vCloud services on the second vCloud cell:
service vmware-vcd stop
- Copy the certificates file from the first vCloud cell to the location of the certificates file of the second vCloud cell. In my case that is /etc/certificates.ks. Copy:
cp /opt/vmware/vcloud-director/data/transfer/install/certificates.ks /etc/certificates.ks (yes to overwrite).
- To activate the certificate, you need to rerun the configuration of vCloud Director. By using the responses file, you don't have to answer all the database questions. In my case I need to run this to reconfigure:
/opt/vmware/vcloud-director/bin/configure -r /opt/vmware/vcloud-director/data/transfer/install/responses.properties
- After configuration is complete answer YES to have the configuration script start the vCloud service or start it yourself using:
service vmware-vcd start
- Monitor the startup of the vCloud cell and wait for “Complete. Server is ready”. Monitor:
tail -f /opt/vmware/vcloud-director/logs/cell.log
Let's check that the certificate on the second vCloud cell has indeed changed by browsing to it. Open the certificate information in your browser and see how the thumbprints now match. Phase 1 completed.
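Instead of eyeballing thumbprints in the browser, you can also compare the keystore files directly by checksum, for example after copying the second cell's /etc/certificates.ks next to the first cell's copy with scp. A small sketch; the example paths are assumptions.

```shell
#!/bin/sh
# Compare the certificate keystores of two cells by checksum.
# A matching checksum means the cells serve identical certificates.
same_keystore() {
    # usage: same_keystore <keystore-file-1> <keystore-file-2>
    sum1=$(sha256sum "$1" | cut -d' ' -f1)
    sum2=$(sha256sum "$2" | cut -d' ' -f1)
    [ "$sum1" = "$sum2" ]
}

# Example: after "scp cell2:/etc/certificates.ks /tmp/cell2-certificates.ks"
# same_keystore /etc/certificates.ks /tmp/cell2-certificates.ks && echo "keystores match"
```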
Time sync of the vCloud cells
It seems that a time drift of more than two minutes between the two vCloud cells will cause disconnects of the console proxy session. Since ntpd is NOT running in a standard RedHat install, we need to make sure it is running, that it starts at boot, and that the time is synced. ntpd depends on the /etc/ntp.conf file, which is already configured correctly by default if your vCloud cell is allowed to connect to the internet. In the next steps we're going to start ntp, make sure it runs at startup, and force a time sync:
service ntpd start
chkconfig --level 345 ntpd on
ntpdate -u clock.redhat.com
watch date
Run the above on both cells. The last command runs date and refreshes every 2 seconds so you can see whether the time is in sync; stop it by hitting Ctrl+C. Phase 2 completed. Now we can finally start configuring the load balancer.
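The two-minute tolerance can also be checked from the shell instead of by eye. A minimal sketch that compares two epoch timestamps, for example collected from a management host with ssh cell "date +%s"; host names in the example are placeholders.

```shell
#!/bin/sh
# Check whether two cells' clocks are within the ~2-minute tolerance that
# the console proxy seems to require. Arguments are epoch seconds.
drift_ok() {
    # usage: drift_ok <epoch1> <epoch2> [max-seconds]
    max="${3:-120}"
    diff=$(( $1 - $2 ))
    if [ "$diff" -lt 0 ]; then diff=$(( -diff )); fi
    [ "$diff" -le "$max" ]
}

# Example:
# t1=$(ssh cell1 date +%s); t2=$(ssh cell2 date +%s)
# drift_ok "$t1" "$t2" && echo "cells within 2 minutes of each other"
```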
vShield Edge Load Balancer
To create the vShield Edge load balancer, we need to go to the vShield Manager. Log on to the vShield Manager, go to the datacenters section on the left-hand side and select your datacenter, not the cluster. On the right-hand side click “Network Virtualization” and then “Edges”.
Before we start configuring the vShield Edge Load Balancer, make sure you have all the IPs noted down that you need. The vShield Edge Load Balancer will get two IP addresses in the user network, one for HTTP, one for ConsoleProxy. On the management network only one IP is needed which can connect to the vCloud cells.
Now press the plus sign to create a new Edge device. The wizard will pop up and ask for the name of the Edge device. For the name I entered “vCloud-LoadBalance”. The hostname is “vcloud.domain.com”; this is the outside FQDN. You can leave the tenant field empty, and I didn't enable HA. Press Next and you'll be asked to enter the credentials for this Edge device. If you want to use the same credentials you use for the vShield Manager, just leave it as it is. Click Next. In this screen you can select the appliance size (choose Compact) and specify where the appliance should be placed. Click the little plus to configure placement. Select the cluster/resource pool and the datastore. I chose the “System vDC” resource pool because I feel that it belongs there; if you have better suggestions, let me know. Click Save and then Next.
In this next step we will be configuring the interfaces of the Edge device. Click the plus sign to add a new interface.
The first interface will be the external interface. On this interface we will connect two IP addresses: the HTTP address (192.168.0.1) and the console proxy address (192.168.0.2). Click the plus sign to add a new interface. Name it “external” and select the port group you want it to connect to. Make sure the connectivity status is “Connected”, then click the plus sign to add IP addresses. Enter the IP addresses and the subnet mask; in our case 192.168.0.1 (primary) and 192.168.0.2, subnet mask 255.255.255.0. Save the IP settings and you will return to the first wizard. Leave MAC addresses empty, adjust the MTU if needed, and leave Enable Proxy ARP and Send ICMP Redirect empty. Click Add. Now add the second interface, which will be connected to the internal address 126.96.36.199, in the same way you did for the external interface.
Click Next in the wizard and you will get the “Default Gateway” screen. Enable “Configure default gateway”, select the external nic and enter the correct gateway address. In my case that would be 192.168.0.254. Change the MTU if needed. Again click Next.
Now the firewall & HA screen will ask whether to enable the firewall. Since this device could be connected to the big bad world, enable the firewall policy and set the default to “Deny”. Click Next to continue and Finish to close the wizard. When you look at your vCenter Server, you should see the deployment of an OVF file.
When done you should see the following in the vShield Manager.
The next step is to configure the real load balancing. First we need a “Pool” of servers that offer the same functionality. In our case the Pool of servers will hold all vCloud cells. Second we then need a virtual server. The virtual server is what is published to the outside world.
Click the “vCloud-LoadBalance” Edge we've just created and click “Actions” -> “Manage”. Here you'll see an overview of the configured settings. Click “Load Balancer” and make sure “Pools” is selected. Click the plus sign to start the “Add Pool” wizard. The name will be “vCloud_HTTP” (no spaces allowed). Next we configure the services: enable HTTP and HTTPS and choose the balancing method “Least connections”. Leave the ports unchanged at 80 and 443. Click Next. Now the Health Check screen appears. Only select HTTP (although I think HTTPS should work too) and for the URI for HTTP service use “/cloud/server_status”. Be sure to start with a forward slash. Click Next.
Now we need to enter each member of the pool. Be aware that we are only defining the HTTP part of the vCloud cell, so we add only the HTTP addresses of the vCloud cells for this pool. In our case add two IP addresses: 188.8.131.52 and 184.108.40.206. Add them both with a weight of 1. Leave the defaults of HTTP and HTTPS.
Next we'll add the console proxy servers to a pool. Run the same wizard again and name it “vCloud_Console”. In the Services screen you have to pay a little attention: you only need TCP over port 443, not HTTPS! This is because the console proxy traffic travels over port 443, but it is not HTTPS traffic. In other words, in the Services screen select TCP, change the port to 443 and select the “Least Connections” balancing method. In the Health Check screen select TCP and monitor port 443; leave the URI for HTTP service empty. In the next screen we again need to add some IP addresses. Enter the IP addresses of the console proxy of the vCloud cells (220.127.116.11 / 18.104.22.168). You now return to the vShield Manager screen. A little above the list of pools you see the “Enable” button; click this to enable the pool and push the “Publish Changes” button.
We now have a pool of servers defined that can accept the traffic from the load balancer. Now we need to define the services that the load balancer will allow from the outside.
Click “Virtual Servers” and then the plus sign to start the “Add Virtual Server” wizard. You are now prompted to enter a name for this virtual server; use “vSrv_vCloud_HTTP”. Now enter the IP address of the outside HTTP interface, 192.168.0.1. Select the pool “vCloud_HTTP” and enable HTTP and HTTPS. Leave the default persistence methods: Cookie for HTTP and SSL_SESSION_ID for HTTPS. Click Add.
Again click the plus sign to add a new virtual server. Name this server “vSrv_vCloud_Console”. Now enter the IP address of the outside console proxy interface, 192.168.0.2. Select the pool “vCloud_Console”, enable TCP and change the port to 443. Click Add. Click Publish Changes.
That’s it. Your vCloud Edge Load Balancer is ready.
More posts on vCloud Director Networking can be found here:
- VMware vCloud 5.1 Networking for dummies part 1 – External access for a vCloud VM
- VMware vCloud 5.1 Networking for dummies part 2 - Connecting to vApp networks and giving external access
- VMware vCloud 5.1 Networking for dummies part 3 – Publish a vCloud deployed VM to the outside world
See full post at: VMware vCloud 5.1 Networking part 4 Cell Load balancing
Today Mike Laverick drew my attention to his blog post about BareMetalCloud. He writes about the possibility to run your virtual ESXi host on their cloud offering. And best of all, by using a promo code (something like MikeLaverickIsAnAwesomeDudeAndThisWillGiveMeCloudForFree, or maybe a shorter version) you can get free access.
Read all about it on Mike’s blogpost: Bare Metal Cloud for your Auto-Lab.
See full post at: Bare Metal Cloud for your Auto-Lab by Mike
I don't often publish news bulletins, and especially not about products that haven't been released yet, but today I'm making an exception. In my mailbox I found Veeam's announcement of Veeam Backup & Replication v7. The announcement is about the ability to back up vCloud Director VMs including the vApp, metadata and attributes.
This is great news and something I've been wanting since my first vCloud Director project. When designing a vCloud Director environment for my customer, I was surprised that there was no way to back up the intelligence of the vCloud, which is the vApp including all the NAT & firewall rules and networking that have been created. The only way to back up vCloud VMs was to back up the VMs through vCenter, and if you needed to restore them, you would restore the VM in a different location and then bring it into the vCloud through an import.
Veeam also announced the possibility to restore your vApps and VMs. Because Veeam also makes a backup of the metadata, the integration goes much further than just a simple backup of a group of VMs. Nice!
So MY assumption is that it will now be possible to restore vApps including NAT & firewall rules and networking, and maybe even restore into a different vCloud. Unfortunately I can't get any confirmation on this, but knowing Veeam from the past years, I don't think they will deliver only half a solution.
Of course…. the product hasn’t been released yet, but I do keep my fingers crossed!!! Follow this link to see all new features published in the next few months: Go.veeam.com/v7
Below the content of the e-mail:
Veeam is extending its support for VMware vCloud Director (vCD).
NEW integration with vCD will extend Veeam support for vCD, which currently includes the ability to backup and restore virtual machines (VMs) managed by vCD. Using the vCD API, Veeam Backup & Replication v7 will also:
- Display the vCD infrastructure directly in Veeam Backup & Replication
- Backup all vApp metadata and attributes
- Restore vApps and VMs directly to vCD
- Support restore of fast-provisioned VMs—and more!
Sound exciting? Learn more about upcoming updates here!
See full post at: Veeam first to offer real vCloud Backup?
It’s that time of year again, Eric Siebert launched the new vLaunchpad top VMware/virtualization blogs survey and is asking you to vote for the best blogs on virtualization in 2013. Go to: http://www.surveygizmo.com/s3/1165270/Top-vBlog-2013 and select your top 10 favourite blogs.
This year my blogging was mostly about getting to know vSphere 5.1 and my first steps into the world of VMware vCloud Director. In my vCloud Director posts I've written a lot about how vCloud Director networking isn't always as straightforward as you think. At first it doesn't look easy, but once you “get it”, you'll learn to love it. These posts on VMware vCloud networking for dummies got a lot of attention this year:
- VMware vCloud 5.1 Networking for dummies part 1 - External access for a vCloud VM
- VMware vCloud 5.1 Networking for dummies part 2 - Connecting to vApp networks and giving external access
- VMware vCloud 5.1 Networking for dummies part 3 - Publish a vCloud deployed VM to the outside world
I've also stumbled a few times over bigger and smaller issues with vCloud. Have a look at these posts:
- VMware vCloud error: Batch update returned unexpected row - About very vague error messages
- VMware vCloud 5.1 network troubleshooting - Tips on checking your firewall and NAT rules
- VMware vCloud Transfer spooling area is not writable - Strange issue I encountered when installing a second vCloud Director cell
- Change IP address of VMware vCloud Cell - When you need to change the IP of your vCloud Director cell.
Now please vote at: http://www.surveygizmo.com/s3/1165270/Top-vBlog-2013
Thank you for reading my blog.
See full post at: Voting for the top virtualisation blogs of 2013
Oh, those little things in life that sometimes cost you a whole day before you figure out that you were doing it all wrong. This was one of those days, where I got alerted by some colleagues about vCloud Director reporting “Disk space red threshold crossed.”
When looking at the “Datastores” in the “Manage & Monitor” section, I noticed that there was still quite some free space, and I had only added more space the day before. When checking the properties of each datastore, I noticed the threshold section and remembered changing the thresholds on the newly added storage; just to be sure, I had also set them on the existing datastores, because I had left them at their defaults before.
*light bulb moment* When looking at those thresholds, I noticed that you can read them in two ways.
The way I read them was that the red threshold would give an alarm when xx GB was in use, and the yellow threshold when yy GB was in use. So I set them very close to the total capacity. But it turns out to be quite the other way around: these thresholds define how much free space should at least be left. In other words, in my case I should have set them as low as possible. When entering the values there is no check that tells you that red should always be smaller than yellow; in fact, you can switch these values around as you like.
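The correct reading of the thresholds can be captured in a tiny helper: both values express the minimum free space that must remain, with red being the lower, more urgent of the two. The GB values in the example are made up for illustration.

```shell
#!/bin/sh
# vCloud datastore thresholds express the minimum FREE space that must remain,
# not the maximum used space. threshold_state prints the alarm colour for a
# given amount of free space (all values in GB).
threshold_state() {
    # usage: threshold_state <free-gb> <yellow-gb> <red-gb>
    free="$1"; yellow="$2"; red="$3"
    if [ "$free" -le "$red" ]; then
        echo red        # less free space left than the red threshold
    elif [ "$free" -le "$yellow" ]; then
        echo yellow
    else
        echo green
    fi
}

# Example: 1 TB datastore, yellow when under 100 GB free, red when under 25 GB free:
# threshold_state 80 100 25   # yellow: 80 GB free is below the 100 GB yellow line
# threshold_state 20 100 25   # red: 20 GB free is below the 25 GB red line
```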
Well, I'll never forget again; I hope you won't either.
See full post at: vCloud disk space thresholds crossed
(Shamelessly copied from Yellow-Bricks.com)
Would you like to hear more about Software Defined Datacenters from experts like Frank Denneman, Mike Laverick, Cormac Hogan, Kamau Wanguhu and many others? VMware and IBM are organizing an awesome event in the Benelux. It is a full-day event, and it is free for everyone. If you just want to sign up, go here. If you need to be convinced, keep reading, as there are some awesome sessions scheduled.
09.00 - 09.30 Registration
09.30 - 09.45 Welcome
09.45 - 10.30 Keynote VMware: Software-Defined Data Center
10.30 - 11.15 Keynote IBM: Converged Systems: beyond NextGen DC’s
11.15 - 11.30 Break and split into parallel sessions
11.30 - 12.15 Parallel track 1 or meet the expert
12.15 - 13.00 Lunch
13.00 - 13.45 Parallel track 2 or meet the expert
14.00 - 14.45 Parallel track 3 or meet the expert
15.00 - 15.45 Parallel track 4 or meet the expert
16.00 - 16.45 Parallel track 5 or meet the expert
16.45 - 17.30 Networking drink
The awesome part is that at this event you will also have the ability to sit down with one of the experts for a 1:1 discussion and get your questions answered. Below is the list of people you can sit down with; make sure to register for that!
Frank Denneman – Resource Management Expert
Cormac Hogan – Storage Expert
Kamau Wanguhu – Software Defined Networking Expert
Mike Laverick – Cloud Infrastructure Expert
Ton Hermes – End User Computing Expert
Tikiri Wanduragala – IBM PureSystems Expert
Dennis Lauwers – Converged Systems Expert
Geordy Korte – Software Defined Networking Expert
Andreas Groth – End User Computing Expert
So if you live in the Netherlands, Belgium or Luxembourg, make sure to sign up. As mentioned, it is a free event. And with people like Cormac Hogan, Frank Denneman, Mike Laverick and Kamau Wanguhu, you know it is going to get deeply technical.
- 5th March – Amsterdam
- 7th March – Brussels
- 8th March – Luxemburg
See full post at: Software Defined Datacenter Roadshow – Benelux – Free Event!
I'm still getting used to the important role vCenter SSO (Single Sign-On) plays in vSphere 5.1. In my home lab I was switching domain controllers from W2k8 to Windows 2012. I transferred the FSMO roles, integrated DNS, changed the DNS IP addresses on all servers, and all seemed fine. My w2k8-dom01 server was demoted and removed. A few days later, when trying to make a vCenter connection, I couldn't log on anymore. As a good Windows admin I of course first rebooted the vCenter VM, but (as all real admins know) that seldom fixes the issue. Diving into the vCenter log files at “C:\ProgramData\VMware\VMware VirtualCenter\Logs”, I found the following error:
2013-01-27T10:46:20.302+01:00 [04304 info '[SSO][SsoAdminFacadeImpl]'] [FindPersonUser]
2013-01-27T10:46:20.427+01:00 [04304 warning 'Default'] Warning, existence of user "VANZANTEN\Administrator" unknown, permission may not be effective until it is resolved.
2013-01-27T10:46:20.427+01:00 [04304 error 'Default'] The user account "VANZANTEN\Administrator" could not be successfully resolved. Check network connectivity to domain controllers and domain membership. Users may not be able to log in until connectivity is restored.
2013-01-27T10:46:20.427+01:00 [04304 error 'Default'] [ACL] Adding unresolved permission for user "VANZANTEN\Administrator"
2013-01-27T10:46:20.427+01:00 [04304 info '[SSO]'] [UserDirectorySso] GetUserInfo(VANZANTEN\vSphereAdminGroup, true)
2013-01-27T10:46:20.427+01:00 [04304 info '[SSO][SsoAdminFacadeImpl]'] [FindGroup]
2013-01-27T10:46:20.536+01:00 [04304 warning 'Default'] Warning, existence of group "VANZANTEN\vSphereAdminGroup" unknown, permission may not be effective until it is resolved.
2013-01-27T10:46:20.536+01:00 [04304 error 'Default'] The group account "VANZANTEN\vSphereAdminGroup" could not be successfully resolved. Check network connectivity to domain controllers and domain membership. Users may not be able to log in until connectivity is restored.
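A quick way to pull every unresolved principal out of a vCenter log file is to grep for the telltale "could not be successfully resolved" phrase. A minimal sketch:

```shell
#!/bin/sh
# Scan a vCenter log file for principals that SSO failed to resolve,
# a telltale sign that vCenter is still pointing at a dead domain controller.
unresolved_principals() {
    # usage: unresolved_principals <logfile>
    # Prints each quoted DOMAIN\account that failed to resolve, deduplicated.
    grep -o '"[^"]*" could not be successfully resolved' "$1" \
        | sed 's/ could not be successfully resolved//' \
        | sort -u
}

# Example (path from the post):
# unresolved_principals "C:/ProgramData/VMware/VMware VirtualCenter/Logs/vpxd.log"
```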
That is when the light bulb in my head went on (hey, it was Sunday morning and I hadn't had my coffee yet). I figured the vCenter SSO (Single Sign-On) service was still trying to talk to the old domain controller for authentication, so I opened the administration page of SSO: https://<your SSO server>:9443/vsphere-client/ and logged in with admin@system-domain (or root@system-domain for the vCenter Server Appliance). You did write down that password, didn't you? Go to “Sign-On and Discovery” -> Configuration and look at the identity sources. There it is: the reference to the old domain controller.
Change vCenter SSO domain controller
Changing it is easy: click “Edit identity source” and change the server name to the new domain controller. Test your settings, and you would expect to be done then.
Unfortunately I received the following error: “The edit identity source operation failed for the entity with the following error message. Cannot connect to one or more of the provided external server URLs: w2k8-dom01.vanzanten.local:389”.
Strangely enough it is still pointing at the old domain controller. The easiest way to solve this was to just delete the entry and create a new one with the new domain controller in. Just to be sure I restarted the vCenter SSO (Single Sign-On) service first and then vCenter Server would start without any issues.
In the comments on this post, Jad (JM) wrote that you can also use the domain name when entering the primary server URL. In my case I would NOT enter “ldap://w12-dc01.vanzanten.local” as my primary URL, but just “ldap://vanzanten.local”. vCenter SSO will then query the domain for the special domain controller DNS record and use it to find the domain controller to talk to. By the way, this doesn't work if you have already entered two URLs and want to change them and make the second one empty. I think it is purely a GUI issue: when I empty the second URL, save it and then edit to check again, it keeps the old entry. When you delete the whole entry and create a new one that has an empty secondary URL, you're fine. Thanks Jad!
See full post at: vCenter SSO changes when demoting domain controller
In my last VMware vCloud Director project I ran into an issue that I want to share with you. Initially the design was to create two OrgVDCs with different allocation models, although the resources would be almost the same because it was only a fairly small environment. I created one Edge Gateway and two OrgVDCs connected to this Edge Gateway, with a shared Org network. The first OrgVDC was meant for real performance testing and would get FC storage, while the second OrgVDC would host most VMs and was only for development and testing.
Because of some changes in the final design, I didn't need the first OrgVDC anymore and tried to delete it, but that was impossible. When trying to delete it, I got the well-known error: “The vCloud Org cannot be deleted because it is in use”. I checked all vApps, vApp templates, catalogs etc., but there was nothing still connected to this OrgVDC except, of course, the shared organizational network and the shared Edge.
This is when I first noticed that the second OrgVDC didn’t show that the Edge was connected to it. See the screenshots.
The only way to delete the OrgVDC is to first delete the OrgNetwork the OrgVDC is connected to. It seems that somewhere inside vCloud the direct connection to the first OrgVDC with the OrgNetwork and the Edge remains the primary connection that can never be deleted without deleting the OrgNetwork. Creating a new OrgVDC that uses the shared OrgNet, doesn’t “transfer” the OrgNet to the second OrgVDC.
Seems logical, but I expected that you could somehow transfer this shared OrgNetwork to another OrgVDC. This means that any extra OrgVDCs you build on a shared OrgNet always stay dependent on the first OrgVDC that “created” the shared OrgNet. Good thing to remember in your next design.
See full post at: vCloud can’t delete Organizational VDC
When creating NAT and firewall rules in a vCloud environment, not everything works out the way you planned right away. This post will show you how to use Splunk to collect syslog data from the vCloud environment and turn it into an easy-to-use tool for network troubleshooting.
Splunk comes in a number of flavours: Windows, OSX, Linux. In my environment I have a RedHat-based NFS server that is doing almost nothing and will now be promoted to be my Splunk syslog server. Start by downloading the free Splunk version from www.splunk.com. You’ll need to create an account and then download the Linux RPM (because the server runs RedHat). My RedHat server has no GUI, so I first tried the download link in a wget command, but that didn’t work since there are some scripts behind the download link. I was very pleasantly surprised, though, that the Splunk website offers a special wget link. Excellent! After downloading the RPM you can simply install it with “rpm -i <downloaded rpm>”. Open the web page of your Splunk server at port 8000 and walk through the brief configuration wizard.
To add syslog as a service (called an App in Splunk), go to: App -> Home (upper right corner). Click “Add Data”, choose “syslog” from the list, then select “consume syslog over UDP”. In the next screen enter UDP port 514, leave “Source name override” blank, select source type “syslog” from the list, and click SAVE. Your syslog server is now ready to receive data.
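For reference, the same UDP input can also be defined in an inputs.conf file instead of through the GUI. This is a minimal sketch, assuming a default Splunk install under /opt/splunk and the standard search app; the path and stanza options may differ in your environment:

```ini
# /opt/splunk/etc/apps/search/local/inputs.conf  (path assumed)
# Listen for syslog messages on UDP port 514
[udp://514]
sourcetype = syslog
connection_host = ip
```

After editing the file, restart Splunk so the new input is picked up.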
Enabling syslogging in vCloud Director
In the vCloud interface make sure the loggings are sent to the syslog server. To set the IP address of the syslog server go to the system tab in vCloud Director, select Administration, General and in the networking section you can now enter the IP address of the syslog server. Press apply to save.
When building firewall rules you can choose to log traffic per rule. This is very handy to exclude traffic from rules that already work fine. When creating a new rule, I would advise enabling logging for that rule and (for a short time) also for the bottom rule, which is usually the rule that denies everything. By enabling logging on this last rule you can see that packets that don’t match any of the previous rules are dropped, which often helps you understand why they are dropped.
After syslog is enabled in vCloud Director and logging is enabled at the firewall level, it is time to show how Splunk can help you troubleshoot.
Find the syslog data
The first time it took me quite some time to find the correct view/report, and I still need a few tries before I get the right one, but luckily you can save a search once you’ve found the correct one and then access it quickly. Open the Splunk web interface again and select “Manager” in the upper right corner. You’ll see a page with lots of options; choose “Searches and reports”. At the top of this page, select “All” for the App Context. In the list below, find the item “Messages by minute last 3 hours” and select “Run” at the end of the item. A new view will open showing a lot of matching events in a graph. At the top of the page select “Dashboards & Views” and “Summary”. And finally, there is the “UDP:514” option we need; click it. A new page will open and show you the latest events coming from your Edge firewall. To make this report easier to access next time, press the “SAVE” button and give it a name. Now you can easily reach it from “Manager” – “Searches and Reports”.
After setting up Splunk, configuring vCloud Director and finding where the reports are hidden, we can finally do some troubleshooting. I have created a number of firewall rules that are working nicely, and I’m about to add a new rule that I want to test: RDP to desktop1 with an external source address and a destination of 192.168.0.45. Note how I enabled logging for this rule. Also notice the last rule in the background, on which I also enabled logging.
After the rule is created, go to the Splunk interface, open the syslog search page and generate some traffic that should be handled by the newly created rule. For this example I created a rule that I know isn’t going to work, just for demonstration purposes. Hit refresh and you can now see on the search page that traffic has been captured, showing that the packets were dropped. Notice the text “DROP_131073”. This is a clear clue as to WHY the packets were dropped.
To understand the 131073, you will have to open the vShield Manager in your vCloud environment. After logging on to the vShield Manager web interface, select the “Edges” view, then “Edge Gateways”; in the middle pane select your Edge device and click “Actions” – “Manage” – “Firewall”. In this view you can see the firewall rules that are generated automatically by the Edge device as well as your manually created rules. Note that the “Rule Tag” number equals the number you see in the vCloud interface. In this case, DROP_131073 tells us that the packets were dropped by the default any/any rule, which denies all traffic.
The DROP message is not the only important part. To understand why your traffic is blocked and to build a working rule, you’ll also need to check:
| Field | Meaning |
| --- | --- |
| SRC | Source IP address |
| DST | Destination IP address |
| PROTO | Protocol (TCP, UDP, IGMP) |
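As an illustration (not part of the original troubleshooting workflow), these fields are easy to pull out of a log line programmatically. A quick Python sketch; the sample line is hypothetical and only approximates the Edge firewall log format:

```python
import re

def parse_edge_log(line):
    """Extract the verdict, rule tag, and SRC/DST/PROTO fields from a firewall syslog line."""
    m = re.search(r"(ACCEPT|DROP)_(\d+)", line)
    verdict, rule_tag = (m.group(1), int(m.group(2))) if m else (None, None)
    # Collect the key=value fields we care about
    fields = dict(re.findall(r"\b(SRC|DST|PROTO)=(\S+)", line))
    return {"verdict": verdict, "rule_tag": rule_tag, **fields}

# Hypothetical sample line; real Edge logs may carry extra fields
sample = ("Mar 10 14:02:11 edge-gw firewall: DROP_131073 IN=vNic_0 OUT=vNic_1 "
          "SRC=203.0.113.10 DST=192.168.0.45 PROTO=TCP SPT=51515 DPT=3389")
print(parse_edge_log(sample))
```

Running this on a dropped RDP packet like the one above immediately shows which rule tag matched and which source/destination pair was involved.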
If you look at the syslog and now understand why your rule failed, you can easily edit the rule and check whether it works. What you’re hoping for is of course an “ACCEPT” message in the logs. In my case it shows “ACCEPT_6” because custom rule number 6 was the match that accepted these packets:
When you’re done troubleshooting, don’t forget to disable logging on the rules where it is no longer needed. Have fun !!!
See full post at: VMware vCloud 5.1 network troubleshooting
In November 2009 I reviewed StorMagic’s SvSAN and was very pleased with the product. Now, three years later, I’m reviewing SvSAN again and am anxious to see whether it has improved in those three years.
For those not familiar with StorMagic SvSAN, I will briefly explain what it does. With SvSAN you can take the local disks of one or two ESXi hosts and present that local storage as shared storage to your ESXi environment. By mirroring the storage between two ESXi hosts, SvSAN is protected against a single host failure. The following diagram shows how a SVA (Storage Virtual Appliance) is installed on each ESXi host, and how the local disks are connected to the SVA and then mirrored and presented to the whole vSphere cluster over iSCSI. Should the ESXi host holding the SVA with the primary mirror fail, the second SVA will present the datastore to the ESXi hosts in the cluster.
What is a neutral storage host?
| Counter | Test 1 | Test 2 |
| --- | --- | --- |
| Total IOs per second | 143.08 | 119.70 |
| Total MBs per second | 1.17 | 0.98 |
| Average IO response time | 417.6953 ms | 508.2921 ms |
| Maximum IO response time | 883.2917 ms | 39038.1664 ms |
Comparison with VMware VSA
See full post at: [Review] Stormagic SvSAN 5