Changing Things

VMware has updated its certification names and logos, again. I guess nothing lives forever, and nothing stays the same.

What was simply the VCP until September of last year, and was originally going to become the VCP-DV, is now the VCP-DCV. The VCP-DT is still the VCP-DT, but the master-level certification, the VCDX, has become the VCDX-DCV. Logos have also been updated. “Data Center” is now two words instead of “Datacenter” because apparently that’s considered the industry standard (I didn’t realize there was such a thing.)

Good thing I was waiting to order new business cards until after I could add a VCAP certification.

Oracle VM

“Oracle VM 3 improved a lot, they are not close to Microsoft or VMware, but it is pretty good if you are not trying to do dramatic things like moving virtual machines around.”

That’s Gartner vice president and distinguished analyst Thomas Bittman, talking about how Oracle VM is poised to be the real competitor to VMware in the future. Not Hyper-V. Not Xen. I’m not one to really defend Microsoft or Citrix, but… have you ever actually seen Oracle VM running on a production system?

Using CDP

The other day I was tasked with adding a new VLAN to a customer’s vSphere cluster. The existing network configuration had just the default VM Network setup, with no trunks or tagged port groups set up. In this case the customer was in the process of adding a few virtual desktops (Citrix, blah) and wanted a separate DHCP scope for those machines.

In order to set up this VLAN, I would need to put each host in maintenance mode, reconfigure the physical switch ports providing connectivity to that host from access ports to trunk ports, add VLAN tags to the existing VM Network and Service Console port groups, and then provide connectivity to the new VLAN by adding a new port group tagged with that VLAN number.

(Note: if you need to trunk the connection the Service Console/Management Network uses, change the VLAN tag before you adjust the physical switch port settings. You’ll lose connectivity to the host temporarily until you change the switch port settings.)
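
To make that order of operations concrete, here’s a rough sketch of both sides. It assumes a standard vSwitch named vSwitch0, VLAN 10 for management, VLAN 20 for the existing VM Network, and VLAN 100 for the new desktop network; the interface name, VLAN numbers, and port group names are all placeholders, so substitute your own. On the host (from the local console or an SSH session), tag the existing port groups and add the new one:

esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -v 20 -p "VM Network" vSwitch0
esxcfg-vswitch -A "VDI Network" vSwitch0
esxcfg-vswitch -v 100 -p "VDI Network" vSwitch0

Then on the Cisco switch, convert the host-facing access port to a trunk carrying those VLANs (the encapsulation line is only needed on models that still support ISL):

interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,20,100
 switchport mode trunk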

I set about trying to determine where each of the physical NIC ports on the hosts was plugged into the customer’s core switch. There are a few options for doing this:

  1. Hope that the customer has proper documentation of their environment, from the initial setup and any changes that were made, indicating the switch ports. In this case, the customer did not.
  2. Hope that the switch has comments that indicate what is physically connected to it. In this case, there were no comments.
  3. Physically trace out each connection back to the switches. In this case, we were in the middle of a major winter storm in Kansas City, so I was working remotely for the customer.
  4. Use networking commands on the switch to attempt to identify what is plugged into each port.

You might expect that the MAC addresses of the vSwitch’s individual NICs would be listed in the results of a “show mac address-table dynamic” on the switch — except they aren’t. You can see the vNIC this way, but not the pNICs.
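
As an aside, if all you need is to find which switch port a particular virtual machine’s traffic lands on, you can look up its vNIC MAC directly on the switch. VMware-assigned vNIC MACs typically start with 00:50:56; the address below is just a placeholder (and on older IOS versions the command is show mac-address-table instead):

show mac address-table address 0050.56ab.cdef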

If you open the vCenter GUI and go to the Configuration > Networking section, next to each of the physical adapters configured in a vSwitch you’ll see a blue box. Click on it, and if you’re using Cisco switches (and why wouldn’t you be?) you’ll see all the data about the switch, the port, and the configuration of that network port.
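
If you’d rather not click through the GUI (or you want to script this across a pile of hosts), the same CDP neighbor information can be pulled from the ESXi shell. If memory serves, the command looks like this, where vmnic0 is whichever uplink you’re interested in:

vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic0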

You’ll also get these results if you’re running on a UCS chassis against a Nexus switch, but in a slightly different format. With the UCS and other blade chassis type systems you can actually find other ways to determine the switch port you’re connected to, but that’s a topic for another blog post (and once I get more experience on the UCS.)

What if none of this works?

If all of this doesn’t work for you, make sure you’re using Cisco switches. CDP is a proprietary protocol, so your Dell, HP, Juniper, 3Com, Netgear, Trendnet, or SuperCheapNet switches are probably not going to give you any of this data.

However, as of ESXi 5.0, VMware does support the Link Layer Discovery Protocol (LLDP), which is the IEEE-standardized (802.1AB) equivalent of CDP. The problem is that it’s only supported on Distributed vSwitches, which require Enterprise Plus licensing. A lot of the environments I work in either don’t have that licensing and/or haven’t adopted Distributed vSwitches. For reasons unknown, VMware does not support LLDP on regular vSwitches. (For more information on how to use LLDP, check out Ivo Beerens’ post.)

If you’ve got Cisco equipment but it’s still not working, make sure CDP is enabled on your hosts. As of ESX 3.5 it should be enabled by default, but it may have been disabled. For more information on how to troubleshoot this, check out VMware KB 1003885.
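
From the host’s shell, checking and changing the CDP mode on a standard vSwitch looks roughly like this (vSwitch0 being a placeholder for whichever vSwitch you’re working with). The default mode is listen; setting it to both also makes the host advertise itself, so it will show up in a show cdp neighbors on the physical switch:

esxcfg-vswitch -b vSwitch0
esxcfg-vswitch -B both vSwitch0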

Host Memory

Memory utilization is important in VMware; most of the time it’s the most limiting factor in the virtual-to-physical consolidation ratio. Often I’m tasked with assessing how upgradable a physical host’s current memory configuration is. It’s easy to see from the vSphere Client how much memory is installed in a host, but when you’re upgrading you need to know exactly how that memory is laid out on your motherboard so you can get the most bang for your buck.

There are basically three ways to do this:

  1. Open up the case and see. This is going to require downtime (because you wouldn’t open the case while you’re running production systems, right?). That’s all well and good, because you can just vMotion your virtual machines to another host and shut it down. The problem is, if you’re having memory utilization issues, chances are you’re overcommitting your hosts, so you’re going to need to shut down virtual machines to do this.
  2. Use an out-of-band management utility like a DRAC or iLO. Great if your server has one configured, but a lot of people either don’t realize they have these or don’t bother to set them up until someone points out how useful they are. Configuring them usually requires a reboot of the host, which means downtime, and I just explained why that’s probably not great in this situation.
  3. SSH into your hosts and run a couple of commands. This is what I’m going to explain how to do.

Everything I’m going to show you is documented from the VMware KB. If you’d rather refer to those go here for ESXi 4.x/5.x or go here for ESX 3.x/4.x. Make sure you know what version you’re checking, so you can use the right commands.

ESXi 4.x/5.x

The first thing you’ll need to do is enable SSH on your hosts. Best practice is to leave SSH off and only turn it on when you need it. You can enable it by opening up the vSphere Client, selecting the Host and Clusters view, and then selecting the host you want to enable SSH on in the left hand window. Select the Configuration tab, and then Security Profile from the options on the left. Under services you’ll see SSH. Click on Properties, select SSH from the list of services, and then press Options. In the window, press Start to enable the SSH service. Leave the settings that ask you about starting this service automatically set to manual. For security, you don’t want SSH turned on all the time. You’ll also get warnings from each host it’s enabled on if you leave it turned on. When we’re done you’ll want to come back here and disable SSH on your host. (Note: If you’ve previously closed port 22 on your ESXi firewall, you’ll need to open that back up. By default the port is open but the service is not running.)

At this point you need to SSH into your host as root. Keep in mind unless you joined your ESXi box to your Active Directory domain, you probably can’t just use your normal network account to get into the host this way. It’s going to be root or another local account you’ve created.

If you’re on Windows, I suggest using PuTTY. If you’re on a Mac or Linux box, there’s no need to download anything extra as it’s all built in. Just open up Terminal and away you go.

(I’m normally a Mac user, but I access my work demo lab through a Windows 7 virtual machine running on VMware View, so I’m doing all of this from PuTTY.)

What you’ll want to do is navigate to a location you can easily access through the vSphere datastore browser. The reason is that we’re going to run a command and output the results to a text file so we can easily get at the information we want. I suggest using a local disk on the host, an ISO/template datastore, or maybe a shared datastore that you use for things like dumping host logs. The output file is only going to be a few MB, so the location isn’t really critical as long as it’s easily accessible. When we’re done, we’ll delete it from the host.

cd /vmfs/volumes/YOUR_DATASTORE

You’ll notice that this command changes your current directory to something like /vmfs/volumes/4ea066d9-d9f09a90-c026-0025b5aa002c. This is normal. Do not be alarmed.

At this point we’re going to run the command that will query the system for all the physical hardware, and export it to a text file.

cim-diagnostic.sh > YOUR_SERVER_NAME.txt

You can name the file after the > whatever you want. Most of the time I make it unique, because I’m going to be running this command on multiple systems and want to easily identify which host each file came from.

At this point you can go back to the vSphere Client and open up the Datastore Browser on the datastore you ran the command on. You can get to this easily by clicking on the host in Host and Clusters and then under the Summary page, right clicking on the datastore listing and then Browse Datastore.

Use the Datastore Browser to download the file to your desktop. (Right click file > Download)

Now the problem with this file is that Notepad doesn’t know how to handle the Unix-style line endings ESXi uses, so when you open it up everything runs together into one giant wall of text.

I would suggest opening the file in something like Notepad++ which is really far more useful and can read the log file correctly. It’s also helpful for other VMware logs that don’t save whitespace in a way Notepad likes. (Note, Mac users can open the file in TextEdit just fine.)

Run a search within the document and find the section that starts with Dumping instances of CIM_PhysicalMemory. You’ll see the first entry as Tag = 32.0, and if you scroll down through the section it’ll continue until it runs out of memory slots. For instance, the server I ran my export on is a Cisco UCS B250 with 46 memory slots, so the last entry is 32.45.

The key bits of information here are things like MaxMemorySpeed and Capacity if you’re trying to figure out what to buy. Capacity is listed in bytes, so 4294967296 is a 4GB DIMM. There’s also lots of other good information in the export, such as the position of the DIMM on the motherboard, the node and channel the memory is used by, and whether the slot is even in use, as well as things like serial numbers and part numbers.
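
If you’d rather not scroll through the whole dump, and you’re on a Mac or Linux box, a quick grep over the downloaded file will pull out the interesting lines. The field names below are the ones from my export, so adjust the pattern if your hardware reports them a little differently:

grep -E "Tag =|Capacity =|MaxMemorySpeed =" YOUR_SERVER_NAME.txt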

At this point you can delete the file from the host, if you choose, either by utilizing the Datastore Browser or at the SSH session you may still have open.

rm YOUR_SERVER_NAME.txt

Now you can close your SSH session, and turn SSH back off on your host in the same section where you previously turned it on.

ESX 3.x/4.x

The method for obtaining this information on ESX is similar to the ESXi method explained above; the only real differences are that the command is different and the output file isn’t as detailed (although it’s much easier to read.)

The first thing we’re going to need to do is enable SSH on the host. On ESX 3.x/4.x, SSH logins are disabled by default for the root account; the SSH service does not allow root logins. Non-root users are able to log in with SSH, and you can then elevate to the root user. As an alternative to enabling SSH on your host, you can physically log in to the console of the host and run the commands there instead.

From VMware KB 8375637:

If you do not have any other users on the ESX host, you can create a new user by connecting directly to the ESX host with VMware Infrastructure (VI) or vSphere Client. Go to the Users & Groups tab, right-click on the Users list and select Add to open the Add New User dialog. Ensure that the Grant shell access to this user option is selected. These options are only available when connecting to the ESX host directly. They are not available if connecting to vCenter Server.

If you’re on Windows, I suggest using PuTTY. If you’re on a Mac or Linux box, there’s no need to download anything extra as it’s all built in. Just open up Terminal and away you go.

(I’m normally a Mac user, but I access my work demo lab through a Windows 7 virtual machine running on VMware View, so I’m doing all of this from PuTTY.)

After logging in to your host with your regular user account, we need to elevate to the root user:

su -

You’ll be prompted for your root password. Enter it now.

What you’ll want to do is navigate to a location you can easily access through the vSphere datastore browser. The reason is that we’re going to run a command and output the results to a text file so we can easily get at the information we want. I suggest using a local disk on the host, an ISO/template datastore, or maybe a shared datastore that you use for things like dumping host logs. The output file is only going to be a few MB, so the location isn’t really critical as long as it’s easily accessible. When we’re done, we’ll delete it from the host.

cd /vmfs/volumes/YOUR_DATASTORE

You’ll notice that this command changes your current directory to something like /vmfs/volumes/4ea066d9-d9f09a90-c026-0025b5aa002c. This is normal. Do not be alarmed.

At this point we’re going to run the command that will query the system for all the physical hardware, and export it to a text file.

smbiosDump > YOUR_SERVER_NAME.txt

You can name the file after the > whatever you want. Most of the time I make it unique, because I’m going to be running this command on multiple systems and want to easily identify which host each file came from.

At this point you can go back to the vSphere Client and open up the Datastore Browser on the datastore you ran the command on. You can get to this easily by clicking on the host in Host and Clusters and then under the Summary page, right clicking on the datastore listing and then Browse Datastore.

Use the Datastore Browser to download the file to your desktop. (Right click file > Download)

Run a search within the document and find the section that starts with Physical Memory Array. You should see a summary that lists how many slots the system has, as well as the maximum memory size. Then there will be an entry listed for each memory slot. For instance, on the Dell R710 I ran an export on, there were 18 slots for a maximum of 192GB. If there is memory installed in a slot you’ll see the size of the DIMM; otherwise you’ll see No Module Installed under size.
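
As with the ESXi output, you can skim this from a Mac or Linux terminal instead of scrolling; something along these lines will pull out the per-slot size information, though the exact labels can vary a bit by hardware vendor:

grep -E "Size|No Module Installed" YOUR_SERVER_NAME.txt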

At this point you can delete the file from the host, if you choose, either by utilizing the Datastore Browser or at the SSH session you may still have open.

rm YOUR_SERVER_NAME.txt

Now you can close your SSH session.

Little Megabits

  1. You really should never use 100Mb networking with VMware for much of anything. I’m not even sure 100Mb networking has any place in a modern datacenter, except maybe cheap connectivity to something like an iLO/DRAC.
  2. You should avoid using a single vNIC for any vSwitch, unless you just don’t care about things like load balancing or network redundancy.
  3. Not seen in the image, but Service Console/Management Network should not be on the same vSwitch as your VM Network port group. Good luck accessing your ESX host when all the bandwidth on your 100mb connection is used up by virtual machine traffic.
  4. The particular host in question did not have vMotion set up, because there was no shared storage for the hosts in the “cluster” — term used loosely.
  5. Any combination of the above is grounds for removal of virtualization privileges.

View Borked

I’ve been on a View 5.1 deployment with a customer all week, and part of the project involved deploying VMware vCenter Operations Manager (vCOPS) for View, version 1.01. I’ve done this a couple of times before and had no issues getting the Linux OVA-based vApp configured. But when I went to install the View adapter into a Windows VM, I got a strange message about how the installer was a 32-bit application and not able to run on a 64-bit system.

Two things wrong with this:

  1. Normally 32-bit apps run on 64-bit operating systems, unless they’re specifically configured not to.
  2. vCOPS for View is a 64-bit application, with a 64-bit installer. The system requirements state it can only run on Windows 2008 R2 or Windows 2003 R2 64-bit.

After playing around with the 1.01 installer, and then downloading and starting the installer for 1.0 just fine on the same system, I noticed that the published file size on VMware.com is 22MB, but the 1.01 installer I was downloading was only 16MB. I ran an MD5 checksum on the file and it didn’t match the published checksum on the website either. The file creation date showed sometime in late December, while the published file date is in early October.
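
(If you want to check a download yourself, it only takes a second. On a Mac it’s md5, on Linux it’s md5sum, with the filename being whatever you saved the installer as; Windows folks can use a utility like Microsoft’s FCIV to do the same thing.)

md5 name-of-the-installer-you-downloaded.exe
md5sum name-of-the-installer-you-downloaded.exe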

Eventually I was able to find a copy of a previously used 1.01 installer on another system, ran a checksum on it, and it matched the published checksum. I installed the adapter using this file and it worked just fine. The customer’s vCOPS environment is up and running.

I have a support case open with VMware right now letting them know about this issue; hopefully they get it corrected soon. I realize it’s not a particularly popular product compared to something like vSphere or even the View Connection Broker, but it’s hard to see how this could have gone on for so long (nearly a month) without someone else noticing.

TL;DR vCOPS for View 1.01 installer on vmware.com is screwed up, I’m working with VMware to get it fixed.

Fake SSD

Notes on tricking VMware into thinking a datastore is actually an SSD drive. Very useful if you’re in a lab environment and want to just test some of the features in vSphere 5 that center around flash storage (but don’t have the funds to dedicate to actually having it.)

However, it’s also useful if you actually have flash storage in a production environment but for some reason vSphere isn’t recognizing it as such.
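
For reference, the commonly documented way to do this on vSphere 5 boils down to adding a SATP claim rule with the enable_ssd option and then re-claiming the device. This is just a sketch: the device identifier here (mpx.vmhba1:C0:T0:L0) is an example, so substitute the identifier shown for your datastore’s backing device in esxcli storage core device list. After the reclaim, the last command should report Is SSD: true for the device:

esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba1:C0:T0:L0 --option="enable_ssd"
esxcli storage core claiming reclaim --device=mpx.vmhba1:C0:T0:L0
esxcli storage core device list --device=mpx.vmhba1:C0:T0:L0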

VMUG Conference

I used to go to a lot more of the Kansas City VMUG meetings back before I became a consultant (and had more control over my own schedule), but when I saw there would be a full-day event (and that the headline speaker would be Steve Wozniak) I made sure to block the day off on my calendar.

The conference was really well put together, kudos to the KC board members and everyone else involved with pulling it off. The atmosphere was described as “VMworld-like” and I’d have to agree.

In addition to Mr. Wozniak, there was a nice sprinkling of rock stars from the VMware community. @scott_lowe was there giving a presentation on how to be more organized (should have taken notes), @andreleibovici gave some interesting insights into the future of virtual end user computing, and Mr. Irish Spring (who goes by @irishyespring on Twitter but doesn’t tweet much) was there.

Irish Spring kind of sold me on VMware. In the mid-2000s, when I was just getting settled into my first real system administration job, I went to a presentation by Irish on (among other things) virtual desktop infrastructure. At the time, my position involved building desktop images for the university and providing a big chunk of tier 3 support to our help desk and desktop support people. We’d just started to get our feet wet in virtualization the summer before, and prior to Irish’s presentation I’d never even considered virtualizing desktops. I came away from that meeting really jazzed up about VMware. I knew the issues our team was struggling with, as well as the issues our faculty and staff struggled with when it came to computer labs. I went home and spent the rest of the evening essentially architecting and putting together the proposal to my boss that would eventually become Rockhurst University’s VDI project. This is the project that led to all the accolades and awards for me and the university. But that’s another story.

Irish, with his energy and enthusiasm, rubbed off on me and made me go out and do some really great things. It was ironic that the center of his speech to the crowd was getting your head out of IT and into the business processes, to see how you can use your knowledge to advance the business (before the business feels it needs to come help synergize IT). He spoke a lot about using the “big brains” we have to do more than just patch servers. IT people get to see the underbelly of the beast, and can do more than just be gatekeepers by helping to see things from the viewpoints of different stakeholders.

I couldn’t agree more.

Steve Jobs

One hundred years from now, people will talk about Steve Jobs the same way we talk about Alexander Graham Bell, Thomas Edison, Henry Ford, and the Wright brothers. Perhaps, as my friend Chris helped point out, he was a mix of Edison and John Lennon. Maybe he was a bit like Walt Disney or Jim Henson, a man personally tied to the brand he created.

Regardless, he was an inventor, a visionary, a man full of ideas. He was more than just another businessman or CEO; he personally held patents for many of the technologies used in Apple’s products. He was the perfect mix of creative genius and salesman. In the tech world, Steve Jobs was elevated to near deity-like status, but as cancer proved, he was still just a man.

Every CEO of every company on the planet should pay attention to this right now and ask themselves, “why won’t this happen when I die?” (@jayfanelli)

I tried to sit down and put together my thoughts on his passing last night, but couldn’t. I was too overcome with the emotions pouring out from people across the world on Twitter. I shared some of my own but it was interesting to watch the wake for a man happen in real time from people all across the world. People who loved and hated him all had emotions to share.

Even President Obama had something to say:

The world has lost a visionary. And there may be no greater tribute to Steve’s success than the fact that much of the world learned of his passing on a device he invented. Michelle and I send our thoughts and prayers to Steve’s wife Laurene, his family, and all those who loved him.

But I’m not sure those outside of the technology community could really feel the impact the way we all did. My wife didn’t understand last night why I was grieving for a man I’d never met, the founder of a company that now rivals ExxonMobil as the world’s largest. Without meeting him, Steve Jobs had a profound impact on my life. I credit him (and Bill Gates) for sparking my interest in technology… for making me what I am today.

The first computer I ever used was an Apple II when I was in kindergarten. Later, I learned how to do amazing things on some of the first Macintosh systems. I used to skip recess to go down to the elementary school library so that I could learn on devices that he helped create. And while my family can attest that I later held Apple and their products in contempt through much of the mid-90s, while pounding the drum for Microsoft, I eventually came back to the “distortion field” as Steve brought real innovation back to the industry.

The Apple II, the Macintosh, Pixar (who doesn’t love Toy Story?), the iPod, iPhone, iPad, iTunes. Disruptions to the status quo. Disruptions that all trace back to the leadership and creative mind of Steve Jobs. I don’t remember much about what computers were like before the Apple II or the Mac, but I know what movies were like before Pixar. I know what buying music was like before iTunes and the iPod. I know what phones were like before the iPhone, and I love my iPad. I wouldn’t want to go back to a world before the things Steve created existed. Even if you’re a hardened Android fan, you have to remember what smartphones were like before the iPhone and thank Apple and Steve Jobs for setting a new trend. Even if you’re a Microsoft fanatic, you have to thank him for keeping Bill on his toes for all those years, forcing each other to continue to innovate.

In my article last week, prior to the announcement of the iPhone 4S, I said this:

I still maintain that Steve Jobs will be present at the announcement, even after his recent retirement as Apple CEO. I think he will be there to hand it off to Tim Cook in some way, or perhaps participate in some FaceTime chat to highlight a new iOS 5 feature. At the very least, his presence will be felt.

There was an empty chair in the front row of the hall, with a cloth wrapped around it, marked Reserved. That was no doubt a chair for Steve, one he wouldn’t be in because of what we all now know. I think Apple knew this was coming soon, and probably played the announcement a bit low-key so as not to overshadow what could well have happened any day. That said, I have no doubt that Steve wanted to see one last keynote, one last product launch, before he passed on. His presence was felt. His presence will continue to be felt with every future Apple product.

At 56, Steve Jobs did more than most people do in 90 years. He was the original Apple genius, a master showman, and the original tech virtuoso. He will be missed.

AT&T Fun

I’ve had AT&T’s U-verse service since October 2009, the day we moved into our house. At its heart, it’s really a fantastic service offering… IPTV, whole-home DVR, advanced DSL, all wrapped up into a nice package. But for the last six months I’ve been struggling with a lot of different issues, ranging from broken DVRs and a freezing TV signal to Internet connections that go away at random. While the issues have not been persistent enough to track down an exact cause, they’ve been frustrating.

The other day, after watching Face/Off on HBO (for the first time, I know) and getting right to the climax of the movie, the whole TV signal froze and wouldn’t come back. It was 1AM and my wife was already sleeping, so I muted my frustration and went to bed, deciding to look into alternatives the next day.

Monday, I called up the two traditional cable providers in the area looking for pricing. Then, I hit Twitter with my plan:

Thinking of dumping AT&T U-verse for Surewest, anyone in KC area have any experience with them?

I actually didn’t get any responses from Surewest customers. What I did get was a little more surprising.

  1. A reply from Ron, a Surewest social media manager saying hi. Fairly standard stuff. (see here)
  2. A reply from an AT&T social media manager, asking for my phone number. This was a little more interesting. (see here)

I decided to DM my number to the AT&T manager, figuring what could it hurt? A little while later I get a call from a Jessica. She asks me what my issues are, and then vows to take care of them if I can wait a couple days while she follows up on them. I said sure, halfway thinking nothing was going to come from it.

Today I get a call from Diane in the “office of the President” of AT&T. Diane has obviously been talking to Jessica, knows what my issues are, and asks if I’ll stay on the line while they get one of their engineers on the call. Right before Diane hands me off to him (I neglected to write down his name), she gives me her direct phone number so I can follow up with her, and then the engineer runs some tests to see what’s going on with my service. He schedules a tech to come out the same day, and tonight that tech comes out and tests every line and piece of their equipment in my house.

Rick the technician ends up re-terminating some connections, and replacing my “Residential Gateway” (modem/router) with a model that within seconds proves it’s light years ahead of the previous version. We have a nice chat about networking, technology, etc. He leaves.

Where is this all going?

I’m consistently amazed with the level of customer service that a monolithic company like AT&T manages to provide for U-verse. Truth be told, this is not my first positive experience with them. Every time I’ve called their technical support for any type of issue, either with my setup or family who has the service, the people have always been friendly and helpful. They’re well trained, and for the most part seem to know what they’re talking about. Granted, they could invest in some better equipment, but I have yet to have an experience with one of their employees that put a bad taste in my mouth.

The fact that one of America’s largest corporations is monitoring their Twitter feed and pro-actively trying to correct issues that customers have, is really pretty awesome.

Customer service in America, on the whole, has gone to crap in the last 10 years. Ironically, it’s companies like AT&T, with their advanced networks, that have made it possible for corporate America to put an army of poorly trained and poorly paid people in call centers all around the world to cut costs. But thankfully, AT&T itself doesn’t seem to be following the trend it helped create.

I need to call Diane back tomorrow and thank her. Now, hopefully the service will be stable enough that I don’t need to call for support again. If not, I know who to talk to.


Originally published at techvirtuoso.com on April 27, 2011.