vExpert 2013

Earlier today, John Mark Troyer announced the 2013 vExpert list.

Shockingly, I made the cut, and I’m beyond honored. One of 580.

Full disclosure: I originally wrote this entire blog post earlier today from the point of view that I wasn’t included, so I’d have something ready to go discussing how I plan to increase my involvement in the community and try again next year. Aside from now getting to announce that I actually was selected, none of that outlook changes.

I wasn’t even sure if I’d apply for it when the application/self-nomination form went up last month, because I knew I hadn’t done anywhere near enough to contribute at the level of the current vExperts. That being said, I threw my name into the mix and have been waiting patiently since then to find out the results. While I’ve been tweeting and engaging people online about virtualization for a while now, I made it my mission a couple of years ago to do more. It’s difficult, with other obligations like work and family, to spend a lot of time giving back, but I will. (To be honest, I’m not sure how some of the current vExpert folks do it.)

Now that I’ve actually been selected, there is a huge weight to do more, partly to prove myself worthy of this selection, but also because of the realization that the title is only good for one year, and this is something I want to keep participating in. This year I hope to contribute a lot more in the way of tutorials on this site, regular news updates, and Twitter/social networking participation. I also need to dive deeper into providing assistance on the official VMware Communities site, something I’ve avoided doing so far.

I also need to go to VMworld this year.

For the sake of everyone who doesn’t know a lot about the vExpert program, this doesn’t mean I am suddenly imbued with all the knowledge of VMware’s various applications. As John said over on the VMware site:

“I want to personally thank everyone who applied and point out that a “vExpert” is not a technical certification or even a general measure of VMware expertise. The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, that weren’t named as vExpert this year.”

I hope to continue to learn and share as much as I can about VMware, and continue to be an evangelist for them.

Congrats to everyone who made the cut. I look forward to continuing to engage with all the other vExperts, and the rest of the community, in the coming year.

Update: Originally the list had 575 names, then 579, now 580. Also, shout out to my local KC VMUG people, who I also promise to attend meetings with regularly in the future.

Huawei is calling it quits in the United States

Probably because most US businesses were not too excited to base their infrastructure on the technology of a company that stole Cisco code and is run by former members of the Chinese military. Not too long ago, Sprint and SoftBank had to agree to a request by the US intelligence community to rip and replace any Huawei equipment on the Sprint network as a condition for their upcoming merger.

I’ve actually only heard firsthand of one of my customers with Huawei devices in production, and oddly enough it was their storage system (which until that point I didn’t even know they had a hand in). Unfortunately I didn’t get a chance to see what it looked like.

VMware has updated its certification names/logos, again

VMware has updated its certification names and logos, again. I guess nothing lives forever, nothing stays the same.

What was simply the VCP until September of last year was originally going to become the VCP-DV, and is now the VCP-DCV. The VCP-DT is still the VCP-DT, but the master-level certification, the VCDX, has become the VCDX-DCV. Logos have also been updated. “Data Center” is now two words instead of “Datacenter” because apparently that’s considered the industry standard (I didn’t realize there was such a thing).

Good thing I was waiting to order new business cards until after I could add a VCAP certification.

Oracle VM 3

Oracle VM 3 improved a lot, they are not close to Microsoft or VMware, but it is pretty good if you are not trying to do dramatic things like moving virtual machines around.

Gartner’s vice president and distinguished analyst Thomas Bittman, talking about how Oracle VM is poised to be the real competitor for VMware in the future. Not Hyper-V. Not Xen. I’m not one to really defend Microsoft or Citrix, but… have you ever actually seen Oracle VM running on a production system?

Using CDP with vSphere hosts

The other day I was tasked with adding a new VLAN to a customer’s vSphere cluster. The existing network configuration had just the default VM Network set up, with no trunks or tagged port groups. In this case the customer is in the process of adding a few virtual desktops (Citrix, blah) and wanted a separate DHCP scope for those machines.

In order to set up this VLAN, I would need to put each host in maintenance mode, reconfigure the physical switch ports that were providing connectivity to that host from access ports to trunk ports, add tags to the existing VM Network and Service Console port groups, and then provide connectivity to the new VLAN by adding a new port group tagged with that VLAN number.

(Note: if you need to trunk the connection the Service Console/Management Network uses, change the VLAN tag before you adjust the physical switch port settings. You’ll lose connectivity to the host temporarily until you change the switch port settings.)
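To make that concrete, here’s a rough sketch of both sides. The port group name is the one from this environment, but the VLAN IDs and switch interface below are placeholders for whatever yours actually uses. On the host, retagging the Service Console port group is one command from the console:

esxcfg-vswitch -p "Service Console" -v 100 vSwitch0

Then, on a typical Cisco IOS switch, converting the access port to a trunk looks something like this (older switches may also want a switchport trunk encapsulation dot1q line first):

interface GigabitEthernet0/12
switchport mode trunk
switchport trunk allowed vlan 100,200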

I set about trying to determine where each of the physical NIC ports on the hosts was plugged into the core switch. There are a few options to do this:

  1. Hope that the customer has proper documentation of their environment, from the initial setup and any changes that were made, indicating the switch ports. In this case, the customer did not.
  2. Hope that the switch has comments that indicate what is physically connected to it. In this case, there were no comments.
  3. Physically trace out each connection back to the switches. In this case, we were in the middle of a major winter storm in Kansas City, so I was working remotely for the customer.
  4. Use networking commands on the switch to attempt to identify what is plugged into each port.

You might expect that the MAC addresses of the vSwitch’s individual NICs would be listed in the results of a “show mac address-table dynamic” on the switch — except they aren’t. You can see the vNIC this way, but not the pNICs.

If you open the vCenter GUI and go to the Configuration > Networking section, next to each of the physical adapters configured in a vSwitch, you’ll see a blue box. Click on it, and if you’re using Cisco switches (and why wouldn’t you) you’ll see all the data about the switch, port, and configuration of the network port.

You’ll also get these results if you’re running on a UCS chassis against a Nexus switch, but in a slightly different format. With the UCS and other blade chassis type systems you can actually find other ways to determine the switch port you’re connected to, but that’s a topic for another blog post (and once I get more experience on the UCS.)
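If you’d rather pull the same information from the host’s command line, vim-cmd can dump the network hints for each vmnic. This is from memory, so treat it as a starting point rather than gospel; the output includes the CDP device ID and port ID the host is hearing:

vim-cmd hostsvc/net/query_networkhint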

What if none of this works?

If all this doesn’t work for you, make sure you’re using Cisco switches. CDP is a proprietary protocol, so your Dell, HP, Juniper, 3Com, Netgear, Trendnet, SuperCheapNet switches probably aren’t going to give you any of this data.

However, as of ESXi 5.0, VMware does support Link Layer Discovery Protocol (LLDP), which is the IEEE standardized version of CDP. The problem is they only support it with Distributed vSwitches, which requires Enterprise Plus licensing. A lot of the environments I work in either don’t have that licensing and/or have not adopted Distributed vSwitches. For reasons unknown, VMware does not support LLDP on regular vSwitches. (For more information on how to use LLDP check out Ivo Beerens’ post.)

If you’ve got Cisco equipment, but it’s still not working, make sure CDP is enabled on your hosts. As of ESX 3.5 it should be on by default, but it may have been disabled. For more information on how to troubleshoot this, check out VMware KB1003885.
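For reference, the check and the fix from that KB come down to two commands at the host console, where vSwitch0 is whichever vSwitch you’re working with:

esxcfg-vswitch -b vSwitch0

esxcfg-vswitch -B both vSwitch0

The first prints the current CDP mode (down, listen, advertise, or both), and the second sets the vSwitch to both listen for and advertise CDP information.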

Determining the layout of vSphere host memory

Memory utilization is important in VMware; most of the time it’s the most limiting factor in the virtual-to-physical consolidation ratio. Oftentimes I’m tasked with assessing how upgradable a physical host’s current memory configuration is. It’s easy to see from the vSphere Client how much memory you have installed in a host, but when you’re upgrading you need to know exactly how that memory is laid out on your motherboard so you can get the most bang for your buck.

There are basically three ways to do this:

  1. Open up the case and see. This is going to require downtime (because you wouldn’t open the case while you’re running production systems, right?) That’s all well and good if you can just vMotion your virtual machines to another host and shut it down. Problem is, if you’re having memory utilization issues, chances are you’re overcommitting on your hosts, so you’re going to need to shut down virtual machines to do this.
  2. Use an out-of-band management utility like DRAC or iLO. Great if your server has them configured, but a lot of people either don’t realize they have these or don’t bother to set them up until someone points out how useful they are. Configuring them usually requires a reboot of the host, which means downtime, and I just explained why that’s probably not great in this situation.
  3. SSH into your hosts and run a couple of commands. This is what I’m going to explain how to do.

Everything I’m going to show you is documented from the VMware KB. If you’d rather refer to those go here for ESXi 4.x/5.x or go here for ESX 3.x/4.x. Make sure you know what version you’re checking, so you can use the right commands.

ESXi 4.x/5.x

The first thing you’ll need to do is enable SSH on your hosts. Best practice is to leave SSH off and only turn it on when you need it. You can enable it by opening up the vSphere Client, selecting the Host and Clusters view, and then selecting the host you want to enable SSH on in the left-hand window. Select the Configuration tab, and then Security Profile from the options on the left. Under Services you’ll see SSH. Click on Properties, select SSH from the list of services, and then press Options. In the window, press Start to enable the SSH service.

Leave the settings that ask you about starting this service automatically set to manual. For security, you don’t want SSH turned on all the time, and you’ll get warnings from each host it’s enabled on if you leave it running. When we’re done, you’ll want to come back here and disable SSH on your host. (Note: If you’ve previously closed port 22 on your ESXi firewall, you’ll need to open that back up. By default the port is open but the service is not running.)

At this point you need to SSH into your host as root. Keep in mind unless you joined your ESXi box to your Active Directory domain, you probably can’t just use your normal network account to get into the host this way. It’s going to be root or another local account you’ve created.

If you’re on Windows, I suggest using Putty. If you’re on a Mac or Linux box, no need to download anything extra as it’s all built in. Just open up Terminal and away you go.

(I’m normally a Mac user, but I access my work demo lab through a Windows 7 virtual machine running on VMware View. So here are the results from Putty.)

The first thing you’ll want to do is navigate to a location you can easily access through the vSphere datastore browser, because we’re going to run a command and output the results to a text file so we can easily get at the information we want. I suggest using a local disk on the host, an ISO/template datastore, or maybe a shared datastore that you use for things like dumping host logs. The output file is only going to be a few MBs, so it’s not really critical as long as it’s easily accessible. When we’re done we’re going to delete it from the host.

cd /vmfs/volumes/YOUR_DATASTORE

You’ll notice that your current directory changes to something like this: /vmfs/volumes/4ea066d9-d9f09a90-c026-0025b5aa002c. This is normal. Do not be alarmed.
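(If you’re wondering why: the friendly datastore names under /vmfs/volumes are just symlinks to the real directories, which are named by UUID. A quick ls -l /vmfs/volumes will show your datastore name pointing at the long hex string.)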

At this point we’re going to run the command that will query the system for all the physical hardware, and export it to a text file.

cim-diagnostic.sh > YOUR_SERVER_NAME.txt

You can call the file after the > whatever you want. Most of the time I keep it unique because I’m going to be doing this command on multiple systems and want to easily identify which one it came from.

At this point you can go back to the vSphere Client and open up the Datastore Browser on the datastore you ran the command on. You can get to this easily by clicking on the host in Host and Clusters and then under the Summary page, right clicking on the datastore listing and then Browse Datastore.

Use the Datastore Browser to download the file to your desktop. (Right click file > Download)

Now the problem with this file is that Notepad doesn’t know how to handle the line endings ESXi uses, so when you open it up, everything runs together.

I would suggest opening the file in something like Notepad++ which is really far more useful and can read the log file correctly. It’s also helpful for other VMware logs that don’t save whitespace in a way Notepad likes. (Note, Mac users can open the file in TextEdit just fine.)

Run a search within the document and find the section that starts with Dumping instances of CIM_PhysicalMemory. You’ll see the first entry as Tag = 32.0, and if you scroll all the way down through the section it’ll keep going until it runs out of memory slots. For instance, the server I ran my export on is a Cisco UCS B250 with 46 memory slots, so the last entry is 32.45.

The key bits of information here are things like MaxMemorySpeed and Capacity if you’re trying to figure out what to buy. Capacity is listed in bytes, so 4294967296 is going to be a 4GB DIMM. There is also lots of other good information in the export, such as the position of the DIMM on the motherboard, the node and channel the memory is utilized by, whether the slot is even in use, and things like serial numbers and part numbers.
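If you’d also like to skim it straight from the shell, a couple of one-liners against the export work too. A sketch, using the field names as they appeared in my dump (yours may differ slightly):

grep -c "Tag = 32" YOUR_SERVER_NAME.txt

That counts the memory slot entries. And for the bytes-to-GB math on a Capacity value:

echo $((4294967296 / 1024 / 1024 / 1024))

which prints 4, confirming the 4GB DIMM above.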

At this point you can delete the file from the host, if you choose, either by utilizing the Datastore Browser or at the SSH session you may still have open.

rm YOUR_SERVER_NAME.txt

Now you can close your SSH session, and turn SSH back off on your host in the same section where you previously turned it on.

ESX 3.x/4.x

The method for obtaining this information on ESX is similar to the ESXi method explained above; the only real differences are that the command is different and the output file isn’t as detailed (although it’s much easier to read).

The first thing we’re going to need to do is enable SSH on the host. On ESX 3.x/4.x, SSH access is disabled by default for the root account; the SSH service does not allow root logins. Non-root users are able to log in with SSH, and you can then elevate to the root user. As an alternative to enabling SSH on your host, you can physically log in to the console of the host and run the commands there as well.

From VMware KB 8375637:

If you do not have any other users on the ESX host, you can create a new user by connecting directly to the ESX host with VMware Infrastructure (VI) or vSphere Client. Go to the Users & Groups tab, right-click on the Users list and select Add to open the Add New User dialog. Ensure that the Grant shell access to this user option is selected. These options are only available when connecting to the ESX host directly. They are not available if connecting to vCenter Server.

If you’re on Windows, I suggest using Putty. If you’re on a Mac or Linux box, no need to download anything extra as it’s all built in. Just open up Terminal and away you go.

(I’m normally a Mac user, but I access my work demo lab through a Windows 7 virtual machine running on VMware View. So here are the results from Putty.)

After logging in to your host with your regular user account, we need to elevate to the root user:

su -

You’ll be prompted for your root password. Enter it now.

The first thing you’ll want to do is navigate to a location you can easily access through the vSphere datastore browser, because we’re going to run a command and output the results to a text file so we can easily get at the information we want. I suggest using a local disk on the host, an ISO/template datastore, or maybe a shared datastore that you use for things like dumping host logs. The output file is only going to be a few MBs, so it’s not really critical as long as it’s easily accessible. When we’re done we’re going to delete it from the host.

cd /vmfs/volumes/YOUR_DATASTORE

You’ll notice that your current directory changes to something like this: /vmfs/volumes/4ea066d9-d9f09a90-c026-0025b5aa002c. This is normal. Do not be alarmed.

At this point we’re going to run the command that will query the system for all the physical hardware, and export it to a text file.

smbiosDump > YOUR_SERVER_NAME.txt

You can call the file after the > whatever you want. Most of the time I keep it unique because I’m going to be doing this command on multiple systems and want to easily identify which one it came from.

At this point you can go back to the vSphere Client and open up the Datastore Browser on the datastore you ran the command on. You can get to this easily by clicking on the host in Host and Clusters and then under the Summary page, right clicking on the datastore listing and then Browse Datastore.

Use the Datastore Browser to download the file to your desktop. (Right click file > Download)

Run a search within the document and find the section that starts with Physical Memory Array. You should see a summary that lists how many slots the system has, as well as the maximum memory size. Then there will be an entry listed for each memory slot. For instance, on the Dell R710 I ran an export on, there were 18 slots for a maximum of 192GB. If there is memory installed in a slot you’ll see the size of the DIMM; otherwise you’ll see No Module Installed under size.
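If you just want a quick count of the open slots before downloading anything, you can grep for the exact string mentioned above right at the SSH session:

grep -c "No Module Installed" YOUR_SERVER_NAME.txt

Subtract that number from the total slot count and you know how many DIMMs are installed.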

At this point you can delete the file from the host, if you choose, either by utilizing the Datastore Browser or at the SSH session you may still have open.

rm YOUR_SERVER_NAME.txt

Now you can close your SSH session.

Don’t use a single 100mb vNIC

  1. You really should never use 100mb networking with VMware for much of anything. I’m not even sure 100mb networking has any place in a modern datacenter, except maybe cheap connectivity to something like an iLO/DRAC.
  2. You should avoid using a single vNIC for any vSwitch, unless you just don’t care about things like load balancing or network redundancy.
  3. Not seen in the image, but Service Console/Management Network should not be on the same vSwitch as your VM Network port group. Good luck accessing your ESX host when all the bandwidth on your 100mb connection is used up by virtual machine traffic.
  4. The particular host in question did not have any vMotion set up, not that it could have, since there was no shared storage for the hosts in the “cluster” (term used loosely).
  5. Any combination of the above is grounds for removal of virtualization privileges.

vCOPS for View download is borked

I’ve been on a View 5.1 deployment with a customer all week, and part of the project involved deploying VMware vCenter Operations Manager (vCOPS) for View, version 1.01. I’ve done this a couple of times before, and had no issues getting the Linux OVA-based vApp configured. Then, when I went to install the View adapter into a Windows VM, I got a strange message about how the installer was a 32-bit application and not able to run on a 64-bit system.

Two things wrong with this:

  1. Normally 32-bit apps run on 64-bit operating systems, unless they’re specifically configured not to.
  2. vCOPS for View is a 64-bit application, with a 64-bit installer. The system requirements state it can only run on Windows 2008 R2 or Windows 2003 R2 64-bit.

After playing around with the 1.01 installer, and then downloading and starting the installer for 1.0 just fine on the same system, I noticed that the published file size on VMware.com is 22MB, but the 1.01 installer I was downloading was only 16MB. I ran an MD5 checksum on the file and it didn’t match the published checksum on the website either. The file creation date shows sometime in late December, while the published file date is somewhere in early October.

Eventually I was able to find a copy of a previously used 1.01 installer on another system, ran a checksum on it, and it matched the published checksum. Installed the adapter using this file and it worked just fine. Customer vCOPS environment is up and running.

I have a support case in with VMware right now letting them know about this issue; hopefully they get it corrected soon. I realize it’s not a particularly popular product compared to something like vSphere or even a View connection broker, but it’s hard to see how this could have gone on for nearly a month without someone else noticing.

TL;DR vCOPS for View 1.01 installer on vmware.com is screwed up, I’m working with VMware to get it fixed.

Faking an SSD drive in vSphere

Notes on tricking VMware into thinking a datastore is actually an SSD drive. Very useful if you’re in a lab environment and want to just test some of the features in vSphere 5 that center around flash storage (but don’t have the funds to dedicate to actually having it.)

However it’s also useful if you actually have flash storage in a production environment but for some reason vSphere isn’t recognizing that fact.
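For the vSphere 5 case, the gist of the trick, as I understand it (so double-check against your own setup; the naa ID below is a placeholder), is a PSA claim rule that tags the device with the enable_ssd option, followed by a reclaim so the rule takes effect:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6001405f81234 --option "enable_ssd"

esxcli storage core claiming reclaim --device naa.6001405f81234

esxcli storage core device list --device naa.6001405f81234

If it worked, that last command reports Is SSD: true for the device. For shared storage you’d swap VMW_SATP_LOCAL for whatever SATP actually claims the device.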

svMotion will rename underlying folders/files, once again

VMware has released Update 2 of vSphere 5.0, and among the fixes is one that should stand out as correcting the loss of a nice feature. Performing a Storage vMotion of a virtual machine will once again rename the underlying folder and VMDK files associated with the machine.

vSphere 5 Storage vMotion is unable to rename virtual machine files on completing migration

In vCenter Server, when you rename a virtual machine in the vSphere Client, the vmdk disks are not renamed following a successful Storage vMotion task. When you perform a Storage vMotion of the virtual machine to have its folder and associated files renamed to match the new name, the virtual machine folder name changes, but the virtual machine file names do not change.

This behavior was present in the 4.x branch, and was annoyingly removed from 5.0. Thankfully, VMware dropped it back in. Note that this feature is still missing from vSphere 5.1, but one can only assume that it will be added in a future update release.

Kansas City VMUG Conference

I used to go to a lot more of the Kansas City VMUG meetings back before I became a consultant (and had more control over my own schedule), but when I saw there would be a full-day event (and that the headline speaker would be Steve Wozniak) I made sure to block the day off on my calendar.

The conference was really well put together, kudos to the KC board members and everyone else involved with pulling it off. The atmosphere was described as “VMworld-like” and I’d have to agree.

In addition to Mr. Wozniak, there was a nice sprinkling of rock stars from the VMware community. @scott_lowe was there giving a presentation on how to be more organized (should have taken notes), @andreleibovici gave some interesting insights into the future of virtual end user computing, and Mr. Irish Spring (who goes by @irishyespring on Twitter but doesn’t tweet much) was there.

Irish Spring kind of sold me on VMware. In the mid-2000s, when I was just getting settled into my first real system administration job, I went to a presentation by Irish on (among other things) virtual desktop infrastructure. At the time, my position involved building desktop images for the university and providing a big chunk of tier 3 support to our help desk and desktop support people. We’d just started to get our feet wet in virtualization the summer before, and prior to Irish’s presentation I’d never even considered virtualizing desktops. I came away from that meeting really jazzed up about VMware. I knew the issues our team was struggling with, as well as the issues our faculty and staff struggled with when it came to computer labs. I went home and spent the rest of the evening essentially architecting and putting together the proposal to my boss that would eventually become Rockhurst University’s VDI project. That’s the project that led to all the accolades and awards for me and the university. But that’s another story.

Irish, with his energy and enthusiasm, rubbed off on me and made me go out and do some really great things. It was ironic that the center of his speech to the crowd was getting your head out of IT and into the business processes, to see how you can use your knowledge to advance the business. (Before the business side feels it needs to come help synergize IT.) He spoke a lot about using the “big brains” we have to do more than just patch servers. IT people get to see the underbelly of the beast, and can do more than just be gatekeepers by helping to see things from the viewpoints of different stakeholders.

I couldn’t agree more.

Samba, more than SMB

Samba 4.0 comprises an LDAP directory server, Heimdal Kerberos authentication server, a secure Dynamic DNS server, and implementations of all necessary remote procedure calls for Active Directory. Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.

This is certainly interesting; I didn’t realize this was in development, but the latest version of Samba (the popular free and open source implementation of the Windows file sharing protocols) now lets you run a true open source equivalent to Microsoft Active Directory.

If someone were to bundle this new Samba domain controller code into a Linux OVA, alongside the vCenter Server Appliance (based on Linux) and the VMware Web Client, the reliance on Microsoft Windows servers for a functional VMware environment would erode even further.

Steve Jobs

One hundred years from now, people will talk about Steve Jobs the same way we do of Alexander Graham Bell, Thomas Edison, Henry Ford and the Wright brothers. Perhaps, as my friend Chris helped point out, he was a mix of Edison and John Lennon. Maybe he was a bit like Walt Disney or Jim Henson, a man who was personally tied to the brand he created.

Regardless, he was an inventor, a visionary, a man full of ideas. He was more than just another businessman or CEO; at Apple, he personally held patents for many of the technologies used in their products. He was the perfect mix of creative genius and salesman. In the tech world, Steve Jobs was elevated to near deity-like status, but as cancer proved, he was still just a man.

Every CEO of every company on the planet should pay attention to this right now and ask themselves, “why won’t this happen when I die?” (@jayfanelli)

I tried to sit down and put together my thoughts on his passing last night, but couldn’t. I was too overcome with the emotions pouring out from people across the world on Twitter. I shared some of my own but it was interesting to watch the wake for a man happen in real time from people all across the world. People who loved and hated him all had emotions to share.

Even President Obama had something to say:

The world has lost a visionary. And there may be no greater tribute to Steve’s success than the fact that much of the world learned of his passing on a device he invented. Michelle and I send our thoughts and prayers to Steve’s wife Laurene, his family, and all those who loved him.

But I’m not sure those outside of the technology community could really feel the impact the way we all did. My wife didn’t understand last night why I was grieving for a man I’d never met, the founder of a company that now rivals ExxonMobil as the world’s largest. Without meeting him, Steve Jobs had a profound impact on my life. I credit him (and Bill Gates) for sparking my interest in technology… for making me what I am today.

The first computer I ever used was an Apple II, when I was in kindergarten. Later, I learned how to do amazing things on some of the first Macintosh systems. I used to skip recess to go down to the elementary school library so that I could learn on devices that he helped create. And while my family can attest to me later holding Apple and their products in contempt through much of the mid-90s, while pounding the drum of Microsoft, I eventually came back to the “distortion field” as Steve brought real innovation back to the industry.

The Apple II, the Macintosh, Pixar (who doesn’t love Toy Story), iPod, iPhone, iPad, iTunes. Disruptions to the status quo. Disruptions that all exist because of the leadership and creative mind of Steve Jobs. I don’t remember much about what computers were like before the Apple II or the Mac, but I know what movies were like before Pixar. I know what buying music was like before iTunes and the iPod. I know what phones were like before the iPhone, and I love my iPad. I wouldn’t want to go back to a world before the things Steve created existed. Even if you’re a hardened Android fan, you have to remember what smartphones were like before the iPhone and thank Apple and Steve Jobs for setting a new trend. Even if you’re a Microsoft fanatic, you have to thank him for keeping Bill on his toes for all those years, each forcing the other to continue to innovate.

In my article last week, prior to the announcement of the iPhone 4S, I said this:

I still maintain that Steve Jobs will be present at the announcement, even after his recent retirement as Apple CEO. I think he will be there to hand it off to Tim Cook in some way, or perhaps participate in some FaceTime chat to highlight a new iOS 5 feature. At the very least, his presence will be felt.

There was an empty chair in the front row of the hall, with a cloth wrapped around it marked Reserved. That was no doubt a chair for Steve, one he wouldn’t be in because of what we all now know. I think Apple knew this was coming soon, and probably played the announcement a bit low-key so as not to overshadow what could have happened any day. That said, I have no doubt that Steve wanted to see one last keynote, one last product launch, before he passed on. His presence was felt. His presence will continue to be felt with every future Apple product.

At 56, Steve Jobs did more than most people do in 90 years. He was the original Apple genius, a master showman, and the original tech virtuoso. He will be missed.

Fun with AT&T U-verse

I’ve had AT&T’s U-verse service since October 2009, the day we moved into our house. At its heart, it’s really a fantastic service offering… IPTV, whole-home DVR, advanced DSL, all wrapped up into a nice package. But for the last 6 months I’ve been struggling with a lot of different issues, ranging from broken DVRs and freezing TV signals to Internet connections that go away at random. While the issues have not been persistent enough to track down an exact cause, they’ve been frustrating.

The other day, after watching Face/Off on HBO (for the first time, I know) and getting right to the climax of the movie, the whole TV signal froze and wouldn’t come back. It was 1AM and my wife was already sleeping, so I muted my frustration and went to bed, deciding to look into alternatives the next day.

Monday, I called up the two traditional cable providers in the area looking for pricing. Then, I hit Twitter with my plan:

Thinking of dumping AT&T U-verse for Surewest, anyone in KC area have any experience with them?

I actually didn’t get any responses from Surewest customers. What I did get was a little more surprising.

  1. A reply from Ron, a Surewest social media manager saying hi. Fairly standard stuff. (see here)
  2. A reply from an AT&T social media manager, asking for my phone number. This was a little more interesting. (see here)

I decided to DM my number to the AT&T manager, figuring what could it hurt? A little while later I get a call from a Jessica. She asks me what my issues are, and then vows to take care of them if I can wait a couple days while she follows up on them. I said sure, halfway thinking nothing was going to come from it.

Today I get a call from Diane in the “office of the President” of AT&T. Diane has obviously been talking to Jessica, knows what my issues are, and asks if I’ll stay on the line while they get one of their engineers on the call. Right before Diane hands me off to him (I neglected to write down his name) she gives me her direct phone number so I can follow up, and then the engineer runs some tests to see what’s going on with my service. He schedules a tech to come out the same day, and tonight that tech comes out and tests every line and piece of their equipment in my house.

Rick the technician ends up re-terminating some connections, and replacing my “Residential Gateway” (modem/router) with a model that within seconds proves it’s light years ahead of the previous version. We have a nice chat about networking, technology, etc. He leaves.

Where is this all going?

I’m consistently amazed with the level of customer service that a monolithic company like AT&T manages to provide for U-verse. Truth be told, this is not my first positive experience with them. Every time I’ve called their technical support for any type of issue, either with my setup or family who has the service, the people have always been friendly and helpful. They’re well trained, and for the most part seem to know what they’re talking about. Granted, they could invest in some better equipment, but I have yet to have an experience with one of their employees that put a bad taste in my mouth.

The fact that one of America’s largest corporations is monitoring their Twitter feed and proactively trying to correct issues that customers have is really pretty awesome.

Customer service in America, on the whole, has gone to crap in the last 10 years. Ironically, it’s advanced networks like AT&T’s that have let corporate America put armies of poorly trained and poorly paid people in call centers all around the world to pad the bottom line. But thankfully AT&T themselves don’t seem to be following the trend they’ve helped create.

I need to call Diane back tomorrow and thank her. Now, hopefully the service will be stable enough that I don’t need to even call for support again. If not, I know who to talk to.


Originally published at techvirtuoso.com on April 27, 2011.

Google stripping support for H.264 video out of Chrome

In a surprise announcement on the Chromium Blog today, Google announced that they would be phasing out H.264 support from the Google Chrome web browser, in favor of the open sourced WebM standard. The announcement further muddies the waters of HTML5 video support.

To that end, we are changing Chrome’s HTML5 <video> support to make it consistent with the codecs already supported by the open Chromium project. Specifically, we are supporting the WebM (VP8) and Theora video codecs, and will consider adding support for other high-quality open codecs in the future. Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.

What is unclear is how Google can on one hand say that their goal is to enable open innovation, and yet still justify bundling the proprietary Adobe Flash plugin with Chrome.

The biggest support for H.264 in HTML5 video comes from Apple, which uses it in Safari, specifically on the iPhone, iPad and other iOS platform devices. Because Steve Jobs doesn’t like to run Flash unless he’s had a few drinks first, and even then only with protection, there is no Flash support on any iOS device. If WebM were to take off, Apple would need to act to incorporate support or leave millions of iOS users unable to load most web video sites.

However, a clear winner emerging from all of this is unlikely.

Prior to this announcement, Chrome had the unique distinction of being the only major browser to support both technologies. Firefox has never supported H.264 and will not in its next version, but Internet Explorer 9, which will be released sometime in 2011, does. Currently the only other mainstream browser that supports WebM is Opera, though Firefox 4 will enable support for that technology when it is released. Safari provides no support for WebM, nor does any current or announced version of Internet Explorer.

Factor in Ogg Theora, and you have a codec that is almost universally supported by Firefox, Chrome and Opera… just not Internet Explorer or Safari.

Confused? Yeah, me too.

The reasoning for all of this comes down to licensing, something most end users don’t care about. We’re generally just happy when technology works as advertised. But Google doesn’t want to pay anyone for anything they don’t have to, and supporting WebM means not paying as much money or being bound to a restrictive license agreement.

Chrome used to be the browser that would play any of the three major HTML 5 video formats. Going forward from today, it has voluntarily neutered itself.


Originally published at techvirtuoso.com on January 11, 2011.

The mythical Verizon iPhone has arrived

Somewhere deep in the heart of the AT&T headquarters, their executives are huddled around holding a vigil to mourn the loss of the exclusive US contract. Likewise, Google execs are probably throwing chairs at the wall screaming “I thought we had something special!”

No longer a mythical unicorn, the much anticipated Verizon iPhone is now a reality. Available February 3 for existing Verizon customers (props to them for that) and then February 10 for everyone else.

The new device is almost exactly like the old one except for some small differences:

  • CDMA radio instead of GSM, this also means a slightly altered external antenna design
  • Support for Verizon Mobile Hotspot, allowing 5 devices to connect to the iPhone and use Verizon’s data service

There are a few differences between Verizon and AT&T that should be pointed out:

  1. Verizon’s data network is larger, meaning more bars in more places.
  2. AT&T’s data network is faster, meaning when you get service you’re going to cruise faster.
  3. CDMA technology doesn’t allow for simultaneous voice and data usage. On a call and want to look up on Google Maps where to meet your friend for lunch? Too bad. Gotta wait for your call to end.

The biggest disappointment, though not an unexpected one, is that the Verizon iPhone will not support LTE technology, which would have allowed for faster data transfers and simultaneous voice and data. However, given that Verizon’s LTE network just started rolling out a few months ago, it isn’t surprising that Apple chose not to support it. It would have also required further alterations to the iPhone.

The unknown right now is what version of iOS this new CDMA iPhone will run. Will the iOS 4.2.1 guts support it? Will it require a 4.2.2 update? Will we get 4.3? Will the GSM and CDMA phones run the same iOS version? Or will it all be some sort of carrier update that doesn’t involve a new version of iOS?

Last, Apple COO Tim Cook left the door wide open to future networks when he said this contract with Verizon is multi-year but non-exclusive.

Let the Sprint iPhone discussion commence.

(Or T-Mobile, if anyone still cares about them.)

Updated: It seems that the new iOS version will be 4.2.5, via Engadget who got to play with one after the announcement.


Originally published at techvirtuoso.com on January 11, 2011.

Will you all please shut up about the Verizon iPhone?

The boys who cried wolf (AKA The Wall Street Journal, et al.) are all indicating that Tuesday will bring the announcement of the long-awaited iPhone 4 on Verizon. I hope they’re finally right.

Not because I’m going to switch, no, I’m actually pretty satisfied with my AT&T service, having been a customer for a long while before the launch of the first iPhone. I’ll just be glad when the noise makers and complainers can have another option. I hope that Verizon’s network works better for them than AT&T (although I kinda also hope it’s just as bad) so that they’ll shut up. I also look forward to another network getting some of the load so that my service will be even more reliable than it already is.

I can’t be alone in this thinking: if AT&T’s network were as goddamn horrible across the entire country as the people in San Francisco and New York make it out to be, no one would use it. Fact is, I and millions of other subscribers made the choice to use it long before the iPhone. I even used to live down the street from the world headquarters of Sprint, and still used AT&T because I got better service.

I’m not discounting that there are people with horrible AT&T service. I’ve been places where that is the case, and I know people who have this problem on a regular basis. It sucks, but chances are no one has forced you to use an iPhone this whole time.

I’ll also be glad when this golden phone finally does arrive, so we can stop obsessing about it. The phone will come out, AT&T’s subscriber numbers will slightly decrease, Verizon’s will see an increase, and Apple’s profits will go up. The sun will still rise in the east and set in the west. Choice is good, but the tech world needs to stop treating this like we’re awaiting the second coming of Christ, and treat it like what it is, like what happens all around the world with the iPhone on multiple carriers: the same phone, on another network.


Originally published at techvirtuoso.com on January 9, 2011.

Using LastPass and YubiKey to secure your online life

If the recent Gawker password breach (re)taught us anything, it’s the old and valued lesson of “don’t use the same password everywhere” — but as often as I repeat that phrase and cringe a little bit when I find out someone else did it, I’ve been just as guilty of this cardinal sin of network security myself… from time to time. It’s hard not to.

When you’re as active on the Internet as I am, it’s impossible to resist the urge to duplicate passwords, especially if you’re against writing them down. So you’re left to memorize them all, hope you don’t forget, and hope that you can rely on the splendid password-reset-via-email process later on.

All of the Gawker fun also taught (or should have taught) website administrators like myself to take better care of their users. Gawker fouled up in a huge way (beyond simply exposing user data) by not properly securing the information in their database in the first place. Gawker used an easily crackable cipher (DES), which was deprecated in favor of a newer industry standard (AES) long ago.

Since the launch of this site, we’ve relied on third parties to act as the gatekeepers for user interaction (first using JS-Kit/Echo and now Disqus). For you, it has the benefit of not having to remember yet another password or create another account just to comment here. On the back end, it allows us to focus on delivering content and less on keeping a database of user information secured. We’re relying on people with bigger and better security resources (Disqus, OpenID, Twitter or Facebook) to secure your presence on our site.

But what about every other site (or even the four mentioned above), where you have to register a username, create a password, and keep it safe and secure? Remembering unique passwords for every site is impossible, using the same one is a no-no, and writing them down and keeping them in your desk drawer isn’t practical or secure. What do you do with those passwords?

Password Management

Who hasn’t seen the Internet Explorer password prompt at least 10,000 times in their lives? Or the similar prompts from Firefox, Safari, Chrome, Opera, etc. Almost every browser created in this decade has included some sort of password manager, and almost anyone who has used them will tell you they’re all crap.

For one thing, they only work with one browser. For another, they’re almost as secure as the previously mentioned notebook of passwords. Last, they’re not really designed to keep you secure, they’re designed to be a convenient way to re-access commonly used websites.

Most of the time, I turned the feature off. The idea of using a password manager seemed less secure than trying to just remember them all myself. That all changed recently.

LastPass

After being quite inefficient about password management for the past… well, ever… I decided it was time to get serious about securing my online life and in turn take the burden of remembering all of those passwords off myself. I started using LastPass a few months ago (before the Gawker breakdown) and had slowly begun the process of migrating my passwords into it. I wanted to give it a chance to earn my trust before jumping feet-first into the pool of letting someone else hold all my passwords.

I selected LastPass after evaluating many alternatives. KeePass, 1Password, and Roboform were among the ones I looked at. All great options, but not the one I went with in the end. Here’s why:

  1. LastPass runs on anything and everything, and it syncs all of the resources together. Windows, Mac, Linux, Internet Explorer, Firefox, Safari, Chrome, iPhone, Android, Blackberry, Windows Mobile, Windows Phone (just announced), even Symbian. Basically, anything I could touch had to give me the ability to access my passwords, and LastPass has their competition beat there. Noticeably absent from the supported list is Opera. I don’t use Opera myself, but my guess is that now that they have true plugin support, the LastPass crew will probably add them to the list shortly.
  2. No password manager is perfect, but LastPass is close. It’s excellent about knowing what to fill in, what to save, what not to save, and when to step in and help.
  3. It’s free, for 95% of the service. However, as I usually do, I suggest shelling out the ridiculously cheap $12 a year to get the premium version. Why? Because of my next two important points…
  4. Mobile access. LastPass will work in any browser for free, but if you want to run it on your iPhone, Android, etc., you’re going to need the premium account. The app itself, though, is free.
  5. Multifactor authentication through YubiKey. The free version will allow you to build your own key for multifactor authentication, but if you really want to get serious about security, you’re going to want to do it through a YubiKey. (Of course, that key will also set you back $25.)

Browser Integration

Having tested LastPass in both Google Chrome (10) and Mozilla Firefox (4), I can say that the Firefox version is superior, but not by much. When I initially tested LastPass, I did so through Google Chrome. The installer rounded up all of the passwords stored in the default password managers of Internet Explorer, Firefox and Google Chrome that were installed on my system and put them into LastPass. This made the initial learning curve very easy as I didn’t have to go through and train it for every single one I was already allowing the browsers to remember.

After my desktop, when I set up LastPass on my laptop, it also sucked up the local password cache and avoided duplicating already-integrated passwords.

There are a few key things LastPass does that none of the integrated password managers will, to save you time.

  1. When I create new accounts, LastPass will automatically detect it and offer to generate a random password for me based on my complexity requirements. It automatically fills in the data and saves it for future use. This works 99% of the time and normally requires little input or assistance from me.
  2. Whenever I change my password on a website, LastPass will not only know my old password and offer a new one, it automatically saves the change in its cache.
  3. It syncs all the data across multiple browsers. It’s no longer a massive headache to test new browsers. Moving from Chrome to Firefox to IE and back again is painless (well, except for using Internet Explorer itself) — changes made in one browser migrate to all the other browsers.

Security

But putting all this data into the cloud must be insecure! And it may be… if you were using another provider.

LastPass, despite syncing all this information into the cloud, actually stores the password database itself on your local system. What LastPass has on its servers are one-way salted hashes, with all your real data stored locally in an AES-256 encrypted database. Your passwords are encrypted and decrypted on your local machine, not on their servers. What all this means is that if someone were to hack LastPass and get your salted hashes, they’d be about as useful as a pile of salted meat. Short of computing horsepower beyond what the top government security agencies of the world have, and a limitless amount of free time, it’s all worthless without your master password.

Which, by the way, LastPass has no idea what your master password is, because they never have it. If you change it on your account, LastPass has to re-encrypt all the data and resend the hashes to their servers.

They also use SSL to further encrypt all of the already AES encrypted traffic between your system and their servers. However, the amount of data being sent back and forth is so small that there is little if any performance loss in your browser and your system hardly notices what’s going on.

Once the salted hashes of your password reach their servers, when they go to back everything up (which they do daily to Amazon’s S3 service) and store it offsite, they further encrypt that data using GPG.

So make your master password strong, but something you can remember. A great website for coming up with new passwords is howsecureismypassword.net — it will literally tell you how long it would take someone with a desktop computer to brute force your master password. This is all assuming they gain access to your local database, etc. Want to know my master password? Too bad. I will tell you though, it would take you 564 billion years to crack it.
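As a back-of-the-envelope version of that math (the numbers here are illustrative assumptions, not my actual password): a 14-character password drawn from roughly 96 printable characters, against an attacker making a billion guesses per second, works out in years to

echo "96^14 / (10^9 * 3600 * 24 * 365)" | bc

which is on the order of 179 billion years, the same ballpark the site reports.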

But, computing horsepower gets more powerful all the time. Brilliant programmers, hackers, and engineers come up with new ways to make them faster, string them together and take that 564 billion year number down a notch. Even with all this advanced encryption an enterprising hacker could still manage to get a key logger on your system and record your master password.

So what is a paranoid person like myself going to do to even the odds? Multifactor authentication.

YubiKey

Something you know, and something you have.

There are a lot of multifactor authentication methods out there. I won’t get into all of them, because in this case, LastPass really works best with only one. The YubiKey by Yubico.

The YubiKey is a small USB token about the size of a door key. It comes in any color you want as long as it’s black, or white, and there is just a one time cost of $25 for Yubico to send you the token. It’s tough, and easy to use. It’s crush proof and water proof, has no battery or moving parts. Just plug it into any USB slot on your computer and it’ll be recognized as a USB Input Device. Because of this there are no drivers required and it works on Windows, Mac or Linux automatically.

Once you receive your YubiKey, the process of associating it with your LastPass account is straightforward and simple. When you load your browser, after entering your master password you get the prompt for your YubiKey. Touch the green button and away you go. It only adds a second to the authentication process and dramatically decreases your chances of having your account compromised.

But what about key loggers? Since this is just a fancy keyboard with only one key, can’t they log that? Sure. Here’s the problem.

The YubiKey generates a random 44-character one-time passcode that changes every time you generate it.

Each generated passcode is actually an AES-128 encrypted block containing an obfuscated unique secret ID for your YubiKey, a session counter, time stamp, session token, random values and a CRC-16 checksum. To sum it all up: a bunch of random stuff, further encrypted into more random stuff.

What it amounts to, is that without both your master password and your YubiKey, no one is getting access to your accounts.

Strong Passwords per Site

But all this work is futile if you continue to use the same passwords as before, or allow the same passwords to be used on multiple websites or systems. Thankfully, LastPass provides an interesting tool called the Security Challenge that will locally decrypt and analyze your passwords, look for weak passwords and let you know what duplicates exist. I was shocked the first time I ran the analyzer, but now I work to squeak out every last bit to raise my score each week.

At this point I’m regularly generating 12–16 character random and complex passwords for every site I have accounts on. According to the latest score I’m among the top 1000 users of the tool ranking 942nd overall. Look out 941, I’m R*[email protected]@-ing for you.

The point is that I don’t know what any of my site passwords are, but each is unique and almost impossible to brute force in a reasonable amount of time (3 quadrillion years for the one mentioned above). While it doesn’t make the chances of my Facebook account being compromised zero, it significantly reduces the risk of such an event taking place. By the time someone tried it only a few times, Facebook would (should) lock them out, and the chances they’ll guess correctly on the first try, even knowing all the exact complexity requirements used, are almost infinitesimal.

Conclusion

Is your LastPass master password truly the last password you’ll ever need? No. Your system password is still important to have and keep strong. I encourage people to encrypt their local disks (especially laptops) and use a unique and long passcode/PIN for decryption, along with a TPM or USB key, using something like BitLocker (which I’ll be covering in a future article). That way, just getting to your database requires so many steps, and such complex ones, that I’d venture to say it’s bulletproof.

But if I can use LastPass to narrow the number of passwords I’m required to recall on a daily basis from the hundreds down to around 5, and make the ones I don’t even want to remember anymore so complex that I couldn’t even if I tried, then I think it’s more than worth it.

After Thought

Last night I stumbled on a deal where you can get a YubiKey and one year of LastPass for only $30; this normally would be $37. Nice little chunk of change. The even better deal is you can get two YubiKeys and one year of LastPass for only $45, a $62 value. You can associate multiple YubiKeys with your account, and then in the event your primary one is lost or stolen, you can dig your reserve key out of a safe location, remove the lost key, and later replace it.

Frank also pointed out to me last night something I neglected to mention: you can deactivate the YubiKey requirement from a trusted computer, such as a primary system that is in a secure location. A trusted system would obviously be one you’ve configured to bypass all of the security checks for your account. Right now I don’t have any systems where I bypass all of the checks, so I forgot to talk about it.

Something else I forgot to say: you can also disable the YubiKey through an email verification, but if your email password is protected by LastPass that may be harder to do. My LastPass account is on my iPhone as well, so I could go that route to gain access to my passwords in the event of a failure. Again, I forgot to mention it in the article, but since you obviously can’t hook a YubiKey USB token into an iPhone, you can set up pre-authenticated mobile devices to only require a passcode to unlock. Combined with a security lock on the phone, the phone itself becomes a sort of “token” you have to have to get in.

There are also other ways to perform multifactor authentication against LastPass that don’t involve a YubiKey, including your own preconfigured key like what I mentioned, as well as a paper card you create that is unique to your account. I just think the YubiKey is the easiest and most secure way to go.


Originally published at techvirtuoso.com on December 29, 2010.