The one thing hurting your company’s quest for talent

Some tech companies attempt to impede the natural flow of talent by tying the hands of employees with non-compete agreements. … It’s not hard to see why some companies like them. The whole point of these agreements is to discourage employees from seeking greener pastures.

In truth, there is no free lunch. … Tempting though they may be, non-competes are bad for everyone they touch, employees and employers alike. … The bottom line is that non-compete agreements are bad for business. They are anti-competitive and anti-capitalist. … They reduce productivity, create labor market inefficiencies, depress wages and discourage innovation.

Non-compete? More like non-competitive.

It’s about the business model

Marco Arment:

… the reason I choose to minimize Google’s access to me is that my balance of utility versus ethical comfort is different. Both companies do have flaws, but they’re different flaws, and I tolerate them differently:

Apple is always arrogant, controlling, and inflexible, and sometimes stingy. Google is always creepy, entitled, and overreaching, and sometimes oblivious.

How you feel about these companies depends on how much utility you get out of their respective products and how much you care about their flaws.

Simply put, Apple’s benefits are usually worth their flaws to me, and Google’s usually aren’t.

Rene Ritchie:

Both Apple and Google have been stating their corporate goals with increasing frequency, including during their respective keynotes. Both are worth comparing and contrasting.

Apple’s is to make great products. Google’s is to organize the world’s data.

Expanded, that means Apple needs to enter categories where the company believes it can make a substantial contribution through really great products it can sell to a select segment of the market.

Google needs to convince everyone on earth to hand over all of their data so Google can organize it and make it accessible to everyone else on earth.

Apple funds its strategy by selling those great products at substantial margins. Google by selling advertising against, and intelligence obtained from, the data.

Everything Apple says and does on stage is designed to get you to give them money for a product, and to enjoy it so much you want to keep giving them more money for subsequent products.

Everything Google says and does on stage is to get you to give them more data, and to enjoy it so much you want to keep giving them more data.

Agreed.

Uber & Kansas

This afternoon, this image and rants from angry Kansans hit my Twitter timeline. I didn’t even realize it was a thing until today, and when I saw Uber’s announcement my immediate reaction was… I can’t believe I agree with Governor Brownback!

“As I said when I vetoed this bill, Kansas should be known as a state that welcomes and embraces innovation and the economic growth that comes with it. Over-regulation of businesses discourages investment and harms the open and free marketplace. Uber, and other innovative businesses, should be encouraged to operate, grow and create jobs here in Kansas.”

I don’t disagree. I want innovation, and I especially want it here in Kansas where I’ve lived for 31 years.

However…

I’ve read the law (it’s Kansas SB 117, and it’s just 8 pages), and nothing I’ve seen would prohibit Uber from doing business in Kansas. This isn’t a prohibition of ride sharing services. It doesn’t make unreasonable demands of drivers or the company that prohibit it from doing business. I’m not sure why I’m even going to try to defend the Kansas legislative branch, because I generally think they’re a bunch of Looney Tunes. However, I think Uber is playing social media users and the rest of the media into a false narrative.

What if, and I’m just saying, the regulations the Kansas legislators passed were actually in the best interest of consumers… but not Uber? Is that such a bad thing?

Uber, like most companies, doesn’t want any regulation of their business that doesn’t actually benefit them. But every business has to expect some level of government scrutiny, even in an otherwise conservative state like Kansas. “Regulate commerce” is sort of a fundamental reason why we elect people into government.

In this case there were actually some things that would probably have been beneficial to Uber, such as prohibiting municipalities from adding additional prohibitions or regulations. But if their goal is no regulation at all, they’re understandably annoyed with this law, and in this case they’ve decided to take their ball and go home.

Full disclosure: I’ve never used Uber’s services; I’ve never needed to. That said, I don’t have issues with them or the business. I just don’t travel enough to be without my own car, and when I do travel for work I end up renting one. I think I’ve hailed a taxi twice in my life.

Here is my (non-lawyer) understanding of what this law does:

  • It defines Uber as a “transportation network company” for the purposes of Kansas law, referred to here as a TNC.
  • It explains that TNC drivers are using their personal vehicles for ride sharing.
  • It specifically outlines that TNC drivers are not taxi services, private motor carriers, etc. Again this seems like it would be beneficial to Uber to have this codified in state law.
  • It would require Uber to register with the state as a TNC and pay an annual fee of $5,000. It would also require Uber to have an “agent” in Kansas.
  • It requires TNCs to disclose fare calculation prior to the ride, something Uber already does.
  • It requires TNCs to show the license plate and a picture of the driver in the app prior to the ride, something they already do.
  • It requires TNCs to provide an electronic receipt for the transaction, something they already do.
  • It requires TNC drivers to carry insurance, something they should already be doing. It does not require Uber to insure the drivers, but gives them the option to. It requires a $1m policy be carried by drivers. My understanding is this is the same requirement Uber already has.
  • It allows Kansas auto insurance providers to exempt coverage for TNC drivers from their auto insurance policies. I could see this being an issue, where drivers might have to obtain a different “business” policy. But this seems like the cost of doing business for the drivers.
  • It requires TNCs to conduct criminal background checks and prohibits drivers from having recent convictions for reckless driving, sexual assault, etc., which seems completely logical. It might be additional overhead, but the cost of this could be passed on to the drivers when they start.
  • It requires the TNC to have a zero tolerance policy for drivers who use drugs or alcohol while doing their jobs. This doesn’t seem like rocket science.
  • It requires drivers to only accept prearranged rides via the app; you can’t “hail” an Uber driver from the street. This is pretty much the entire appeal of Uber, and not an issue in my mind.
  • It requires the TNC to have a non-discrimination policy with respect to riders, and to make accommodations for handicapped riders. I found it shocking that Kansas would even care about something like this. I actually applaud them for it.
  • It requires the TNC to hold driver records for one year. I don’t know what Uber does in this respect now, but it doesn’t seem cumbersome given the amount of data these companies are holding already.
  • It prohibits the TNC from disclosing rider information to third parties without their consent. Again, nice.

In my mind, while all of this does indeed place restrictions on Uber doing business in Kansas, they don’t seem like unreasonable restrictions. Some of them conform almost exactly to Uber’s existing business model. But more importantly, the law actually appears to benefit and protect the consumer when it comes to security, discrimination and privacy.

It would be great to live in the libertarian utopia that many of the technorati want for their services, where innovation and market forces drive consumer protections. In the meantime, reasonable government restrictions don’t seem like a reason for Uber to pull its services completely.

I’m often critical of government attempts to protect entrenched interests, such as Tesla’s constant battle with states that prevent the company from selling cars directly to consumers because the existing dealers/franchisees don’t want that model in their states. I’m also not being critical of Uber as a service, and I have no interest in maintaining the status quo in terms of taxi cabs, etc.

I want Uber in Kansas, but at the same time I don’t think it’s unreasonable to set reasonable minimal expectations for doing business here.

RPA ‘Factory Reset’

I ran into a situation recently where the need arose to effectively “factory reset” a Generation 5 EMC RecoverPoint Appliance (Gen 5 RPA). In my case, I had one RPA where the local copy of the password database had become corrupted, but the other three systems in the environment were fine. There was nothing physically wrong with the box; I just wanted to revert it back to new, treat it like a replacement unit from EMC, and rejoin it to the local cluster.

From what I could find, EMC had no documented procedure on how to do this. So after piecing together a blog entry and an EMC Communities post (neither of which helped on its own), here it is:

  • Attach a KVM to the failed appliance and reboot.
  • Hit F2 to boot into the system BIOS (the password is emcbios).
  • Under USB settings, Enable Port 60/64 Emulation.
  • Save your settings and reboot the appliance.
  • This time hit Ctrl + G to enter the RAID BIOS.
  • Select the RAID 1 virtual drive and start a Fast Init.
  • Reboot the appliance.
  • Hit F2 to boot back into the system BIOS.
  • Under USB settings, Disable Port 60/64 Emulation.
  • Reboot the appliance and verify that no local OS is installed.
  • Insert the RecoverPoint install CD (the one you burned after downloading the ISO from EMC Support) and press Enter to start the install.
  • The installation does not require any user interaction; the appliance will reboot into a “like new” state when it’s completed.
  • Rejoin the appliance to the cluster using procedures generated from Solve Desktop. (You can ignore instructions about rezoning fibre channel connections, or spoofing WWPNs, since none of this will have changed.)

The key points here are the bits about Port 60/64 Emulation. If you don’t do this, the RAID BIOS will load to a black screen and take you nowhere. Likewise, if you leave it enabled your RecoverPoint OS may not install correctly.

Bullish on the Watch

There has been a lot of noise about the Apple Watch recently. I’m planning on getting one, and am quite bullish on their future. Here are a couple of great posts I’ve seen on it this week…

From Ben Thompson, why the future is wearables that people actually want to wear:

It’s increasingly plausible to envision a future where all of these examples and a whole host of others in our physical environment are fundamentally transformed by software: locks that only unlock for me, payment systems that keep my money under my control, and in general an adaptation to my presence whether that be at home, at the concert hall, or at work.
To fully interact with this sort of software-enabled environment, I will of course need some way to identify myself; for all the benefits of the human body, projecting a unique digital signature is not one of them.

From Greg Koenig, a class in metallurgy, based on the production line videos Apple released:

Work hardening is one of those counterintuitive industrial processes where we take an undesirable aspect of a material and Judo it into a significant improvement. As the gold is cast into ingots, the crystalline lattice structure of the alloy is nearly perfectly aligned. What Apple is about to do is introduce — in a highly controlled and precise manner — defects in that lattice (known in the art as “dislocations”). The effect is to harden the material by giving future impact events or stresses a limited number of spots on the lattice to start (technical term: nucleate), and if they do start, very little room to propagate.
You can experiment with this yourself using a metal paperclip: start bending the paperclip back and forth and you’ll notice it gets ever so slightly more difficult to bend as you repeat the process. Eventually, you will create so many dislocations in the metal that the part will fracture into two pieces, but for a short period, you will have work hardened that section to a point where some potentially desirable material changes would have taken place. Add a tremendous amount of precision, equipment capable of applying thousands of tonnes of force and replace the paperclip with a US$50k ingot of gold alloy and you’re working at Apple.

Also, I’ve revised my sizing thoughts for my future purchase. Based on the built-in sizing guide in the Apple Store app, I’ll probably end up purchasing the 42mm watch.

Apple Watch

After months of industry speculation, Apple today released pricing for the new Apple Watch. As a registered iFanboy, I’m legally required to purchase one. I wasn’t even sure when they were originally announced last fall if I’d want one, but I’ve come around.

However, I haven’t decided which one to purchase. Because I don’t have $10,000 sitting around, the “Edition” line is out. That leaves the stainless steel (starting at $549) and the aluminum versions ($349/$399) to choose from. Then it comes down to straps.

There are many obvious things to consider…

  • Material durability: I work in datacenters, I have small children, I occasionally go outside. Which one is going to hold up better under such abuse?
  • Fashion and personal preference: I like things that look nice. But I’m not flashy.
  • Face size: I have small wrists, and traditionally wear smaller-faced watches. But would I like something bigger?

Then there are the less obvious…

  • What cost am I willing to pay for a smartwatch?
  • What cost am I willing to try and convince my wife that she should let me pay for a smartwatch?
  • What cost is my wife actually willing to let me pay for a smartwatch?

Hmm… decisions, decisions.

As it is, I’m leaning towards the 38mm Stainless Steel w/ Black Sport Band.

Blog Engineering

I spend a considerable amount of time and effort considering the infrastructure and engine that powers this blog, far more than I’ve ever spent contributing actual content.

Recently I’ve been considering a move from Ghost to GitHub Pages. It’s the hip thing to do these days. Scott Lowe moved his over last month; Jay Cuthrell moved his early last year. I’m sure there have been plenty more.

I’ve been playing with it for the last 24 hours or so. I can’t seem to decide if going to all the effort is worth it. I rather like what I’m using now (Ghost): it’s pretty simple, but it has just enough features to do what I really need it to do. Spending time moving away from it seems, for me, like a solution in search of a problem. I already write in Markdown inside Ghost (it’s required), and was doing so on this site’s previous platforms, including Octopress and Second Crack.

Might just stick with what works, and find more stuff to write about…

Giving fewer fucks

Pardon my language, or don’t. Last weekend, my Instapaper Weekly email included a link to a fantastic article by Mark Manson called The Subtle Art of Not Giving a Fuck.

Take 12 minutes, and give it a read:

Most of us struggle throughout our lives by giving too many fucks in situations where fucks do not deserve to be given … Fucks given everywhere. Strewn about like seeds in mother-fucking spring time. And for what purpose? For what reason? Convenience? Easy comforts? A pat on the fucking back maybe?
This is the problem, my friend.
Because when we give too many fucks, when we choose to give a fuck about everything, then we feel as though we are perpetually entitled to feel comfortable and happy at all times, that’s when life fucks us.

As Mark points out, not giving a fuck doesn’t mean being apathetic; it means only caring about the things that really matter, and then not giving a fuck about what anyone else thinks in pursuit of that caring.

Ironically, the Instapaper Weekly email that came today included a mention about the language of Mark’s article, and an apology.

The top highlight in last week’s email contained some… colorful language, and we’re sorry if you were offended. The Weekly is an algorithmically generated newsletter based on the most popular articles & highlights saved by Instapaper users, and unfortunately we didn’t build the algorithm to filter profanity in any way. We’ve added in some filters on our end to ensure that future content remains as interesting as ever, while avoiding any potentially offensive language. Again, we are sincerely sorry if you were offended, we’re still getting the taste of soap out of our mouths!

Perhaps if the people who were offended had spent some time reading the article, they would have realized there are more important things to give a fuck about.

Avoiding potentially offensive language, what the fuck!?

Clone VM from snapshot

Have you ever wanted to easily clone a virtual machine from a snapshot, and have the clone reflect the source as it existed at that point in time, as opposed to the current state of the source? Jonathan Medd (@jonathanmedd) has a great PowerCLI script, which I found yesterday, that does exactly this.

Copy the contents of his script into a new .ps1 file, save it, and then execute the script within a PowerCLI window to add the function to your session. Then run the new function to create your clones. By default it uses the last snapshot in the chain, but you can request a snapshot by name as explained on his site.

New-VMFromSnapshot -SourceVM VM01 -CloneName "Clone01" -Cluster "Test Cluster" -Datastore "Datastore01"
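If you’re curious what’s going on under the hood, here’s a minimal sketch of the general approach (my own illustration, not Jonathan’s script, which does more): the vSphere CloneVM API accepts a snapshot reference in its clone spec, and that reference is what makes the clone point-in-time.

function New-VMFromSnapshot {
    param(
        [Parameter(Mandatory=$true)][string]$SourceVM,
        [Parameter(Mandatory=$true)][string]$CloneName,
        [Parameter(Mandatory=$true)][string]$Cluster,
        [Parameter(Mandatory=$true)][string]$Datastore,
        [string]$SnapshotName
    )
    $vm = Get-VM -Name $SourceVM
    # Default to the last snapshot in the chain unless one was requested by name
    if ($SnapshotName) { $snapshot = Get-Snapshot -VM $vm -Name $SnapshotName }
    else { $snapshot = Get-Snapshot -VM $vm | Select-Object -Last 1 }
    # Pointing the clone spec at a snapshot is what makes this a point-in-time clone
    $spec = New-Object VMware.Vim.VirtualMachineCloneSpec
    $spec.Snapshot = $snapshot.ExtensionData.MoRef
    $spec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
    $spec.Location.Datastore = (Get-Datastore -Name $Datastore).ExtensionData.MoRef
    $spec.Location.Pool = (Get-Cluster -Name $Cluster | Get-ResourcePool -Name 'Resources').ExtensionData.MoRef
    # Create the clone in the same VM folder as the source
    $vm.ExtensionData.CloneVM($vm.ExtensionData.Parent, $CloneName, $spec)
}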

Bird Bath

This is the turkey brine recipe I’ve been using, adapted from this one by Traeger. The first time I used it I followed their instructions exactly, but I’ve since boiled it down to what I consider the basics.

  • 20 cups (5 quarts) of water
  • 1–1/2 cups of kosher salt
  • 2 cups of bourbon
  • 2 cups of maple syrup
  • 1 tablespoon whole cloves
  • 2 tablespoons whole peppercorns
  • 6–8 cracked bay leaves

Mix everything in a big pot. I start with the water and salt on high heat to get the salt broken down, and then add the other stuff. Heat until just shy of a roaring boil. Let it cool off; you can add ice if you want to speed up the process, but keep in mind it dilutes the brine.

Submerge the turkey and refrigerate for 12 to 24 hours. (Don’t forget to remove the neck and giblets from inside the turkey!)

When ready to cook, remove and rinse. Discard the brine since it’s a giant biohazard from having a dead animal float around in it for a day.

Slice up oranges and onions and insert them into the belly of the beast. Salt and pepper the exterior.

I smoke the turkey for 2 hours on the Traeger, and then cook at 350F for about 3 hours. Traeger has you baste the bird with melted butter during cooking, but I have relatives with milk allergies, so that doesn’t happen. Make sure the thickest part of the bird reaches 165F with your meat thermometer. (Don’t rely on the little plastic popper.)

Remove from heat and let it rest for an hour before you do anything with it. When ready, remove the stuffing and discard, then slice and dice. Try not to eat all of it before you serve it to your guests.

Happy turkey day.

iCloud Photo Library, continued

My second day transferring my iPhoto library to iCloud Photo Library seems to be going very well. The “optimize storage” feature on the iOS devices is going to save users a ton of space.

Yesterday when I posted my last entry I had a 16GB iPad completely full (roughly 7GB of which was photos). When I returned, all the photos had been uploaded to iCloud, freeing up 5GB of space. No matter what I throw at this (and I have about 19GB of images in iCloud now), the devices sit at around 2GB utilized for photo storage.

When you access photos further back in the catalog that aren’t currently on the device, they’re retrieved from the cloud in full resolution.

I’m only about 1/5th of the way through my library. I’ve been doing it in chunks as I have time, because during the upload process I tend to fully saturate my 5Mbps upstream home connection. (At 5Mbps, the roughly 19GB uploaded so far works out to more than eight hours of solid uploading.)

If you’ve not turned on iCloud Photo Library yet, even if you don’t intend to do as I’m doing and dump everything into it, you’re really missing out.

From iPhoto to iCloud Photo

When I saw the new iCloud Photo Sync demo at WWDC, I was in love.

Photo storage and syncing has been a struggle of mine for a while. I’ve bounced between external drives (which make accessibility difficult when I’m not at home) and local storage (which wastes expensive MacBook SSD space) … but I’ve never been happy. I’ve switched between Lightroom and Aperture for my “professional” images (AKA those taken with my Nikon DSLR) and mostly used iPhoto for my iPhone captured images.

The other issue is that 16GB iOS devices fill up quickly these days. To save space, I would regularly sync my devices back to iPhoto and then delete the photos from my phone, but again, this made accessing older photos difficult on the go.

Between iPhone cameras getting good enough to rival my 8-year-old Nikon D200 and my growing fatigue with paying for Adobe software updates, I eventually merged everything into iPhoto.

Now, with iOS 8.1, the iCloud Photo Sync beta rollout has begun, but only on iOS devices and via the iCloud website. The previously announced Mac app is slated for early 2015. But I want all my stuff in Apple’s cloud now, accessible on every device.

I figured out how:

  • Make sure you have iCloud Photo Sync enabled on your iOS devices.
  • Open iPhoto, open Finder > AirDrop on your Mac.
  • Open Photos on your iOS device.
  • Drag and drop photos from iPhoto to your iOS device of choice via AirDrop.
  • This triggers automatic sync to iCloud which starts dropping optimized versions all around the place.

I’m currently chugging back through May 1 of this year. I only stopped there because the photos filled up my iPad, and I want to see how it smashes the used space back down after the upload finishes. I could keep going with my iPhone 6, which has another 40GB free, but this is enough experimentation for now.

I’ll also probably have to increase my 20GB iCloud plan to keep going beyond what’s in there now. Once I’ve got things moved off, I’ll be able to move my local copies back to external storage, and then at some point, once the Mac Photos app is released, figure out how I want to deal with my local copies again.

I think my iPad will become central to my future editing workflow. I’ve long owned the camera connection kit, but never used it. Now it’s going to become the primary injection point for new images taken with the DSLR, and for editing ones taken with the iPhone. (Especially now that Pixelmator for iPad is here!)

iPhone 6, one month later

My iPhone 6: 4.7”, silver/white, 64GB, AT&T. This is the first iPhone where I didn’t open the box and immediately feel it was the best one ever. I almost didn’t even order one. The 5 was fine.

I’ve owned it a month now. Originally I felt like I was going to drop it every time I tried to grip it (using my smaller than normal man-hands), and that panic led me to the Apple Store to pick up the black leather Apple case. The case gave me a safety blanket and the ability to learn to adapt my grip; however, last Thursday I took the case away. It’s been a week since I removed the training wheels.

I love this phone; it feels great. The size is perfect. The rounded corners feel great when holding it for long periods of time. I’m also past fussing about the camera bulge. I worried it’d get scratched; now, in Apple(Care) and sapphire crystal, I trust.

I still find myself adjusting my hands a lot more than on the 5 or 3G/4 to reach the entire screen, but I’m getting used to it.

iOS 8.1 has smoothed out the major issues I was having with the software. Battery life has been awesome, far superior to the 5. The ability to use the higher capacity chargers for quick refills is great. Apps are now being updated to take advantage of the increased real estate of the larger screen, but there are still some stragglers. (I’m looking at you, OmniFocus.)

Overall, a solid purchase.

Encryption as a right?

Law enforcement officials usually play on our fears whenever their powers are limited, but those limitations are what keep our society from being a police state. The Supreme Court’s ruling in Miranda v. Arizona in 1966 led to catastrophic predictions that many criminals would go free and society would be harmed if all arrested people were informed of their rights. Didn’t happen.
That’s what’s happening here. Law enforcement types are suggesting that Apple and Google are making their products safe for child molesters. It’s the same old tired “good people have nothing to hide” argument against privacy rights that’s been carted out for years.

You have the right to remain encrypted.

View guide, ASLR, no more

A few months ago I wrote about the VMware View optimization script breaking Internet Explorer and Adobe Acrobat through the addition of a registry entry that disabled Address Space Layout Randomization (ASLR):

ASLR was a feature added to Windows starting with Vista. It’s present in Linux and Mac OS X as well. For reasons unknown, the VMware scripts disable ASLR.
Internet Explorer will not run with ASLR turned off. After further testing, neither will Adobe Reader. Two programs that are major targets for security exploits, refuse to run with ASLR turned off.
The “problem” with ASLR in a virtual environment is that it makes transparent memory page sharing less efficient. How much less? That’s debatable and dependent on workload. It might gain a handful of extra virtual machines running on a host, and at the expense of a valuable security feature of the operating system.
For some reason, those who created the script at VMware decided it was best practice for it to be disabled.

At the VMware Partner Technical Advisory Board on EUC last month, I pointed this out to some VMware people and sent a link to the blog entry.

Over the weekend I got a tip from Thomas Brown over at Varrow:

Today I had an opportunity to download the updated scripts (available here) and was very pleased to see:

 rem *** Removed due to issues with IE10, IE11 and Adobe Acrobat 03Jun2014
 rem Disable Address space layout randomization
 rem reg ADD "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management" /v MoveImages /t REG_DWORD /d 0x0 /f

Success!

As always, please review the rest of the contents to make sure the changes that the script makes are appropriate for your environment.
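If you already deployed desktops with the older script, it’s easy to check a guest for the leftover value. Here’s a quick sketch using stock PowerShell, run inside the guest (the cleanup logic is mine, not part of the VMware script):

# Check whether the old optimization script left the ASLR kill switch behind
$key = 'HKLM:\System\CurrentControlSet\Control\Session Manager\Memory Management'
$aslr = Get-ItemProperty -Path $key -Name MoveImages -ErrorAction SilentlyContinue
if ($aslr -and $aslr.MoveImages -eq 0) {
    # Delete the override; Windows returns to its default ASLR behavior after a reboot
    Remove-ItemProperty -Path $key -Name MoveImages
    'MoveImages removed. Reboot the guest for ASLR to take effect.'
} else {
    'ASLR is not being disabled via MoveImages on this system.'
}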

Microsoft said to announce job cuts as soon as this week

Engineering teams have traditionally been split between program managers, developers and testers. Yet with new cloud methods of building software, it often makes sense to have the developers test and fix bugs instead of a separate team of testers, Nadella said in the interview last week. Some of the cuts will be among software testers, said one of the people.

I’m not a developer, but how does “the cloud” change the dev/test process in this way?

And as Rick Scherer pointed out this morning:

(BTW, none of this has anything to do with CloudShark, the product, which is actually a pretty neat way of storing and sharing packet captures.)

Cisco Meraki, CMNA

Wednesday I had the chance to spend the whole day soaking in knowledge. Always a welcome event. This time it centered around Cisco Meraki.

As an employee of a Cisco Premier partner (AOS), and a current CCNA, I was able to attend this one day boot camp on Meraki and earn their Certified Meraki Networking Associate (CMNA) designation.

Other things you get for attending the class:

  • CMNA polo shirt
  • MX60 security appliance
  • MS220–8P switch
  • MR26 wireless access point
  • Lunch

Lunch was delivered that day. The shirt and kit get shipped to me, and I can’t wait to get my whole home network set up on it and really start playing.

July 2014, on Twitter

I’ve been trying to determine the best way to link the blog and my Twitter account together. Obviously I tweet links to much of what I post here, but I tweet far more often than I blog. There are usually lots of good nuggets that I find, either links to other blogs, KB articles, or even just retweeting insights.

As an experiment I’m going to start putting together a little digest with some extended comments from me. Sometimes they’ll be the most popular things I’ve shared, sometimes the things I’ve found the most interesting, and sometimes just my failed attempts at being funny. So here we go.

Take note of the Dell PERC H310. This is no doubt in response to the VSAN Day from Hell that one unlucky user experienced a few weeks ago. The low queue depth on these cards prevents VSAN from performing as it should. It was good of VMware to yank them to prevent further issues, but frustrating that it wasn’t accounted for prior to rollout.

Kids say the darndest things.

I failed. Second attempt is August 2.

I do what I can to raise my children right.

It’s actually worked out really well, so far. I need to get into setting up LDAP and vCenter authentication for it. One less reason to have anything Windows running in the home. I also switched the home firewall from Untangle over to pfSense. This will change again soon once I get my Cisco Meraki firewall, switch and access point for the house. (Doing partner level CMNA training this Wednesday.)

This shit still has me pissed off, and logged off of Facebook on most of my devices.

Any encryption is good, but I’m not exactly jumping up and down with excitement about it.

Last but not least, what a mess this upgrade was. I won’t get into details since it involves a customer environment, but it was a stressful couple of days: after the FI was totally bricked, we discovered Smartnet hadn’t been renewed on their UCS environment as expected. Our company was able to get it resolved with Cisco, and they had a new one in the rack and running less than 24 hours later, but it still wasn’t fun.

Cisco Jabber & Persona Management

I just finished with a customer issue: they had deployed Cisco Jabber along with VMware View, using Persona Management and floating desktops set to refresh at logoff. Much to their annoyance, users had to reconfigure their Cisco Jabber client with the server connection settings after logging back in to the desktops, and any client customizations were lost.

After looking into this, what appeared to be happening was that the Jabber configuration XML files were not being synced down to the local PC before the Jabber client launched, causing the settings to default back to an unconfigured state. Even though the configuration data stored in jabberLocalConfig.xml was saved to the Persona Management share, it never had a chance to load before it was overwritten.

The issue was resolved by adjusting Persona Management group policies to precache the settings stored on the persona share to the virtual desktop before completing login.

Modify the Persona Management GPO setting “Files and folders to preload” to include the following directory:

AppData\Roaming\Cisco\Unified Communications\Jabber\CSF

Server settings and custom adjustments to the client are now maintained across desktop sessions. WIN!
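If you want to verify the fix on a desktop after login, here’s a quick spot-check in PowerShell (my own snippet; the exact location of jabberLocalConfig.xml under CSF may vary by Jabber version, so it searches the whole folder):

# After login, confirm Persona Management preloaded the Jabber config before the client launched
$csf = Join-Path $env:APPDATA 'Cisco\Unified Communications\Jabber\CSF'
$config = Get-ChildItem -Path $csf -Recurse -Filter 'jabberLocalConfig.xml' -ErrorAction SilentlyContinue
if ($config) {
    $config | Select-Object FullName, LastWriteTime
} else {
    'jabberLocalConfig.xml not found. The preload policy may not be applying.'
}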

VCAP-DTA Exam Experience

Yesterday I sat for the VMware Certified Advanced Professional in Desktop Administration exam. While I would love to tell you that I passed, sadly it seems I will be sitting for the exam again soon.

For some reason I thought it would be a good idea to take the exam at 8AM on a Monday morning, and then not study. Add in staying up late on Sunday night to watch World War Z on Netflix and you’ve got a recipe for a rough morning.

But enough excuses…

I did read the exam blueprint; as with every certification exam, this is the best starting place to find out what will be covered. In order to save myself some time I’m going to plagiarize what I wrote a few months ago after taking the VCAP-DCA exam, to help explain the format of the test.

For the uninitiated, the test is unlike any other exam in the VMware portfolio, and unlike any other exam I’ve taken for any other certification. It is 100% lab based. You have remote access to a VMware vSphere 5.0 environment, with a vCenter, two hosts, a collection of virtual machines, and pre-provisioned storage.

In other VMware exams, you’re given 60–70 multiple choice questions to regurgitate answers to. In the VCAP, you are given 26 different “projects” you have to work your way through. I say projects because each of the 26 will vary in length and have multiple component problems to solve. Some may be straightforward, some far less so.

In the case of this exam, the environment has more hosts and a newer version of vSphere. There are also 23 projects instead of 26. The rest of it still stands.

You start with 180 minutes. Halfway through I thought I was making great progress, but then I ran out of time. I did feel like I was spinning my wheels a bit with lag back to the environment from the testing center, and there was a lack of clear direction about the environment and in some of the questions.

The last time I totaled up the number of View deployments I’ve either deployed soup to nuts, or done significant upgrades and management of since 2009, it was somewhere around a couple dozen. Even with that experience, there were a couple of things on the exam that I’ve never had to do in my work, and plenty of things I was expecting to have to do that never came up. Overall though, it covered a pretty good swath of knowledge.

I’ve rescheduled for Saturday, August 2 at 10:30AM, not because I wanted to wait this long to retry but because that was the first time they had an opening that fit with my schedule.