AT&T Mobile Share “Advantage”

You know you have a problem when you get excited about plan changes on your cellular provider. Yesterday, AT&T gave me a problem.

At first glance, no more data overages, higher caps, and reduced pricing tiers look like good news all around, but is that really true? After looking at the details of these new AT&T data plans, I’m less than impressed. They’ve upped the per-device access charge from $15 to $20.

Right now I have the $100 plan for 15GB, plus three devices, for a total of $145. Under the new plan, if I move to the similar 16GB plan, the base price is $90, but I’m now paying $60 in per-device charges, for a total of $150. (+$5)
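For the record, the arithmetic behind that comparison (prices as quoted above):

```python
# Current Mobile Share plan: $100 base for 15GB, three devices at $15 each
current_total = 100 + 3 * 15   # $145/month

# New Mobile Share Advantage 16GB plan: $90 base, $20 per-device access charge
advantage_total = 90 + 3 * 20  # $150/month

difference = advantage_total - current_total  # +$5/month
```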

Even going from 15GB down to the new 10GB plan would result in a savings of only $5, at the loss of 5GB of data.

Maybe.

It’s still a bit confusing. The press release says “All Mobile Share Advantage plans also have an access charge of $10 — $40 a month per device not included in prices shown above,” but then later, “customers will pay a $20 access charge per smartphone a month for Mobile Share Advantage.”

My hope/guess is that it depends on which plan you pick; at least, that’s how it works on the current setup. I believe the current $15 per-device charge jumps to $20 or $25 on some of their current plans. So, if it continues to be a graduated scale, the new 16GB plan may actually be a money saver, but until their pricing calculator shows up when the new pricing goes live on Sunday, we probably won’t know.

But taking the “clear” statement at face value, it doesn’t look like a great deal. For now, they get a splashy headline. Verizon, which has a similar plan, charges a separate fee for “unlimited” reduced-bandwidth data instead of an overage, and this increase looks like a clever way of hiding that same fee.

If I were really concerned about overages I’d probably just do it, but I never go over.

Cisco to audit code in wake of Juniper backdoor

In the wake of Juniper’s announcement that an internal code audit had uncovered two backdoors in the operating system used in its NetScreen firewalls, Cisco has announced that it is taking similar steps to audit its own code.

In a blog post by Anthony Grieco, Senior Director of the Security and Trust Organization within Cisco, the company explains that although its normal development practices should prevent unauthorized code from sneaking into its products, no process can eliminate all risk. The company will be conducting penetration testing and code reviews.

The company also says there has been no indication that any code has been compromised, and that the review was launched as a proactive effort in the wake of Juniper’s bulletin, not in response to any outside request.

It’s generally acknowledged by security experts that, given the sophistication of such attacks against companies like Cisco and Juniper, state agencies are likely responsible for the unauthorized code: the Chinese military, the US’s NSA, or the UK’s GCHQ. The NSA had an operation exposed by Edward Snowden in which it intercepted Cisco products mid-shipment, destined for other countries, to install backdoor code directly into those routers, firewalls, and other devices.

However, it may also be less sophisticated attackers (or governments) who are using existing backdoors. Matthew Green, a cryptographer and professor at Johns Hopkins University, has theorized that the Juniper VPN decryption vulnerability may have been the result of Juniper’s implementation of an altered version of the NSA’s backdoored Dual EC random number generator. As Green explains, encryption depends on unpredictable random number generators, and the Dual EC method, recommended by the National Institute of Standards and Technology (NIST) since the early 2000s, was discovered by researchers to include a (probably NSA-inserted) weakness that allowed an attacker to decrypt intercepted traffic.
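This is not Dual EC itself, but a toy sketch of the underlying problem: if an attacker knows a generator’s secret internal state, every “random” key derived from it is reproducible, and the traffic it protects can be decrypted. (The seeded generator and XOR cipher here are purely illustrative.)

```python
import random

def derive_key(seed: int, nbytes: int = 16) -> bytes:
    # Deliberately weak: anyone who knows the seed (the analogue of the
    # backdoor constant) can regenerate the exact same "random" key.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(nbytes))

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stream "encryption"; symmetric, so the same call decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"VPN session traffic"
ciphertext = xor_cipher(derive_key(1337), plaintext)

# An eavesdropper who knows the generator's secret recovers everything.
recovered = xor_cipher(derive_key(1337), ciphertext)
```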

Juniper utilizes Dual EC, but in a non-standard way intended to remove the (NSA) backdoor. However, researchers who decompiled Juniper’s firmware packages compared the differences in the compromised code and found that the altered sections changed the number generator so that anyone with knowledge of the affected code could once again decrypt traffic.

In effect, the attackers used an existing (closed) door to open a new one for their own use.


Originally published at www.petri.com on December 29, 2015.

Nutanix files for IPO

Nutanix announced on Wednesday that it has filed a Form S-1 with the SEC for a proposed IPO.

The number of shares being offered and the price of the offering have not yet been determined, although the company says it intends to raise a maximum of $200 million; Nutanix will be listed as “NTNX” on NASDAQ.

Nutanix specializes in hyper-converged infrastructure that merges the traditional silos of the physical server, virtualization hypervisor and storage into one integrated solution. It competes in that space with companies like SimpliVity, EMC, and VMware’s VSAN.

Their solution comprises two product families, Acropolis and Prism, and is delivered on commodity x86 servers. Acropolis is their in-house hypervisor software, a unique selling point in this market, where most hyper-converged providers resell VMware’s ESXi platform or, in the case of VMware VSAN, are delivered by VMware itself. Nutanix has always allowed, and still allows, customers to use the VMware hypervisor instead of Acropolis if they choose. Prism is their virtualization and infrastructure management platform.

In addition to selling their own Nutanix branded systems (built by Super Micro), they also partner with Dell, who resells the Nutanix platform as their “XC-series” systems, built on Dell hardware. Dell recently announced its intention to acquire EMC, which may sour that partnership in the future.

According to Nutanix, as of October 31, 2015, they have 2,100 end-customers, including enterprise customers like Activision Blizzard, Best Buy, Kellogg, Nasdaq, Nintendo, Toyota, Yahoo, and the U.S. Department of Defense.

Nutanix, which began sales in 2011, has posted revenue growth over the last three years, growing from $6.5 million in total revenue in 2012 to $241.4 million for 2015. Nutanix has hired an impressive number of virtualization industry big-wigs, currently employing more (expensive) VCDX-certified engineers than any other company, and has invested heavily in research & development and marketing. Their total headcount currently stands at 1,368. However, as a result of these investments, Nutanix also posted a loss of $126.1 million for 2015.

In their Series E funding round last August, the company raised $140 million on a $2 billion private valuation. As of October 31, 2015, they had an accumulated deficit of $312.0 million.


Originally published at www.petri.com on December 23, 2015.

NetApp to purchase SolidFire

NetApp, Inc. on Monday announced its intent to acquire the Boulder, Colorado based all-flash array (AFA) vendor SolidFire, for $870 million in cash.

According to the announcement, NetApp intends to incorporate SolidFire’s products into NetApp’s existing product lines. Following the close of the transaction, SolidFire CEO Dave Wright will lead the SolidFire product line within NetApp.

NetApp will continue to push its existing all-flash offerings in the three largest AFA market segments, with its existing lines targeting the enterprise and SolidFire focused on next-generation cloud and “webscale” architectures.

CRN had reported earlier in the day that an announcement was coming, and in their reporting said that Cisco and Samsung had also been interested in picking up SolidFire. SolidFire had raised around $180 million in funding since it was started in 2009 and launched its first product in late 2012.

SolidFire competes in the AFA segment against other market leaders like EMC’s XtremIO, Pure Storage, and Tegile. The company’s main selling points are a robust storage quality of service (QoS) offering that allows service providers to carve up and guarantee a level of performance for customers, an application programming interface (API) that enables administrators to program against or script any functionality within the system, and a scale-out architecture that uses traditional iSCSI and Ethernet.
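The QoS idea can be sketched roughly as follows; the field names and clamping logic here are my illustration of the concept, not SolidFire’s actual API. Each volume gets a guaranteed floor and a hard ceiling, and the system delivers something in between depending on demand and available capacity:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    # Illustrative per-volume QoS settings, in IOPS
    min_iops: int   # guaranteed floor, even under contention
    max_iops: int   # hard ceiling

def delivered_iops(qos: VolumeQoS, demanded: int, available: int) -> int:
    """Clamp a volume's delivered IOPS between its floor and ceiling."""
    granted = min(demanded, qos.max_iops, available)
    # The floor only matters up to what the volume actually asked for.
    return max(granted, min(qos.min_iops, demanded))

qos = VolumeQoS(min_iops=500, max_iops=15000)
capped = delivered_iops(qos, demanded=20000, available=50000)   # hits the ceiling
floored = delivered_iops(qos, demanded=2000, available=100)     # floor holds
```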

NetApp expects the transaction to be completed during the fourth quarter of its fiscal year, 2016.


Originally published at www.petri.com on December 21, 2015.

Juniper finds backdoor exposing encrypted VPN traffic

In a security advisory posted late Thursday, Bob Worrall, Juniper Network’s Chief Information Officer, announced that the ScreenOS software used on the company’s NetScreen firewalls contains an unauthorized backdoor allowing third parties to potentially monitor encrypted VPN traffic.

“During a recent internal code review, Juniper discovered unauthorized code in ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen devices and to decrypt VPN connections. … At this time, we have not received any reports of these vulnerabilities being exploited,” Worrall wrote.

Juniper says that ScreenOS versions 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are affected and should be upgraded immediately to either 6.2.0r19 or 6.3.0r21, as there are no workarounds to disable access. Juniper also says it has no evidence that products running its Junos operating system are impacted by this breach.

In another knowledgebase article, Juniper explains what type of logged event may appear on a compromised system, but warns that a skilled attacker would likely be able to cover their tracks and remove the events from the logs.

While it’s not clear who is responsible or how this backdoor was added to the code, many security experts point to a 2013 article published by Der Spiegel that said an NSA operation called FEEDTHROUGH worked specifically against Juniper firewalls and gave the agency persistent backdoor access.

The NSA also had an operation exposed by Edward Snowden in which they intercepted Cisco products, mid-shipment, that were destined for other countries, to install backdoor code directly into those routers, firewalls, etc. However, unlike that operation, if the NSA were to be responsible for the Juniper backdoor, this exploit would be present on any ScreenOS hardware around the world, including within the United States.


Originally published at www.petri.com on December 18, 2015.

Pearson VUE’s credential management system has been compromised

Pearson VUE, which manages the certification programs for a large number of IT vendors like Cisco and EMC, has announced that its credential management system has been the successful target of an attack. The attackers were able to compromise the system and access information related to a subset of users.

The company says that the hack is limited and does not impact the integrity of the testing system, K-12 assessment testing, or other systems. The company is still assessing the scope of the damage, but they do not believe that vital information such as Social Security or credit card payment information was compromised; Pearson VUE is working with law enforcement and forensic experts to assess the damage.

While the investigation progresses, access to the credential system is offline.

Various sources have reported that many of the credential management systems that Pearson VUE manages have been offline for the last few days, with the company finally making an announcement on Monday.

In a blog post, Cisco (which uses the PCM platform to track members of the CCNA, CCNP, and CCIE programs) explains it believes the leakage is limited to the holder’s name, mailing address, email address, and phone number.

While you may see reports of additional types of personal information being potentially compromised on the PCM platform, we have been informed that this is not the case with respect to the Cisco certification user profiles.

— Chris Jacobs, the director of Cisco’s certifications program.

Testing for impacted vendor programs, like Cisco’s, will continue while access to the tracking system is down. Pearson VUE has not given any timeline for when access to the tracking system will be restored; the company is offering affected candidates identity protection for one year at no cost.


Originally published at www.petri.com on November 25, 2015.

EMC announces Data Lake 2.0 strategy

Data Lake 2.0 is the next generation of the EMC Isilon portfolio. Isilon is EMC’s scale-out network attached storage product. Traditionally, Isilon OneFS runs on physical nodes, with the cluster scaling from roughly 30 TB of raw capacity, all the way up to 50 PB. The nodes are all connected across a redundant, private, Infiniband network. But next year, EMC will offer two more ways to utilize Isilon. In addition to the traditional setup, EMC will offer “Cloud Pools” and “IsilonSD Edge” products.

Software Defined

The IsilonSD Edge product is the equivalent of an Isilon virtual edition. Instead of running Isilon’s OneFS operating system directly on EMC-provided hardware, customers can use their own physical boxes, loaded up with disk, and run the Isilon software as multiple instances inside VMware ESXi.

There are some restrictions, though; chiefly, the ESXi host systems must meet strict specifications. EMC will leverage the hardware compatibility list used by VMware’s VSAN product to determine what counts as a supported IsilonSD configuration. Each IsilonSD virtual node will have VMDK files running on the local storage of the ESXi hosts. Shared storage (even storage provided by another EMC system like the VNX or VMAX) is not supported. Even though IsilonSD and VSAN share the same HCL, it should be noted that IsilonSD does not leverage VSAN’s technologies in any way. The VSAN team has done extensive work testing various storage controllers, solid state drives, and hard drive brands, so it makes sense for EMC to lean on their work.

IsilonSD is intended for small, remote, or branch offices, and it won’t scale out like traditional Isilon. Like traditional Isilon, IsilonSD requires at least three instances to create a cluster, but it is limited to a maximum of six VMs. Traditional Isilon can scale to 144 nodes (the largest Infiniband switch on the market has 144 ports). IsilonSD is also limited to 36TB of raw capacity in the cluster.
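The stated limits (three to six virtual nodes, 36TB raw) lend themselves to a quick validation sketch; the function name and error strings are mine, not EMC’s:

```python
def validate_isilonsd_cluster(node_count: int, raw_capacity_tb: float) -> list:
    """Check a proposed IsilonSD Edge cluster against the published limits."""
    errors = []
    if node_count < 3:
        errors.append("at least three virtual nodes are required to form a cluster")
    if node_count > 6:
        errors.append("clusters are limited to a maximum of six VMs")
    if raw_capacity_tb > 36:
        errors.append("clusters are limited to 36TB of raw capacity")
    return errors

ok = validate_isilonsd_cluster(3, 36)        # valid: at the limits, not over
too_big = validate_isilonsd_cluster(7, 40)   # invalid: too many nodes, too much disk
```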

IsilonSD comes in two licensing models: a fully licensed (and, importantly, EMC-supported) configuration, and a free edition.

Cloudy, with a chance of RAIN

CloudPools allows administrators to leverage off-site “cloud” disk targets as storage for their files. Traditional Isilon has four tiers of disk/node types: high-performance all solid-state S-nodes, general-performance SSD/disk X-nodes, capacity-focused disk-based NL-nodes, and high-density deep-archiving HD-nodes. Now you can think of your cloud storage target as the super-cold tier for your files. CloudPools uses rules to determine what type of data, or at what age, files are moved between tiers or off-site.

End users will have no knowledge of where the files come from, but may see the latency associated with retrieving files from off-site instead of from disks in the company data center. Administrators don’t have to manually move data between on-site tiers and the cloud, as the tiering is done automatically through pre-set policies. Files sent to the cloud are encrypted both in transmission and at the target cloud, and decrypted as they arrive back on the on-site Isilon cluster.
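A hypothetical sketch of such an age-based tiering policy; the thresholds and the tier mapping are invented for illustration, not CloudPools defaults:

```python
from datetime import datetime, timedelta

# Invented thresholds: files untouched this long move to colder tiers.
TIER_RULES = [
    (timedelta(days=365), "cloud"),     # super-cold: off-site cloud target
    (timedelta(days=90), "nl-nodes"),   # capacity tier
    (timedelta(days=0), "x-nodes"),     # general performance tier
]

def target_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick the coldest tier whose age threshold the file has crossed."""
    age = now - last_accessed
    for threshold, tier in TIER_RULES:
        if age >= threshold:
            return tier
    return TIER_RULES[-1][1]

now = datetime(2015, 11, 17)
stale = target_tier(datetime(2014, 1, 1), now)    # untouched for ~2 years
fresh = target_tier(datetime(2015, 10, 1), now)   # accessed last month
```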

CloudPools will be able to leverage both public and private cloud offerings. Supported public clouds include Amazon Web Services S3 and Microsoft Azure; support for VMware vCloud Air is planned for a future release. Private cloud offerings are limited to EMC’s Elastic Cloud Storage solution.

All of this forms EMC’s “Data Lake” — an edge to cloud file storage strategy. IsilonSD Edge puts big data in remote locations, and makes it easily accessible and consumable to end users, with support for Isilon’s SyncIQ replication technology to keep a copy back in the data center for long term archiving, backup and disaster recovery. From there, data can be moved out to a cloud provider as files age out, to keep the speedy access available for more frequently used data.

EMC IsilonSD Edge, Isilon CloudPools, and the Isilon OneFS.Next version that will enable these functions are slated for availability in early 2016.


Originally published at www.petri.com on November 17, 2015.

IBM acquires Gravitant to expand hybrid cloud offering

IBM announced that it has acquired Austin, Texas based Gravitant, a company that develops software to enable businesses to manage and purchase cloud services from multiple suppliers, and to create mixed environments of private and public clouds.

Gravitant’s software, called cloudMatrix, allows users to quickly compare capabilities and pricing from multiple vendors, and then provision those services, all through a single console. Gravitant competes with companies like RightScale and Enstratius. Like most incumbent technology vendors, IBM has been trying to boost its cloud services through acquisitions, purchasing SoftLayer, Cloudant, and Cleversafe in the last two years. It also purchased The Weather Company last week for $2 billion.

But the purchase of Gravitant drives home the point that the incumbent vendors still believe that a hybrid cloud approach is the right choice for most enterprise customers.

The reality of enterprise IT is that it is many clouds with many characteristics

— Martin Jetter, IBM’s SVP of Global Technology Services

cloudMatrix can also be used by solution providers, and IBM plans to utilize the software in their own SaaS offerings.

Gravitant was founded in 2004 as an IT consulting company, but pivoted in 2009 to become a product company. cloudMatrix was their first product, released in late 2011. Terms of the deal were not disclosed.


Originally published at www.petri.com on November 5, 2015.

VCDX defense process drops troubleshooting questions

Those looking to obtain the highest certification level in VMware’s portfolio should take note: the company announced in a blog post by Chris Colotti that it has made an adjustment to the process.

Gone is the final part of the defense, where in the last 15 minutes candidates would be given hypothetical troubleshooting scenarios. Instead, the time has been assigned to the ad-hoc design session. Additionally, the VCDX-Cloud and VCDX-DT scenario times have both been increased to 45 minutes to match VCDX-DCV and VCDX-NV timelines, for consistency.

Colotti, who is currently a Principal Architect and VCDX Evangelist at VMware, explained that these changes have long been under discussion with the VCDX Advisory Council and many of the current VCDX panelists.

Previously, candidates would pre-submit one of their own designs to be vetted and, if invited, defend it in front of a panel of veteran VCDX holders, followed by the ad-hoc design session and then the troubleshooting scenarios. All of this comes after the candidate obtains multiple prior VMware certifications (VCP and VCAP/VCIX). The defense can only be done at pre-scheduled events and usually involves a trip to Palo Alto or another VMware corporate location. The process is somewhat unique in the industry.

Reaction from current VCDX holders in the community has been mixed on social media, but trending mostly positive.

Last week VMware announced a new crop of VCDX holders, bringing the total up to 213. The next defense is November 9, for VCDX-NV candidates, and February 15, for VCDX-DCV. Applications for the February defense are due by December 11, 2015.


Originally published at www.petri.com on November 4, 2015.

Hewlett Packard Enterprise goes public, splitting HP into two companies

On Monday, Hewlett Packard Enterprise (HPE) Chief Executive Officer Meg Whitman, along with partners and customers, rang the opening bell at the New York Stock Exchange, and with it the long-planned separation of HP’s consumer and enterprise businesses became official.

Going forward, HPE will focus on infrastructure, servers, networking, services, software, and financial services. HPE projects annual revenue for the new company to be $53 billion; HP Inc will sell personal computers and printers, and be run by Dion Weisler.

Weisler was previously the Executive Vice President of Printing & Personal Systems under the combined company, where Whitman was CEO.

The split, originally announced back in October of 2014, is expected to cost nearly $2 billion. HP has also shed nearly 50,000 jobs through the process; since Whitman took over as CEO in 2011, HP has cut nearly 85,000 from its workforce. Since the announcement, HP stock lost nearly one-third of its value, but on the first day of trading HP Inc (which now trades as HPQ) jumped 13 percent, while HPE dropped 1.6 percent.

In an interview with Re/code, Whitman said HPE would have around $5.5 billion in cash on hand, which she said the company plans to use for strategic purchases, and cited the recent $3 billion purchase of Aruba Networks as an example of the kind of acquisition she wants to make.


Originally published at www.petri.com on November 3, 2015.

Flash zero-day, again

Symantec has confirmed the existence of a new zero-day vulnerability in Adobe Flash which could allow attackers to remotely execute code on a targeted computer. Since details of the vulnerability are now publicly available, it is likely attackers will move quickly to exploit it before a patch is issued.

I have been limiting my exposure to Flash for a while.

  • I use Safari as my daily browser. Flash is not installed directly on my Mac.
  • For anything that needs Flash, I use Chrome, where it’s integrated with the browser and automatically updated by the Chrome update process. It’s set in “Click to Run” mode, so it only activates when I let it.
  • In my Windows 10 VM, Flash is completely disabled in Microsoft Edge and Internet Explorer. It does have Java enabled, but for reasons beyond my control. (EMC and Cisco)

Now I just need VMware to quit writing every new web interface as Flash dependent.

Microsoft said to announce job cuts as soon as this week

Engineering teams have traditionally been split between program managers, developers and testers. Yet with new cloud methods of building software, it often makes sense to have the developers test and fix bugs instead of a separate team of testers, Nadella said in the interview last week. Some of the cuts will be among software testers, said one of the people.

I’m not a developer, but how does “the cloud” change the dev/test process in this way?

And as Rick Scherer pointed out this morning:

(BTW, none of this has anything to do CloudShark, the product, that is actually a pretty neat way of storing and sharing packet captures.)

Huawei is calling it quits in the United States

Probably because most US businesses were not too excited to base their infrastructure on the technology of a company that stole Cisco code and is run by former members of the Chinese military. Not too long ago, Sprint and SoftBank had to agree to a request by the US intelligence community to rip and replace any Huawei equipment on the Sprint network as a condition of their upcoming merger.

I’ve actually only heard firsthand of one of my customers with Huawei devices in production, and oddly enough it was their storage system (which until that point I didn’t even know Huawei had a hand in). Unfortunately, I didn’t get a chance to see what it looked like.

Oracle VM 3

Oracle VM 3 improved a lot, they are not close to Microsoft or VMware, but it is pretty good if you are not trying to do dramatic things like moving virtual machines around.

Gartner vice president and distinguished analyst Thomas Bittman, talking about how Oracle VM is poised to be the real competitor to VMware in the future. Not Hyper-V. Not Xen. I’m not one to defend Microsoft or Citrix, but… have you ever actually seen Oracle VM running on a production system?

svMotion will rename underlying folders/files, once again

VMware has released Update 2 of vSphere 5.0, and among the fixes is one that should stand out as correcting the loss of a nice feature. Performing a Storage vMotion of a virtual machine will once again rename the underlying folder and VMDK files associated with the machine.

vSphere 5 Storage vMotion is unable to rename virtual machine files on completing migration

In vCenter Server, when you rename a virtual machine in the vSphere Client, the VMDK disks are not renamed following a successful Storage vMotion task. When you perform a Storage vMotion of the virtual machine to have its folder and associated files renamed to match the new name, the virtual machine folder name changes, but the virtual machine file names do not change.

This behavior was present in the 4.x branch and was annoyingly removed in 5.0. Thankfully, VMware has dropped it back in. Note that this feature is still missing from vSphere 5.1; it can only be assumed that it will be added in a future update release.

Samba, more than SMB

Samba 4.0 comprises an LDAP directory server, Heimdal Kerberos authentication server, a secure Dynamic DNS server, and implementations of all necessary remote procedure calls for Active Directory. Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.

This is certainly interesting. I didn’t realize this was in development, but the latest version of Samba (the popular free, open source implementation of the Windows file sharing protocols) now lets you run a true open source equivalent of Microsoft Active Directory.

If someone were to bundle a Linux OVA file with this new Samba domain controller code, the vCenter Server Appliance (based on Linux), and the VMware Web Client, the reliance on Microsoft Windows servers for a functional VMware environment would all but disappear.
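For the curious, standing up such a domain controller with Samba 4’s `samba-tool` utility looks roughly like this; the realm, domain, and password values are placeholders, and the exact flags may vary by build, so treat this as a sketch rather than a recipe:

```shell
# Provision a new Active Directory-compatible domain controller
# (realm/domain/password values below are placeholders)
samba-tool domain provision \
    --realm=CORP.EXAMPLE.COM \
    --domain=CORP \
    --server-role=dc \
    --dns-backend=SAMBA_INTERNAL \
    --adminpass='Str0ngPassw0rd!'
```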

Steve Jobs

One hundred years from now, people will talk about Steve Jobs the same way we do of Alexander Graham Bell, Thomas Edison, Henry Ford, and the Wright brothers. Perhaps, as my friend Chris helped point out, he was a mix of Edison and John Lennon. Maybe he was a bit like Walt Disney or Jim Henson, a man who was personally tied to the brand he created.

Regardless, he was an inventor, a visionary, a man full of ideas. He was more than just another businessman or CEO of Apple; he personally held patents for many of the technologies used in its products. He was the perfect mix of creative genius and salesman. In the tech world, Steve Jobs was elevated to near deity-like status, but as cancer proved, he was still just a man.

Every CEO of every company on the planet should pay attention to this right now and ask themselves, “why won’t this happen when I die?” (@jayfanelli)

I tried to sit down and put together my thoughts on his passing last night, but couldn’t. I was too overcome with the emotions pouring out from people across the world on Twitter. I shared some of my own but it was interesting to watch the wake for a man happen in real time from people all across the world. People who loved and hated him all had emotions to share.

Even President Obama had something to say:

The world has lost a visionary. And there may be no greater tribute to Steve’s success than the fact that much of the world learned of his passing on a device he invented. Michelle and I send our thoughts and prayers to Steve’s wife Laurene, his family, and all those who loved him.

But I’m not sure those outside of the technology community could really feel the impact the way we all did. My wife didn’t understand last night why I was grieving for a man I’d never met, the founder of a company that now rivals ExxonMobil as the world’s largest. Without meeting him, Steve Jobs had a profound impact on my life. I credit him (and Bill Gates) for sparking my interest in technology… for making me what I am today.

The first computer I ever used was an Apple II when I was in kindergarten. Later, I learned how to do amazing things on some of the first Macintosh systems. I used to skip recess to go down to the elementary school library so that I could learn on devices that he helped create. And while my family can attest to my later holding Apple and their products in contempt through much of the mid-90s, while pounding the drum of Microsoft, I eventually came back to the “distortion field” as Steve brought real innovation back to the industry.

The Apple II, the Macintosh, Pixar (who doesn’t love Toy Story), iPod, iPhone, iPad, iTunes. Disruptions to the status quo. Disruptions that are all because of the leadership and creative mind of Steve Jobs. I don’t remember much about what computers were like before the Apple II or the Mac, but I know what movies were like before Pixar. I know what buying music was like before iTunes and the iPod. I know what phones were like before the iPhone, and I love my iPad. I wouldn’t want to go back to a world before the things Steve created existed. Even if you’re a hardened Android fan, you have to remember what smartphones were like before the iPhone and thank Apple and Steve Jobs for setting a new trend. Even if you’re a Microsoft fanatic, you have to thank him for keeping Bill on his toes for all those years, and forcing each other to continue to innovate.

In my article last week, prior to the announcement of the iPhone 4S, I said this:

I still maintain that Steve Jobs will be present at the announcement, even after his recent retirement as Apple CEO. I think he will be there to hand it off to Tim Cook in some way, or perhaps participate in some FaceTime chat to highlight a new iOS 5 feature. At the very least, his presence will be felt.

There was an empty chair, in the front row of the hall, with a cloth wrapped around it marked Reserved. That was no doubt a chair for Steve, one he wouldn’t be in because of what we all now know. I think Apple knew this was coming soon, and probably played the announcement a bit low-key so as not to overshadow what could have happened any day. That said, I have no doubt that Steve wanted to see one last keynote, one last product launch, before he passed on. His presence was felt. His presence will continue to be felt with every future Apple product.

At 56, Steve Jobs did more than most people do in 90 years. He was the original Apple genius, a master showman, and the original tech virtuoso. He will be missed.

The mythical Verizon iPhone has arrived

Somewhere deep in the heart of the AT&T headquarters, their executives are huddled around holding a vigil to mourn the loss of the exclusive US contract. Likewise, Google execs are probably throwing chairs at the wall screaming “I thought we had something special!”

No longer a mythical unicorn, the much anticipated Verizon iPhone is now a reality. Available February 3 for existing Verizon customers (props to them for that) and then February 10 for everyone else.

The new device is almost exactly like the old one except for some small differences:

  • CDMA radio instead of GSM, this also means a slightly altered external antenna design
  • Support for Verizon Mobile Hotspot, allowing 5 devices to connect to the iPhone and use Verizon’s data service

There are a few differences between Verizon and AT&T that should be pointed out:

  1. Verizon’s data network is larger, meaning more bars in more places.
  2. AT&T’s data network is faster, meaning when you get service you’re going to cruise faster.
  3. CDMA technology doesn’t allow for simultaneous voice and data usage. On a call and want to look up where to meet your friend for lunch on Google Maps? Too bad. You’ll have to wait for the call to end.

The biggest disappointment, though not an unexpected one, is that the Verizon iPhone will not support LTE, which would have allowed for faster data transfers and simultaneous voice and data. However, given that Verizon’s LTE network only started rolling out a few months ago, it isn’t surprising that Apple chose not to support it, and doing so would have required further alterations to the iPhone.

The unknown right now is what version of iOS this new CDMA iPhone will run. Will the iOS 4.2.1 guts support it? Will it require a 4.2.2 update? Will we get 4.3? Will the GSM and CDMA phones run the same iOS version? Or will it all be some sort of carrier update that doesn’t involve a new version of iOS?

Lastly, Apple COO Tim Cook left the door wide open to other networks when he said the contract with Verizon is multi-year but non-exclusive.

Let the Sprint iPhone discussion commence.

(Or T-Mobile, if anyone still cares about them.)

Updated: It seems that the new iOS version will be 4.2.5, via Engadget, which got hands-on time with one after the announcement.


Originally published at techvirtuoso.com on January 11, 2011.

Google stripping support for H.264 video out of Chrome

In a surprise post on the Chromium Blog today, Google announced that it will phase out H.264 support in the Chrome web browser in favor of the open-source WebM format. The announcement further muddies the waters of HTML5 video support.

To that end, we are changing Chrome’s HTML5 <video> support to make it consistent with the codecs already supported by the open Chromium project. Specifically, we are supporting the WebM (VP8) and Theora video codecs, and will consider adding support for other high-quality open codecs in the future. Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.

What is unclear is how Google can on one hand say that their goal is to enable open innovation, and yet still justify bundling the proprietary Adobe Flash plugin with Chrome.

The biggest supporter of H.264 in HTML5 video is Apple, which uses it in Safari, and specifically on the iPhone, iPad and other iOS devices. Because Steve Jobs doesn’t like to run Flash unless he’s had a few drinks first, and even then only with protection, there is no Flash support on any iOS device. If WebM were to take off, Apple would need to incorporate support or leave millions of iOS users unable to load most web video sites.

However, a clear winner is unlikely to emerge from all of this.

Prior to this announcement, Chrome had the unique distinction of being the only major browser to support both technologies. Firefox has never supported H.264 and will not in its next version, but Internet Explorer 9, which will be released sometime in 2011, does. Currently the only other mainstream browser that supports WebM is Opera, though Firefox 4 will enable support for it upon release. Safari provides no support for WebM, nor does any current or future version of Internet Explorer.

Factor in Ogg Theora and you have a third codec, one supported by Firefox, Chrome and Opera… just not Internet Explorer or Safari.
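In practice, sites hedge against this fragmentation by listing multiple encodings of the same clip inside a single HTML5 `<video>` element; each browser simply plays the first source it can decode. A minimal illustrative sketch (the file names are placeholders):

```html
<video controls width="640">
  <!-- H.264/AAC in MP4: Safari, iOS devices, Internet Explorer 9 -->
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <!-- WebM (VP8/Vorbis): Opera, Firefox 4, and Chrome going forward -->
  <source src="clip.webm" type='video/webm; codecs="vp8, vorbis"'>
  <!-- Ogg Theora/Vorbis: older Firefox releases -->
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  Your browser does not support HTML5 video.
</video>
```

Scripts can also probe support at runtime with the element’s canPlayType() method, which returns “probably”, “maybe”, or an empty string for a given MIME type. The catch, of course, is that publishers have to encode and store every video two or three times over.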

Confused? Yeah, me too.

The reasoning for all of this comes down to licensing, something most end users don’t care about. We’re generally just happy when technology works as advertised. But Google doesn’t want to pay anyone for anything it doesn’t have to, and backing WebM means avoiding H.264’s royalty payments and its restrictive license agreement.

Chrome used to be the browser that would play all three major HTML5 video formats. Going forward from today, it has voluntarily neutered itself.


Originally published at techvirtuoso.com on January 11, 2011.