Choosing the wrong side in a format war

MSFT finds itself in this situation now that the HD-DVD format it backed has been consigned to the dustbin of history, with Toshiba announcing that it will stop producing the players. This was the last domino to fall, after the studios announced Blu-Ray-exclusive production, Netflix switched, and finally Wal-Mart had the last word.

That leaves the question of what to do with all those Xbox 360s with HD-DVD drives, which are going to be about as useful as a brick in a few years. In fact the decisive and abrupt Blu-Ray victory has created a large collection of expensive, useless gadgetry overnight. Consider the dual-mode Samsung players that could play both HD-DVD and Blu-Ray, an uneasy truce that allowed customers to hedge their bets on the war. With a clear winner emerging, all of that engineering effort goes out the window. On the bright side, Samsung will fare better than the pure HD-DVD camp precisely because the company hedged its bets.

There is going to be frustration among the early adopters who guessed wrong– but that’s the cost of doing business on the leading edge. Just ask the initial round of iPhone buyers after the price drop. In the long term, consumers are probably better off because standardization will increase sales of players by removing the cloud of uncertainty. More players will drive down costs and increase the availability of content. It may also cement Sony as the new hegemon, unseating the reigning oligarchy of the DVD Forum, depending on how the licensing of patents and royalties for use of Blu-Ray technology is structured.


The conscience of a mutual-fund manager

“Upon reflection it doesn’t take long to realize that we were living for more than two decades in the Age of Decadence. This decadence was so prevalent that everyone from the government down to the regular citizen was an accomplice. During this period we saw America continually make the wrong decisions, lose its industrial might, damage its national balance sheet, and erode the reserve status of its currency.”

This could have passed for a stump speech by an aspiring politician sharpening his or her rhetorical skills for November. Instead it comes from the opening paragraph of the annual report for a mutual fund. The private Swiss bank Julius Baer is more likely to make headlines these days because of its role in shutting down the controversial Wikileaks website than for any flourish of prose. Yet a quick peek at the report covering the period ending 10/31/07 reveals a different side of the culture.

Mutual fund reports and statements are invariably written in a dry, legalistic language designed with only one purpose in mind: minimizing liability to the company from a litigation-happy client who is looking for a scapegoat to blame after losing their shirt trading straddle options on the Zimbabwe stock exchange. Disclaimers about past performance not being an indication of future results are everywhere, as are doom-and-gloom, danger-Will-Robinson caveats about the risks of non-diversification, short-term fluctuations, exposure to emerging markets and the health hazards of consuming trans-fats. At least one section of the Julius Baer report is a far cry from this content-free boilerplate:

“We also created structural imbalances and excesses in our economy that led to one bubble then another—the least painful way to contain one bubble is to create another; hence postponing the day of reckoning. In this period, we made useless financiers fly-by-night billionaires, destroyed most American’s living standards by depressing their wages and sinking the dollar against most currencies known to man—with few exceptions such as the Zimbabwe dollar. ”

Such moral outrage and indignation against incompetent fiscal policy and income inequity can’t be a very common sentiment in the financial sector. Penned by Rudolph-Riad Younes, long-time manager of the successful International Equity Fund, ticker symbol BJBIX, now closed to new investors, these words carry a strange sense of gravity more appropriate to an op-ed column than an announcement of financial results. (Full disclosure: this blogger owns shares in the fund.) It only gets better as Younes takes aim at other sacred cows:

“The Fed has shirked many of its responsibilities: by allowing asset bubbles to form unfettered; by maintaining ultra-lax monetary policies; by neglecting its regulatory oversight authority; and, by succumbing easily to the faintest political pressure. […]
The rampant decadence at the top trickled, as expected, all the way to the bottom resulting in two major bubbles while laying the foundation for future ones.” 

What follows are brief retrospectives on the tech bubble and the more recent housing bubble. One of the most interesting arguments is in the section labelled “The Cardinal Sin: Believing in Santa Claus.” Here he argues that a good deal of the problem originated with the Federal government revising its inflation measure to a completely different benchmark, one which made the figures come out significantly lower– very convenient, thank you– the equivalent of tampering with the speedometer as a way to speed up the car. A dangerous implication is that “true” inflation rates driving economic forces stand at 4-6% above the stated numbers.
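
To put some rough numbers on the speedometer analogy, the sketch below compounds a hypothetical 3% stated rate against a 5% understatement. Both figures are assumptions picked from the middle of the 4-6% range above, not numbers taken from the report.

```python
# A back-of-the-envelope sketch, not a figure from the report: it shows how a
# persistent gap between stated and "true" inflation compounds over time.
# The 3% stated rate and 5% gap are illustrative assumptions.

def purchasing_power(rate, years):
    """Value of one of today's dollars after `years` of inflation at `rate`."""
    return 1.0 / (1.0 + rate) ** years

stated_rate = 0.03              # official CPI-style figure (assumed)
true_rate = stated_rate + 0.05  # mid-point of the claimed 4-6% understatement

for years in (5, 10, 20):
    official = purchasing_power(stated_rate, years)
    actual = purchasing_power(true_rate, years)
    print(f"{years:2d} years: official says $1.00 -> ${official:.2f}, "
          f"'true' inflation says ${actual:.2f}")

# After 20 years the official measure claims a dollar keeps about 55 cents of
# its value, while the higher rate leaves only about 21 cents: the speedometer
# reads far below the actual speed.
```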

Finally, he throws in a simple metaphor to emphasize the folly, in case it was lost on the reader:

“In short, the government (the parents) invented Santa Claus in order to cheer up pensioners and laborers (the children) who were worried about their parents’ ability to pay for their entitlements (gifts). The whole family was happy with Santa Claus. The children were happy with the yearly gifts and parents were satisfied that their children were buying the fairy tale and able to rein in spending. But as in real life, it is a blessing only when children believe in Santa Claus and a tragedy when parents do!” 

No happy endings here though. The report concludes with predictions of more decadence and bubbles. Great reading overall.


RIP Cafe La Fortuna– one last cup of coffee

This is a good time to take a break from writing about work and observe the end of a New York City institution: Cafe La Fortuna. Yes, the oldest cafe on the Upper West Side and a favorite haunt of the neighborhood’s residents– John Lennon and Yoko Ono included– is closing today.

According to the New York Times article, the culprit is the same one that earned New York the #4 spot on the list of most miserable cities in the US: the ludicrous real estate situation. A change in the ownership of the building meant a transition from an almost rent-stabilized arrangement to completely insane market prices. The milestone is also covered by CNN, Gothamist and amNY.

To get a sense of the history here: the table Lennon used to sit at, featured on the cover of the single Nobody Told Me, was retired by the owner “Uncle Vinny” after Lennon’s death in 1980, but remained stacked with memorabilia in the front window. It was recently gifted to Yoko Ono. So surprised was she that they had kept it for 25+ years that she wrote a letter of thanks, which hangs framed on the wall.

One long-time customer quoted by the NYT put it very concisely:

“I’ve told many people,” he said. “When this place closes, it’s time to leave New York.” 

This blogger could not agree more.


Rumors of Windows server platform “failure” slightly exaggerated

This article, which made it to Slashdot recently, and the linked piece from CNN/Money could use an application or two of Occam’s Razor. It claims that the MSFT bid for Yahoo was prompted by an internal recognition that the Windows server platform has failed. The company, having seen the light according to this commentator, is going after systems built on the Linux/Apache platform instead.

“Microsoft runs on the Windows platform and it has proved inadequate to run big Internet companies. There is not one big Internet company – and I mean “BIG” like Google Inc. (GOOG), Yahoo, Amazon.com Inc. (AMZN), eBay Inc. (EBAY) and such – that runs on Windows besides Microsoft. Its software platform has been a disaster supporting its search engine, email and other free services.”

It only takes a second to recognize this as uninformed drivel: Hotmail/Windows Live Mail is the world’s largest email service, period. Passport/Windows Live ID is the largest online authentication system. When it comes to instant messaging, MSN/Live Messenger is not too far behind Yahoo and AIM– never mind the branding confusion between MSN versus Live. All of them run on W2K3, IIS, SQL Server and the accompanying much-criticized baggage. It’s not a recent phenomenon either: in the late 90s MSR built TerraServer– long before viewing satellite imagery was an everyday activity– to showcase the scalability of a massive data warehouse running on Windows.

Yet the quote above does raise an interesting question about why more large-scale web services are not built on top of Windows. The obvious reason is easy to shoot down: the difference between shelling out $$$ for W2K3/W2K8 and getting Linux for free. It’s true that a single server license can run into the hundreds of dollars depending on the particular SKU, and thousands of dollars for the more esoteric 64-bit variants. This is why hobbyist sites, non-profits and small businesses (as well as the virtual hosting companies catering to them) are more likely to prefer open-source software, given the extreme price sensitivity of that market segment. Assuming that the distribution of Internet-facing websites has a very large tail fitting that category, this would explain why Netcraft surveys continue to show Apache leading IIS 50% to 35%, in spite of huge jumps in April ’06 and September ’07 that narrowed the gap from the previous 3x difference.

But in the enterprise context, the gating factor becomes the recurring cost of running a data-center: IT staff, leased space and power all add up. The upfront purchase price of hardware and software is dwarfed by operational costs– and that’s one reason why the Windows server platform continues to make inroads into this segment, joining Linux in slowly chipping away at the market share of the more expensive UN*X variants that once dominated the server business. Nowadays it is not rare to see a company’s entire IT infrastructure running on Windows and developed using .NET programming models.
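
A back-of-the-envelope sketch, with entirely made-up per-server figures, shows why the license fee tends to wash out once operational costs are counted:

```python
# A rough total-cost-of-ownership sketch with illustrative, made-up numbers
# (none of these figures come from the post); it only shows why a one-time
# license fee tends to wash out against recurring operational costs.

license_cost = 1000           # one-time OS license per server, assumed
hardware_cost = 3000          # one-time server purchase, assumed
power_space_per_year = 1200   # power + rack space per server per year, assumed
admin_per_year = 4000         # pro-rated share of IT staff per server, assumed
years = 4                     # typical depreciation window

upfront = license_cost + hardware_cost
operational = years * (power_space_per_year + admin_per_year)
total = upfront + operational

print(f"Upfront:     ${upfront:,} ({upfront / total:.0%} of total)")
print(f"Operational: ${operational:,} ({operational / total:.0%} of total)")
# With these assumptions the license is roughly 4% of the four-year cost, so
# "Linux is free" is a weak argument in the enterprise segment, while it
# dominates the decision for a hobbyist paying only the upfront cost.
```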

What about large-scale Internet services? This is the mystery: the existence of very large-scale (in at least two of the cases cited above, the largest, period) services running on Win32 and Win64 proves the platform can be very competitive. In that case the nagging question remains: why are there so few examples outside Microsoft?


Email storage and a lump of coal

TreeHugger is not the first to notice that computing technology can have an environmental impact and that different “systems” can be greener than others. In an invited talk at Microsoft Research in 2004, Andrew Shapiro from the Berkman Center and author of The Control Revolution raised the question of whether Linux could be deemed more environmentally friendly because it ran on lower-end hardware that would not meet the base requirements for modern Windows SKUs. (He was polite enough not to answer this question given the audience.) Similarly it is widely acknowledged that data centers today are gated by cooling and power consumption– air conditioning being one of the prime resource hogs– and that the availability of power generation is a significant factor in selecting “hot-spot” locations for building them.

The TreeHugger post frets over the cost of email storage and wonders whether deleting email will curb carbon emissions. Good intentions for sure, but the calculation may have been slightly off base for several reasons. First the bad news: storage in large-scale services like the ones cited in the article is replicated. There can’t be just one copy of a message sitting around– try explaining to a user that you lost all of their vacation pictures because drive #3385 failed, the so-called “we blame Seagate” approach. That implies the figures underestimate the true impact. But that would only be true in a simplistic model where power consumption scales with the amount of data stored. In practice, transaction capacity is often the determining factor in data center design: if one million people are checking email at the same time, enough servers have to be up and running to process those requests with tolerable latency. That’s true even if everyone keeps an empty inbox.
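
A toy capacity model makes both effects concrete. Every number in it is an invented assumption rather than a figure from any real service:

```python
# A toy capacity model with invented numbers, just to illustrate the two
# effects above: replication multiplies the storage footprint, while the
# number of servers that must be powered on is set by peak request rate,
# not by how full the mailboxes are.

replicas = 3                   # copies of each message, assumed
mailbox_gb = 2                 # average mailbox size, assumed
users = 10_000_000
peak_concurrent = 1_000_000    # users checking mail at the same time, assumed
requests_per_server = 2_000    # concurrent sessions one front-end handles, assumed

raw_storage_tb = users * mailbox_gb / 1000
replicated_storage_tb = raw_storage_tb * replicas
front_ends = peak_concurrent / requests_per_server

print(f"Logical storage:   {raw_storage_tb:,.0f} TB")
print(f"Physical storage:  {replicated_storage_tb:,.0f} TB (x{replicas} replication)")
print(f"Front-end servers: {front_ends:,.0f} (independent of mailbox size)")
# Halving mailbox_gb halves the physical storage but leaves the front-end
# count, and much of the power bill, untouched.
```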

Similarly, different storage architectures can lead to very different resource consumption patterns. If drives are directly attached to the server, then more storage means more servers, even if those servers sit idle CPU-wise. If the service uses a storage area network (SAN), then only the drives are being powered, without all the extra baggage that would come with a full-fledged server. This is similar to the difference between using a networked drive at home versus another general-purpose PC for handling backups. Finally there is the storage corollary to Moore’s law: disk sizes increase, prices drop and so does power consumption per GB. (Unfortunately there is also a storage corollary to Parkinson’s law, which states that data expands so as to fill the drive available.) It’s true that less storage will achieve some reduction, but the TreeHugger article probably overestimates it by several orders of magnitude. And if a hosted cloud service were compared to storing the same amount of data at home, there would be no contest: those massive data-centers achieve economies of scale and a corresponding eco-efficiency not available to the average consumer, short of living off the grid with solar panels.
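
For the same reason, here is a rough watts-per-terabyte comparison of the two architectures; the wattages and drive counts are guesses for the sake of illustration, not measurements of any particular hardware:

```python
# Another toy comparison with invented figures: watts consumed per terabyte
# when capacity is added as full servers with direct-attached drives versus
# as bare drive shelves behind a SAN. The wattages are rough assumptions,
# not measurements.

drive_tb = 0.5                # capacity per drive (2008-era), assumed
drive_watts = 10              # power per spinning drive, assumed
server_overhead_watts = 250   # CPUs, fans, PSU losses per server, assumed
drives_per_server = 6         # direct-attached configuration, assumed
drives_per_shelf = 14         # SAN drive shelf, assumed
shelf_overhead_watts = 60     # shelf controller and fans, assumed

def watts_per_tb(drives, overhead):
    return (drives * drive_watts + overhead) / (drives * drive_tb)

das = watts_per_tb(drives_per_server, server_overhead_watts)
san = watts_per_tb(drives_per_shelf, shelf_overhead_watts)
print(f"Direct-attached: {das:.0f} W/TB")
print(f"SAN shelf:       {san:.0f} W/TB")
# Under these assumptions the direct-attached configuration burns several
# times more power per terabyte, because each increment of storage drags a
# whole server's overhead along with it.
```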


Old-school voting machines

Reminiscences from Robert Holt Jr., who has been working as a voting-machine technician in New York City for over 20 years, as quoted by the New York Times:

“To tell you the truth, I like these machines. With all the problems they’re having with the computerized machines, these are solid. You can’t tamper with them.” 

“The ones who lost canvass the machines and see how many votes they lost by, machine by machine. Sometimes they come in angry. They’re upset they lost, but there’s nothing they can do. A loss is a loss. These machines don’t lie. What you see is what you get.”

Here is to machines that don’t lie and more importantly are transparent in allowing that alleged honesty to be verified by everyone.


NDSS, final day: “minding the gap”

(Trying to write about the conference before the recollections fade.)

Dan Kaminsky was scheduled to give the invited talk on Wednesday morning, tentatively titled “On breaking stuff”, but he was held up by consulting work at IOActive. Fortunately for the conference program committee, Paul van Oorschot volunteered to give a talk on short notice, and the result was the highly engaging “Security and usability: mind the gap” presentation.

He started with some anecdotal evidence on the sad state of affairs in what should have been the poster child for usable security: online banking. One of the largest banks in Canada promised to refund 100% of losses resulting from unauthorized transactions– provided the user lived up to their side of the agreement. The fine print in the customer agreement (granted, nobody pays attention to that) makes for entertaining reading:

  • Select a unique password that is not easy to guess– and the user will judge the quality of their password how? Windows Live ID has a password quality meter, but this is far from being a standard feature. (A minimal sketch of such a meter appears after this list.)
  • Sign out, log off, disconnect and close the browser when done. (What is the difference between the first two? Does “disconnect” mean yank the network cable?)
  • Implement [sic] firewalls, a browser with 128-bit encryption and virus-scanning. As van Oorschot pointed out, the bank probably means “deploy” rather than “implement”– otherwise they would drastically narrow the potential customer base to developers with copious spare time to write commodity software from scratch.
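
For the curious, the promised sketch of a password quality meter follows. The character-class heuristic and the thresholds are arbitrary choices for illustration; this is not how Windows Live ID or any bank actually scores passwords:

```python
# A minimal sketch of a password quality meter. The character-class heuristic
# and thresholds are arbitrary assumptions; real meters also check dictionaries
# of common passwords, which this toy version deliberately skips.
import math
import string

def estimate_bits(password: str) -> float:
    """Crude entropy estimate: alphabet size implied by character classes."""
    alphabet = 0
    if any(c in string.ascii_lowercase for c in password):
        alphabet += 26
    if any(c in string.ascii_uppercase for c in password):
        alphabet += 26
    if any(c in string.digits for c in password):
        alphabet += 10
    if any(c in string.punctuation for c in password):
        alphabet += len(string.punctuation)
    return len(password) * math.log2(alphabet) if alphabet else 0.0

def rate(password: str) -> str:
    bits = estimate_bits(password)
    if bits < 28:
        return "weak"
    if bits < 50:
        return "fair"
    return "strong"

for pw in ("password", "Tr0ub4dor", "correct horse battery staple"):
    print(f"{pw!r}: ~{estimate_bits(pw):.0f} bits, {rate(pw)}")
```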

It only gets worse from there. The general pattern is promises of security and reassurance that damages will be covered, in exchange for vague expectations of “secure behavior” from users who are often not in a position to accurately judge the risks associated with their use of technology. Case in point: one study on malware found that 95% of users had heard of the word “spyware” and 70% banked online– yet some assumed that spyware was a good thing, 45% did not look at URLs and 35% could not explain what HTTPS meant. The status quo in online banking is not an isolated case either, as shown by other examples drawn from two recent publications van Oorschot coauthored:

  • An evaluation of Tor/Vidalia/Privoxy for anonymous browsing, which concluded that Tor is not ready for prime-time use by a novice even with the supposedly user-friendly Vidalia UI. (Given its remarkably low bandwidth and high latency reminiscent of the early “world-wide-wait” days of dial-up, you have to wonder if a usability study was necessary to reach that conclusion.)
  • A usability study of two password managers with 26 non-technical users, which found several problems, including situations where users falsely concluded a security feature was functioning when it was not– the very dangerous “false success” scenario. [Full disclosure: this blogger had reviewed and broken an earlier version of one of them, PwdHash; a simplified sketch of the general idea follows.]
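
For readers unfamiliar with the approach, this is the general idea behind domain-derived passwords. The HMAC-SHA256 construction and the encoding below are simplifications for illustration, not the actual PwdHash algorithm:

```python
# A simplified sketch of the domain-derived password idea behind tools like
# PwdHash: the browser replaces the password the user types with a hash of
# that password and the site's domain, so a phishing site on another domain
# receives a useless value. The HMAC-SHA256 construction and base64 encoding
# here are illustrative assumptions, not the actual PwdHash algorithm.
import base64
import hashlib
import hmac

def site_password(master_password: str, domain: str, length: int = 12) -> str:
    digest = hmac.new(master_password.encode(), domain.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()[:length]

print(site_password("hunter2", "bank.example.com"))
print(site_password("hunter2", "phish.example.net"))  # different output
```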

If poor usability is as much a security vulnerability as a flaw in a cryptographic protocol, what is the prescription? This is where the information security community is now wrestling with its collective conscience. van Oorschot made the frank observation that usability and HCI issues are routinely looked down upon by CS culture, left out of the traditional curriculum because they are deemed easy, even trivial, and better left to “people who can’t write code” to sort out. He raised the possibility that we have had it wrong all along: cryptography is the easy bit, secure system implementation is far more challenging, and the hardest task of all is building usable secure systems.


NDSS, day II: Virtualization and security panel

The highly anticipated panel ended up taking a different turn. My colleague Tavis from Google could not attend, leaving DJ Capelis from UCSD as the only remaining skeptical voice to point out the risks of virtualization. (Recap: last year Tavis found several problems in the qemu, Virtual PC/Server and VMware virtualization platforms.) Intel was represented, and so was AMD with John Wiederhirn; Tal Garfinkel attended for VMware, completing the viewpoints at the table: hardware, virtualization platform and security research.

Most of the discussion implicitly focused on the server consolidation scenario, without spelling out the other uses of virtualization. Briefly, the consolidation scenario is about replacing multiple physical machines with a single powerful box that runs a VM with the equivalent OS/software configuration for each PC displaced. It sounds like re-arranging deck chairs, but in fact this is a major cost-saving opportunity for enterprise IT departments. A single powerful, expensive server hosting N virtual machines is far easier to maintain than N low-end servers each running a different configuration. And in the long run the cost of maintenance dominates the original purchase cost of the hardware. Full machine virtualization creates new opportunities because it allows very clean consolidation between applications that could not otherwise live on the same bare metal: for example a legacy W2K3 line-of-business app alongside a new W2K8 terminal server, or even Linux and Windows coexisting side-by-side.
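
A quick sketch with invented numbers illustrates why the economics work out once the recurring cost of caring for each physical box is counted:

```python
# A back-of-the-envelope sketch with invented numbers, only to illustrate why
# consolidation is attractive once maintenance dominates: one big box hosting
# N virtual machines versus N low-end physical servers.

n = 10                         # workloads to host
small_server = 2_000           # purchase price of a low-end box, assumed
big_server = 12_000            # purchase price of a consolidation host, assumed
admin_per_box_year = 3_000     # care and feeding per physical box per year, assumed
admin_per_vm_year = 500        # marginal cost of managing one more VM, assumed
years = 4

separate = n * (small_server + years * admin_per_box_year)
consolidated = big_server + years * (admin_per_box_year + n * admin_per_vm_year)

print(f"N separate servers: ${separate:,}")
print(f"One host, {n} VMs:  ${consolidated:,}")
# With these assumptions the consolidated setup costs roughly a third as much
# over four years, even though the single host is far more expensive up front.
```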

This is the most commercially viable market for virtualization, with VMware leading the charge and MSFT giving chase with Virtual Server R2 and the upcoming hypervisor in W2K8. But focusing on it alone skewed the discussion, setting the stage for a predictable debate around trade-offs. Separate hardware is an isolation boundary: it keeps different applications from interfering with each other, accidentally or by malicious logic. Virtualization is another one, as are operating system processes, BSD jails and so on. Each one has an assurance level from a security perspective, or equivalently an attack surface. Server consolidation with a VMM involves changing the isolation boundary and creating a new attack surface. There may be new channels for one VM to attack another when both run on the same bare metal– channels that, with separate boxes, would have been confined to the network or shared storage. Quantifying that incremental risk, and debating whether it is a reasonable trade-off, fueled much of the discussion.

This is a comparison virtualization cannot win on the single dimension of risk. Considering the extent to which VMs are used for malware research and quarantining untrusted code, it’s surprising that other applications were not considered. The flip side of consolidation is sandboxing: moving applications that currently share a trust boundary into different VMs is a corresponding improvement– although the extent to which it improves security is again debatable and depends on the quality of the implementation.

As a side note, the moderator raised a point about reduced customer choice: with individual machines one has a choice of different vendors to buy a network switch from. With the functionality of the switch subsumed into the software stack, that choice goes away.


Dispatches from NDSS: Day I, breaking online games

Gary McGraw has given the talk “Breaking Online Games” at other conferences before, so this may have been repeat material for some who attended BlackHat or CCS in Washington earlier. (One difference is that apparently few security researchers in the NDSS audience play World of Warcraft, neutralizing some of the gamer jokes.) At first the concept of cheating at online games seems out of place at a conference focused on fundamental security problems with a pragmatic bent: phishing, botnets, spyware, vulnerability research and the like. But as McGraw pointed out, two key observations make this topic very relevant:

1. MMORPGs foreshadow the future of massively distributed systems. World of Warcraft recently cracked 10M users (the slides had 8M, demonstrating how rapidly presentation material becomes outdated in this field) with up to half a million online simultaneously.

2. There are real dollars at stake. Games like Linden’s Second Life– much smaller than WoW but far more visible in the media– have spawned a virtual economy that maps to transactions in the bricks-and-mortar economy, complete with lawsuits. Even the devaluation of the dollar against foreign currencies such as the euro has a parallel in the going rate for gold coins. Cheating at online games then is about ill-gotten gains, a familiar theme for cybercrime.

The presentation itself was a broad overview of the security challenges in online games and the organized “exploit” opportunities they have given rise to, with references to the accompanying book. (There were also interesting digressions into egregious EULAs; it turns out World of Warcraft includes one to cover an ineffective anti-cheating solution that functions like spyware.) One implied conclusion is that designers of online games don’t in general grok the concept of security: traditionally it meant protecting the game against cracking and pirated distribution. The problem of contending with untrusted clients “outside the trust boundary”, as McGraw puts it, has not made it into the design philosophy.
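
To make the untrusted-client point concrete, here is a minimal sketch of server-side validation of a movement update. The coordinates and speed limit are invented and have nothing to do with any particular game’s protocol:

```python
# A minimal sketch of the "never trust the client" principle the talk argues
# game designers often miss: the server re-validates a movement update instead
# of accepting the client's claimed position. The game state and speed limit
# are invented for illustration; this is not any specific game's protocol.
import math
from dataclasses import dataclass

MAX_SPEED = 7.0  # world units per second, assumed

@dataclass
class Player:
    x: float
    y: float

def apply_move(player: Player, new_x: float, new_y: float, dt: float) -> bool:
    """Accept the client's reported position only if it is physically plausible."""
    distance = math.hypot(new_x - player.x, new_y - player.y)
    if dt <= 0 or distance > MAX_SPEED * dt:
        return False  # reject: teleport or speed hack, keep the server's state
    player.x, player.y = new_x, new_y
    return True

p = Player(0.0, 0.0)
print(apply_move(p, 3.0, 4.0, 1.0))      # True: 5 units in 1s is plausible
print(apply_move(p, 500.0, 500.0, 1.0))  # False: claimed move is impossible
```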


Time-Warner cable and the value of reliability

Time-Warner cable experienced an outage in Internet access two weeks ago in New York that lasted almost a full day. The service is a joint offering with Earthlink, so it is not clear where the blame goes. Such large-scale service failures can happen: a number of undersea cables were cut in the Middle East, affecting net access in Egypt among other countries. The fact that this can happen in Manhattan of all places is another story. But even more disconcerting is the way Time-Warner believes customers should be compensated: by offering a refund for a single day, which amounts to roughly 3% off the monthly bill. TW/Earthlink is trying to price reliability here, and they have significantly undervalued it.

No service can guarantee 24/7 uptime. But a service advertised as available 99.999% of the time is not simply worth another 2.999% over one that only works 97% of the time. It is far more valuable, because at that limit diminishing returns have kicked in: adding one more nine to the availability number requires a lot of investment. As the service-level guarantee increases, the system designers must contend with increasingly esoteric and improbable events. A very simplified example: a RAID array can ensure that a computer will survive a single drive failure– an event that happens with disturbingly high frequency for machines running under load all the time– by using multiple drives for redundancy. So if disks fail 1% of the time and this is the most likely problem, 99% uptime is achieved by investing in improved storage solutions. But suppose there is a smaller 0.1% chance that the entire data-center goes up in smoke or the power fails for longer than the on-site generators can compensate. This is a lower-probability event, but being prepared for it is more difficult. Adding more drives does not help because their failures are correlated: the same fire will take out all of them. Dealing with the less likely but more catastrophic event calls for building a brand new data center somewhere else and adding software logic to handle fail-over in case of an outage at the primary site, a much more expensive proposition.
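
The arithmetic behind both points (how little downtime each extra nine allows, and why correlated failures break the naive redundancy math) fits in a few lines. The 1% and 0.1% failure rates below are the illustrative figures from the paragraph above:

```python
# A quick worked example of why the extra nines cost so much and why
# correlated failures matter. The 1% drive failure rate and 0.1% site-disaster
# rate are the illustrative figures from the paragraph above.

HOURS_PER_MONTH = 730

def downtime_per_month(availability: float) -> float:
    return (1 - availability) * HOURS_PER_MONTH

for a in (0.97, 0.99, 0.999, 0.99999):
    print(f"{a:.5f} available -> {downtime_per_month(a):7.2f} hours down per month")
# 97% allows about 22 hours of outage a month; five nines allows about 26 seconds.

p_drive = 0.01   # chance a single drive fails in the period
p_site = 0.001   # chance the whole site burns down or loses power

# Two mirrored drives in the same box: independent drive failures multiply...
p_both_drives = p_drive * p_drive
# ...but the site-level event takes out both drives at once, so it dominates
# (roughly; the tiny overlap between the two events is ignored here).
p_data_loss = p_both_drives + p_site
print(f"Both drives fail independently:        {p_both_drives:.4%}")
print(f"Including the correlated site failure: {p_data_loss:.4%}")
```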

Time-Warner assumed that if customers are paying $30 for an almost-always-reliable service, they should have no problem paying a few percent less for one that experiences a massive outage every month. In fact, Internet access advertised up front as working only 97% of the time would be worth much less, and would provide stronger incentives for customers to switch to an alternative such as fiber to the home.

Update: TW/Earthlink experienced another outage on Friday. This time they were apparently prepared: customers calling the support number were greeted with an automated recording announcing that New York was experiencing service problems. Meanwhile the otherwise reliable Verizon wireless access card crawled to a halt when this blogger pressed it into service as a backup, probably because other users had the same idea and Verizon did not expect to become the alternative broadband provider for a chunk of Manhattan.
