About those strange P3P compact policies (2/2)

With the background on P3P compact policies covered in the first part of this series, time to answer the vexing question: why do nonsensical P3P policies appear to meet the Internet Explorer privacy settings?

This is partially a consequence of the way IE privacy settings are specified. As described in MSDN, compact policies are evaluated using a rules-based system, triggered by the presence or absence of specific policy tokens. For example, the token CUS stands for “customization” and is part of the P3P vocabulary for data-collection purposes. Similarly, FIN is a token indicating the category of data collected, in this case financial information. The IE privacy engine is a series of rules, where each condition is the presence/absence of some combination of tokens and the action defines what to do with that cookie. For example, it is possible to state that if financial data (FIN) is being shared with third parties (OTR) and the user has no recourse (no LEG token present), then the cookie is rejected.
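That rules-based evaluation can be sketched in a few lines of Python. The token names (CUS, FIN, OTR, LEG) are from the P3P vocabulary as described above, but the rule format and function names are purely illustrative, not the actual IE implementation:

```python
# Hypothetical sketch of a rules-based compact-policy evaluator in the
# spirit of the IE privacy engine. Each rule is a condition over the
# presence/absence of tokens, plus an action to take on match.

def evaluate(tokens, rules, default="accept"):
    """Return the action of the first rule whose condition matches."""
    present = set(tokens)
    for rule in rules:
        required = rule.get("present", set())
        forbidden = rule.get("absent", set())
        if required <= present and not (forbidden & present):
            return rule["action"]
    return default

# Reject cookies when financial data (FIN) is shared with third
# parties (OTR) and the user has no legal recourse (no LEG token).
rules = [
    {"present": {"FIN", "OTR"}, "absent": {"LEG"}, "action": "reject"},
]

print(evaluate(["FIN", "OTR", "CUS"], rules))  # matches the rule: reject
print(evaluate(["FIN", "OTR", "LEG"], rules))  # LEG present: default accept
```

Note the default action of accept, mirroring the blacklist orientation of the medium setting described below.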

In principle this mechanism is expressive enough to implement either a blacklist or a whitelist approach. In the first case, one accepts all policies except those containing certain combinations of tokens, which are subject to additional restrictions. In the second case, the browser is stricter and by default rejects/downgrades cookies except when the policy meets particular criteria. Looking at the medium privacy setting, which is the default for the Internet zone, IE takes the former approach– the default action is accept.

The catch is that if Internet Explorer runs into unrecognized tokens such as “HONK” it will simply ignore them. The original motivation for this is forward compatibility: IE6 was finalized before the P3P standard itself was completed, creating the possibility that the vocabulary could be expanded. In fact even if the P3P standard had been finalized as a W3C recommendation, that would be version 1.0– future revisions could introduce new tokens, with the result that users running earlier versions of IE would be faced with unrecognized tokens. That mindset is hard to imagine today, when software is updated periodically and often automatically. In 2001 the picture was different, with no monthly patch-Tuesday or near-instant Chrome updates.

There is also a correctness problem in ignoring unknown tokens, in conjunction with the blacklisting approach used for settings. Any new token introduced in the spec could have signalled some pernicious data practice worse than those that existing rules were trying to block. Ignoring the new token in that case results in less privacy and more cookies accepted than intended. This highlights a cultural preference common at MSFT at the time: failing open, favoring compatibility at all costs over privacy/security. (Trustworthy Computing has been successful in shifting that attitude.)

In reality of course P3P never went anywhere, with the W3C group eventually disbanding in frustration, citing “… insufficient support from current Browser implementers for the implementation of P3P 1.1.” That was 2006. With the vocabulary stabilized, a stricter parser could have been implemented. Even allowing for the possibility of new tokens, sanity checks could have been added: since compact policies are supposed to be derived from a full XML policy, the well-formedness requirement for the XML rules out certain situations, such as an empty policy without any valid, recognized tokens.
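Such a sanity check could have been as simple as requiring at least one recognized token in the header. A minimal sketch, assuming a small subset of the vocabulary (KNOWN_TOKENS below is illustrative, far from the complete P3P token list):

```python
# Hedged sketch of the stricter sanity check: treat a compact policy
# as suspect unless it contains at least one recognized P3P token.

KNOWN_TOKENS = {
    "CUS", "FIN", "OTR", "LEG",   # tokens mentioned in this post
    "NOI", "DSP", "COR", "ADM",   # a few other P3P 1.0 tokens
}

def is_plausible_policy(header_value):
    """True if the P3P header contains at least one recognized token."""
    tokens = header_value.replace('"', "").split()
    # Strip the leading CP= prefix used in the P3P response header.
    if tokens and tokens[0].upper().startswith("CP="):
        tokens[0] = tokens[0][3:]
    recognized = [t for t in tokens if t.upper() in KNOWN_TOKENS]
    return len(recognized) > 0

print(is_plausible_policy('CP="NOI DSP COR"'))  # plausible policy
print(is_plausible_policy('CP="HONK"'))         # no recognized tokens
```

A check along these lines would have flagged the nonsense policies discussed in this series, while still tolerating a future vocabulary expansion that added new tokens alongside existing ones.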

With the perfect hindsight of 10+ years, that is one feature one of the designers regrets not implementing.

CP

About those strange P3P compact policies (1/2)

There are times when past mistakes come back to haunt the designers and developers of a system in unexpected ways. The implementation of the privacy standard P3P in Internet Explorer is proving to be that example for this blogger.

First some background: P3P stands for the Platform for Privacy Preferences Project. P3P was forged over a decade ago, amidst the great privacy scares of 2000, in what can be seen as a more innocent/idyllic time before September 11, when the greatest threat to online users was evil marketers trying to track them with third-party cookies. Under the charter of the World Wide Web Consortium’s Technology and Society group, P3P was an ambitious effort to introduce greater transparency and user control over the collection of information online. In many ways it was also ahead of its time. In the vein of similar initiatives that attempt to prescribe technological fixes to what are fundamentally economic incentive problems, only a tiny fraction of the ideas found their way into widespread implementation. (It would be another 10 years before W3C would dabble in policy again with Do-Not-Track, instantly getting mired in as much controversy as P3P in its heyday. To think– DNT introduces just one modest HTTP header representing a yes/no decision. P3P is enormously complex by comparison.)

In the original vision, websites express their privacy policies– often couched in legalese and not written with the purpose of informing users– in machine readable XML format. The web browser could then retrieve and compare these policies against the user’s preferences as they navigated to different websites. P3P even proposed a machine-readable standard for expressing user preferences called APPEL, also in XML naturally, which went nowhere. It’s difficult to argue against greater transparency– although several advertising networks managed to do precisely that, out of concern that shining a light into data collection practices could paint an unflattering picture.

Earlier iterations of the protocol also had serious disconnects with the way web browsers operate and their focus on performance. Blocking, synchronous evaluation of privacy policies for every resource associated with a web page, as originally envisioned in the draft spec, would have been an enormous speed penalty. With some reality checks to focus on improved efficiency, attention eventually focused on the perceived privacy boogeyman du jour: HTTP cookies. In order to avoid out-of-band retrieval of privacy statements, compact policies were introduced as a summary of the full XML policy that could be expressed succinctly in HTTP response headers accompanying cookies. Compact policies are derived from the full XML version via a deterministic transformation. This process is lossy and produces a worst-case picture: while the full XML format allows specifying that a particular type of data (say, email address) is collected for a specific purpose, retention and third-party usage, the compact policy simply lists all categories, all purposes, retention times etc. as one-dimensional lists, collapsing such subtle distinctions. Still, compact policies could be specified in HTTP headers or even in the body of the HTML document, allowing fast decisions about cookies.
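The lossy flattening can be illustrated with a short sketch. The statement structure and token assignments below are made up for illustration (real policies use the P3P XML schema), but the collapse from per-statement structure to one flat token list mirrors the transformation described above:

```python
# Illustrative sketch of the lossy compact-policy derivation: the full
# XML policy ties purposes/recipients/retention to each data group,
# while the compact form collapses everything into one flat list.

full_policy = [
    # (data category, purposes, recipients, retention) per statement
    ("physical",  ["CUR"],        ["OUR"], "STP"),
    ("financial", ["CUS", "TAI"], ["OTR"], "IND"),
]

def to_compact(statements):
    tokens = []
    for category, purposes, recipients, retention in statements:
        tokens.extend(purposes)
        tokens.extend(recipients)
        tokens.append(retention)
    # Dedupe while keeping first-seen order; the association between a
    # specific data group and its purpose is lost at this point.
    return list(dict.fromkeys(tokens))

print(" ".join(to_compact(full_policy)))
```

After flattening, a reader of the compact policy can no longer tell whether it is the physical or financial data that is shared with other recipients (OTR) or retained indefinitely (IND), which is exactly the worst-case picture described above.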

So what was implemented in practice? Internet Explorer ended up being the only web browser supporting P3P, and a very specific subset at that. (Full disclosure: this blogger was involved in the standards effort and implementation in IE.)

  • IE uses compact policies for cookie management.
  • IE does not evaluate full XML policies or otherwise act differently based on the presence/absence of that document. It does not even make an attempt to retrieve the XML or verify its consistency against the compact policy. There is an option under the privacy report to retrieve the policy and render it in natural language, if the user went out of their way to ask for it. (Not surprisingly many sites only deployed compact policies, never bothering to publish the XML.)
  • No APPEL or other automatic policy evaluation triggers, for example before submitting a form or logging in to a new service when it would be a useful data point for the user.

Even with this subset, P3P had a significant effect on web sites because of its default settings. Belying the assertion that default settings are just that, easily modified by users who disagree with them, the default choice of “medium” privacy became the de facto standard for websites that depended on cookies. First-party cookies were given a wide berth– not requiring a compact policy and permitting existing usage to continue functioning without any changes– while third-party cookies without an associated satisfactory policy were summarily rejected. That means not only must advertising networks implement P3P, they must have a policy that meets the default settings for IE. Otherwise all of those banner-ads and iframes with punch-the-monkey animated Flash ads get stripped of their cookies, losing their capability to accurately track distinct users.

This is a great example of regulation by code as Lawrence Lessig described it brilliantly in “Code and other laws of cyberspace.” By choosing a particular default configuration in the most popular web browser, MSFT had established a minimum privacy bar for a segment of the online industry. (The irony is inescapable: at the same time that MSFT was trying to discredit Lessig in the antitrust trial, the engineers were busy providing a textbook example of his central thesis around regulation via West Coast Code.)

[continued]

CP

Secure elements and mobile devices

After the previous post covering NFC modes in Android, time to turn our attention to a closely related subject: the embedded secure element.

In principle a hardware secure element can be viewed as a completely independent entity, orthogonal to whether there is NFC capability on the same device. Sure enough such a “secure element” already exists in a good chunk of phones: the lowly SIM card, or UICC as it goes by its formal name, is a type of secure element capable of executing security-critical functions. Its raison d’etre is the storage of authentication keys for connecting to GSM networks, a scenario near-and-dear to the mobile carriers. But as is often the case, market demand has influenced hardware requirements: the driving force for including an SE (or even a second SE, counting the SIM for GSM devices) is tightly coupled to the primary NFC use case: contactless payments.

The secure element is a system-on-a-chip or SoC– which is to say that it has its own processor, RAM and persistent storage. It can be viewed as a tiny computer inside the main “computer” that is the smart phone. That in itself is not very remarkable, as the average phone contains plenty of such chips: everything from the Bluetooth adapter to the flash controller could arguably meet that definition. What differentiates the secure element?

  1. Locked-down operating system which cannot be directly controlled by the host device. In other words, Android OS even with root privileges cannot reflash the contents of the SE, read/write its memory or install new code. (Managing the SE requires privileged access authenticated by cryptographic keys for such operations.) For most other chips such restrictions are undesirable. For example, it is important that the Bluetooth controller can have its firmware updated locally as the OEM releases updates or bug fixes.
  2. Hardware tamper-resistance measures designed to guard against attacks that involve direct physical access to the chip. This includes intrusive attacks such as peeling open the chip to try to read its EEPROM directly, or attempting to cause glitches in execution by subjecting it to environmental stress: heat, over/under-powering, zaps with laser beams etc.
  3. Built-in root of trust, with unique identity. It is possible– for the parties armed with the right cryptographic keys– to authenticate an SE remotely and set up a secure channel where communications to/from that SE are not visible to even the host operating system.

Secure elements appear in any number of different physical form factors, ranging from the very familiar “smartcard” in ID-1 format (typical dimensions of credit-card) to USB tokens employed for authentication in enterprise settings. While these objects seem “large” in relation to the size of a mobile device, it should be noted that the bulk is not taken up by the electronics. (In particular, the brass-colored metal area on a smart card is not the size of the IC– those are the contact points for interfacing with a card reader, for which the dimensions are fixed by international standards.) The chip itself is tiny and continues to shrink over time as fabrication techniques improve. By contrast overall physical dimensions are subject to interop constraints, such as being wide enough to cover a USB slot.

In the spirit of experimentation, different form factors have been tried for incorporating a secure element into a mobile device:

  1. SIM card and its smaller brethren found in the iPhone (because the Apple design has to be different and incompatible)
  2. MicroSD cards which include a secure element, such as the Giesecke & Devrient Mobile Security Card and Tyfone SideSafe designs. These combine mass storage suitable for the SD slot on a phone with a secure element accessed over the same interface. (Tyfone even boasts a version with integrated NFC.)
  3. Embedded SE coupled to NFC controller– this is the Android architecture, where the secure element is part of the phone.

The list does not even include ways that an external SE can be used in conjunction with the phone. For example there have been mobile payment designs based on stickers, where a sticker containing an SE and integrated NFC antenna is applied to the back of the phone. (These end up being relatively thick, because a layer of ferrite is necessary to separate the antenna from metal on the back of the phone.) Likewise the US government adoption of smartcards with the CAC and PIV programs has inspired highly awkward-looking sleeves and Bluetooth card-readers designed to allow reading such cards from a mobile device.

CP

Android and NFC modes

Quick note about the different modes for NFC usage supported in Android:

  1. Reader/writer mode. This is probably the most common scenario. The host device functions as the active participant, while on the other side is a passive tag powered by the induction field generated by the phone. Examples include scanning a URL from a tag (such as Pay-By-Phone stickers on parking meters in SF) or reading information from a US passport. The Android NFC stack provides extensive support for this mode in a callback model: applications can register to receive notifications on discovery of tags, either by NDEF content or tag type– such as Mifare classic tags or all ISO 14443 smartcards.
  2. Peer-to-peer, the basis for Android Beam. It is not possible to directly use this mode via the Android API on either the sender or recipient side. Instead applications can declare an NDEF record to be transmitted if a beam-transfer is initiated. The stars have to align for this: another device must be in RF range of the phone, and the transfer confirmed by the user by tapping the screen at the right instant. (There is also an optimization to register a callback that creates the NDEF record on demand, without committing to it in advance.) On the recipient side, Beam is handled directly by the NFC service by invoking the right application.
  3. Card-emulation. In this mode the phone emulates an NFC tag. Specifically, for Android card-emulation means routing communication from an external NFC reader directly to the embedded secure element, which can appear either as a 14443 contactless smart-card or a Mifare classic tag. The host operating system is completely out of the picture: the traffic goes directly from the NFC antenna to/from the SE, without traversing the Android path at all. It follows that applications on the host OS have no control over the data exchanged in this model, except indirectly by influencing the behavior of applets present on the SE. Card emulation is used by Google Wallet to execute contactless payments with the secure element, as well as offer redemption. By default card-emulation state is tied to the state of the screen: CE is on only when the display is on. (As an aside: the screen does not have to be unlocked. This is in contrast to reader/writer mode, where the polling loop will not operate when the screen is locked. For this reason it is not possible to scan tags without first getting past the Android screen lock, while tap-payments can be initiated by simply turning on the display and holding the phone against an NFC reader.)

Card emulation mode is particularly interesting because it allows the phone to function as a smart-card and substitute for single-purpose dedicated cards that were traditionally used in scenarios such as transit, physical access control and identity/authentication. In other words, subsuming the capabilities of an EMV contactless credit-card is the proverbial tip of the iceberg.

CP

GoDaddy outage and lessons for certificate revocation (2/2)

Windows includes a helpful utility called certutil that serves as a Swiss-army knife for trouble-shooting PKI problems on that platform. Its -urlcache option can be used to look at URL cache entries, where previously obtained OCSP responses and CRLs are stored. By running this query and looking for objects associated with GoDaddy, one can determine the extent of revocation information that would have been available to the client locally if further network requests were ruled out.

Running this experiment on a couple of actively used Windows 7 machines shows a decidedly mixed record:

  • On one machine there were no GoDaddy entries at all. In this case all revocation checks for GoDaddy sites would have failed.
  • On another laptop, there were two dozen OCSP responses as well as CRLs for the root and intermediate issuers.

Actively used is the operative phrase here, because paradoxically the effectiveness of revocation checking as implemented on Windows is directly correlated with its frequency of use. The chain-building engine contains sophisticated optimizations on when to prefer CRL over OCSP (if multiple certificates are checked for a given issuer, it becomes more efficient to download the CRL) and also on which issuers are most frequently observed, to allow prefetching those OCSP responses/CRLs ahead of time before the current ones expire.

(As an aside, this makes revocation checking something of a cooperative enterprise between multiple applications on the machine. Everyone wants to avoid doing a costly CRL/OCSP check over the network, hoping that a response is already in the cache. But to the extent that applications skip revocation checking or instruct CAPI2 to use offline checks based on cached information only, the chances of that happy condition occurring go down. This is why applications such as Chrome which “defect” from revocation checking are doing a disservice to other applications using the feature.)

The sensitivity of caching to navigation patterns is helpful. Any website the user visits often will likely have an OCSP response cached, helping tide over any temporary outages of the certificate issuer when visiting those sites again. In fact if the user happened to visit many sites with GoDaddy-issued certificates, it may even exceed the threshold where CRL download is triggered, covering all sites– including those not yet visited– affiliated with that issuer. While navigation history is highly clustered around particular sites, which makes the first case realistic, there is no reason to expect that the multiple sites a user visits are more likely to have certificates issued by the same CA.

There is one more ray of hope: OCSP stapling. This is an SSL extension that permits the server to return a recent OCSP response to the client, saving the client from having to do the lookup on its own. In principle this would also increase resilience against outages of the OCSP responder, as long as the server has a fresh response obtained prior to the outage. (This still has edge-cases around a brand-new server being deployed from scratch or perhaps rebooted during the outage. Typically it would need to reach an OCSP responder as part of initialization.) In reality the less-than-stellar uptake of this optimization outside the Windows platform means it would have been of limited use in the GoDaddy debacle. This may change in the near future. For example nginx recently announced support for OCSP stapling.
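For reference, enabling stapling on nginx takes only a few directives. This is a sketch of a server-block fragment, assuming a stapling-capable nginx build; the certificate path and resolver address are placeholders:

```nginx
# Inside an SSL-enabled server block (paths/addresses are placeholders):
ssl_stapling on;
ssl_stapling_verify on;
# Chain used to verify the stapled OCSP response:
ssl_trusted_certificate /etc/nginx/certs/issuer-chain.pem;
# Resolver used by nginx to reach the CA's OCSP responder:
resolver 8.8.8.8;
```

With this in place, nginx fetches and caches the OCSP response itself and staples it into the TLS handshake, so clients never need to contact the responder directly.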

CP

GoDaddy outage and lessons for certificate revocation (1/2)

One of the unintended side-effects of the recent GoDaddy outage was providing a data point in the debate around whether certificate revocation checks can be made to fail hard. In addition to being the registrar and DNS provider for several million websites, GoDaddy also operates a certificate authority. The outage inadvertently created an Internet-wide test of the ability of revocation checks to operate in offline mode.

Quick recap: when establishing a secure connection to a website using SSL, web browsers will verify the identity of that site, presented in a format known as X509 digital certificates. Most of these checks can be done locally and are very efficient in nature. For example: verifying that the certificate has been issued by a trusted authority, that it is not expired and that the validated name specified in the certificate is consistent with the name of the website the user expected to visit– eg what appears on the address bar. But there is an additional check that may require looking up additional information from the web: verifying that the certificate has not been revoked since its time of issuance.

It is fair to say that web browser developers hate revocation checking because of its performance implications. The web is all about speed, with each browser vendor cherry picking their own set of performance benchmarks. Over time an extensive bag of tricks has been developed to squeeze every last ounce of bandwidth available from the network and fetch those webpages an imperceptible fraction of a second faster than the competing browser. Revocation checking throws a wrench into that by stopping everything in its tracks, until information about the validity of the certificate has been retrieved. (In fact it is more than one connection that is stalled: one of the standard speed improvements involves creating multiple connections to request resources in parallel, such as the style-sheet and images concurrently. All of these are blocked on vetting the certificate status.)

Almost always, the revocation checks pass uneventfully. In rare cases the client may discover that a certificate was indeed revoked, saving the user from a man-in-the-middle attack, although there have been no recorded cases of that happening in recent memory. (For the most epic CA failures such as DigiNotar incident, the certificates in question were blacklisted by out-of-band channels and the attacks stopped as soon as they were publicized, long before revocation checks could save the day.) Then there is the third possibility of a gray area: revocation check is inconclusive because the network request to ascertain the certificate status has failed. A conservative design favors failing-safe and assuming the worst. In reality, due to historically low confidence in services providing revocation status (imagined or real) most implementations fail open, and assume the certificate is not revoked. For example Internet Explorer does not even warn about failed checks in the default configuration. This is the basis for rants to the effect that revocation checking is useless— after all, in any scenario where an adversary has enough control over the network to orchestrate a man-in-the-middle attack, they are also capable of blocking any traffic from that user to the revocation provider.

Luckily the situation is not quite as bleak as described above. There are several optimizations in place to avoid costly, unreliable network lookups for each certificate validated. First there is aggregation within each CA: a certificate revocation list or CRL contains a list of all revoked certificates for that issuer. This spreads the network cost of a revocation check across multiple certificates. This list in turn can be broken up into incremental updates called delta CRLs, to avoid downloading an ever-expanding list each time. Finally each update has a well-defined lifetime, including an expected point when the next update will be published. Combined with the fact that CRLs are signed, they can be cached both locally and by intermediate proxies at the edge of the network for scaling.
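The caching behavior that makes hard-fail plausible can be sketched as follows. This is a simplified model, not the actual CAPI2 logic; the class and function names are invented for illustration:

```python
# Minimal sketch of cached-CRL logic: a cached CRL (or OCSP response)
# remains usable until its nextUpdate time, so a client can ride out a
# responder outage that ends before the cached copy expires.

from datetime import datetime, timedelta

class CachedCRL:
    def __init__(self, this_update, next_update, revoked_serials):
        self.this_update = this_update
        self.next_update = next_update
        self.revoked = set(revoked_serials)

    def is_fresh(self, now):
        return self.this_update <= now < self.next_update

def check_revocation(serial, cache, now, hard_fail=True):
    if cache is not None and cache.is_fresh(now):
        return "revoked" if serial in cache.revoked else "good"
    # Cache expired or absent, and (in this sketch) the network is down.
    return "reject" if hard_fail else "unknown"

now = datetime(2012, 9, 10, 12, 0)
crl = CachedCRL(now - timedelta(days=1), now + timedelta(days=6), {0x1234})
print(check_revocation(0x1234, crl, now))   # cached CRL says revoked
print(check_revocation(0x5678, crl, now))   # not on the list
print(check_revocation(0x5678, None, now))  # no cache, hard-fail rejects
```

The last case is the crux of the GoDaddy episode: a hard-fail client with an empty or expired cache rejects connections outright, while a fail-open client silently treats the same situation as "good."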

The story gets more complicated when considering there is another way to perform revocation checking: the online certificate status protocol or OCSP. This is a more targeted approach to querying trust status– instead of downloading a list of all the bad certificates, one queries a server about a particular certificate identified by serial number. OCSP does not amortize cost over multiple queries, since finding the status for one website does not help answer the same question about a different one. On the bright side, OCSP responses also have well-defined lifetimes much like CRLs, obviating the need for additional queries during that period. Also in the spirit of CRLs, they are signed objects, permitting caching by intermediate proxies to decentralize distribution.

All of this would suggest that perhaps clients could cope with a temporary outage of revocation servers, even if they opted for the hard-fail approach. (Recall that hard-fail means connections will not proceed without positive proof that the certificate was not revoked.) In principle all that caching could permit users to still visit websites when the revocation infrastructure becomes unreachable– when both the CRL distribution point (CDP) and OCSP responder are down– exactly what happened to GoDaddy.

The question is, how well would that have worked out for GoDaddy during its outage?
Not very well, it turns out.

[continued]

CP

Going walletless and the NFC convergence

Wired magazine contributor Christina Bonnington has bravely committed herself to running a month-long experiment in living without a traditional wallet, substituting a smart phone for all the functions associated with that familiar object. That includes not only foregoing payments– neither cash nor standard credit cards are permitted– but also identification, transit and coupons. At first sight, this is the type of cutting-edge experiment that could go one of two ways. In the best case it could become a remarkable exercise in pushing the envelope for existing mobile payments technologies, demonstrating how close one can get to a pure digital wallet simply by leveraging options on the market today in creative ways. (Bonnington is armed with an iPhone and a Galaxy Nexus capable of running Google Wallet. There are no unreleased/secret apps in the mix that would not be otherwise available to the audience, modulo the usual app compatibility constraints.)

Or it could become the mobile-payments equivalent of the failed Biosphere mission, where a group of scientists locked themselves into a sealed ecosystem that was going to be self-sufficient, but eventually had to terminate the mission prematurely as conditions inside deteriorated. There is a risk to running such an experiment too early in the development cycle of a technology, before it has achieved the critical mass of adoption needed to wean off competing legacy alternatives. In fairness the setting for this experiment is already optimized for success: San Francisco boasts a dense urban core, and the Bay Area has traditionally served as an early adopter of innovations. A quick check shows plenty of locations accepting pay-with-Square, MasterCard PayPass terminals compatible with Google Wallet (Peet’s Coffee, Walgreens and Whole Foods alone go a long way) and LevelUp locations, among other popular mobile payment options.

With that in mind, one can optimistically look at other scenarios where smartphones hold the promise of some day replacing and consolidating their traditional equivalents:

  • Transit. From the introductory piece: “My usual modes of transport, San Francisco’s Muni bus lines and BART rail system, require a card. So I’ll be doing more walking, biking, and driving.” In an ideal world she need not give up on BART or Muni. After all both of them use the Clipper card, which is based on MIFARE DESFire technology running over NFC. Mifare emulation is already possible with Android phones, as demonstrated by offer redemption with Google Wallet using single-tap transactions. There remain compatibility issues with the current generation of hardware, as well as provisioning challenges– how to deliver the transit credentials safely to a phone over-the-air, comparable to handing users a plastic card.
  • Event badging. In this year alone, this blogger attended two conferences where the badges incorporated NFC (BlackHat briefings in Las Vegas, and the RSA conference in San Francisco, both using Mifare classic tags). Lest we assume this trend is confined to technology conferences: the three-day music festival Outside Lands at Golden Gate Park used NFC tags for passes. In principle all of these roles can be delegated to the smartphone.
  • Physical access. Another observation from BlackHat: the Aria hotel uses NFC tags for room keys. Again this can be incorporated with the Mifare emulation capability in NFC hardware, once the provisioning challenges are solved. That means one day checking into a hotel no longer requires a stop at the reception to pick up keys: credentials to unlock the room are delivered over-the-air to the guest’s smartphone before they arrive. Bonnington also noted in her disclaimers: “I will not be ditching my house keys.” While Yale Locks announced a door lock that opens via NFC, hotels are more likely to see adoption of such solutions compared to private residences. The set of individuals with authorized access to a particular apartment or house changes rarely. By comparison there are a lot more efficiency gains possible in the hotel industry from improving on card-key access.
  • Employee badges. A closely related scenario for access to shared spaces: office buildings. Many of these are transitioning from the ancient proprietary 125 kHz RFID tags pioneered by HID to more standard solutions running on the NFC frequency. In principle these can be replaced by smartphones as well. Popular card-readers controlling access to doors in office buildings are designed to accept NFC cards. For example many of the HID readers are compatible with the US government PIV standard, which uses standard NFC communication in ISO-7816 mode already supported by existing Android hardware.

CP

Digital River, Microsoft and code-signing failures

In the wake of the recent Adobe code-signing debacle, this is a good time to revisit other failure modes of code signing. Recently this blogger tried downloading an evaluation copy of Microsoft Office and noticed a strange warning dialog about the installer being signed by “Digital River.” (Granted, that could make the author one of 5 people in the world paying attention to such warnings– and not proceeding with running the installer as a result.)

Who was Digital River and why would software published by MSFT carry a signature of any corporate entity other than MSFT itself? From a pure authentication perspective, the situation looked indistinguishable from a man-in-the-middle attack, where some nebulous attacker on the network observed the download request and clumsily substituted a Trojaned application instead, hoping the user would not notice the difference. The code was downloaded straight from the official Office website, a page that is not served over SSL. Or perhaps the servers hosting the application had been breached and started distributing malware to unsuspecting users hoping for free copies of Office.

Cursory Googling revealed a more benign, mundane explanation that did not involve malfeasance: Digital River hosts the official online MSFT store, serving as the distribution channel for purchasing software via the direct-download model. (Top search results include a forum post from 2009 featuring an irate customer, titled “digital river does not deserve to be microsoft default agent”.) But the same search also turned up a disturbing presentation on the F-Secure website by Jarno Niemelä, dating back to 2010. The good news: it confirmed that Digital River does in fact handle software distribution for many publishers besides MSFT, and even digitally signs the applications on their behalf– that would explain the Authenticode dialog above. That alone does not make it safe to proceed past the dialog: all it means is that trust in the purported Office installer is only as good as the trust in all other software signed by DR. After all, any one of the other applications bearing the identical signature could have been substituted in its place, if the only criterion for establishing trust is that certificate. What else has DR signed? That is the really bad news: DR had been caught signing malware, as well as installers which were effectively open-ended: meta-installers designed to invoke other installers from third-party URLs that DR had no control over.

Vouching for the integrity of applications one has no control over is at best extreme naivete, and at worst willful negligence under the guise of solving a problem for software publishers. There is no question that unsigned code is a user-experience problem: web browsers and A/V software react differently, presenting danger-Will-Robinson warnings when confronted with applications of unknown origin. That is by design. “Solving” that problem by having another company sign anything thrown its way undermines any security benefit of code authentication. These technologies are rooted in the principle that trust– or lack thereof– in software is derived from trust in the identity/brand of the publisher. When that identity is laundered by having some other entity such as Digital River put its own brand on the product without conducting due diligence, it removes any semblance of accountability from the original author.

Returning to the example that served as the jumping-off point for this post, Office derives its credibility from having been authored by MSFT– not by virtue of being distributed by Digital River. That same code carries exactly the same degree of trust regardless of its download location. In fact being signed by Digital River subtracts from the credibility of the code, which is exactly the opposite of the intended effect. An up-and-coming software company with no brand recognition might benefit from using the service: after all, anything beats the unsigned-code warning. (But then again, Digital River is not exactly a household name either. The only reasons to prefer that over having your own certificate could be the cost and difficulty of implementing code signing– just ask Adobe.) In the case of MSFT, there is a net loss of trust in the end product.

Luckily Office 2013 preview is also available for download. It carries the expected MSFT signature.

CP

Bringing cloud identity to the PC (2/2)

With Windows 8 released to manufacturing and available for download from MSDN, this is a good time to complete the post on using cloud identity in a traditional PC operating system. As MSFT announced on the Building Windows blog almost a year ago, Windows 8 will support signing in with Windows Live ID, now rebranded as Microsoft ID. Instead of creating local accounts, users can now authenticate to a Windows 8 machine using their existing cloud account.

Of course such integration is far from novel, with many examples of familiar consumer devices that have tight integration with a cloud authentication service, in some cases requiring users to authenticate with such an account to set up the device in the first place:

  • iOS and its use of an Apple account on iPod/iPhone/iPad
  • Android and its integration with Google accounts. In fact Android has an extensible account-manager concept: it allows defining additional cloud identity providers by having installed applications act as account authenticators, which can be invoked by any other app. (Looked at another way, Android re-invented the SSPI model that Windows has supported since NT4, though never quite with the level of interchangeability its designers hoped for– no new ideas under the sun.)
  • More recently Chrome OS and similar integration with Google accounts

In all cases, this identity becomes an integral part of device functionality when accessing cloud-based features: for example it is used to back up settings, migrate to a new device, download email and calendar entries, and make purchases in the respective app markets. This requires a level of integration between the OS and applications, such that after logging into the OS once, the user is automatically also logged into cloud services without having to explicitly type their password again. Without such automatic transfer of authentication state, the initial login would become pure window dressing that only grants access to local system resources. Luckily such seamless integration exists in Windows 8: after logging in, the mail application transparently downloads mail from Hotmail, SkyDrive can access saved files, Messenger can display presence information for contacts and Internet Explorer can open web pages requiring Live ID as already authenticated. In fact, as long as the functionality is implemented as a standard SSP, it becomes available to third-party applications for creating apps that access user data stored in the MSFT cloud.
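This single sign-in pattern can be sketched as a toy model in Python. (The `SessionBroker` class and its token derivation below are illustrative assumptions for exposition, not the actual Windows 8, SSPI or Live ID mechanism.)

```python
import hashlib
import hmac


class SessionBroker:
    """Toy model of single sign-in: one interactive login establishes a
    session, and per-service tokens are minted from it on demand, so
    individual apps (mail, storage, messaging) never re-prompt for the
    password."""

    def __init__(self) -> None:
        self._session_key = None

    def login(self, password: str) -> None:
        # Stand-in for the real cloud handshake; a production system would
        # authenticate against the identity provider and receive a key back.
        self._session_key = hashlib.sha256(password.encode()).digest()

    def token_for(self, service: str) -> str:
        if self._session_key is None:
            raise PermissionError("user has not logged in")
        # Derive a distinct token per service from the single login.
        return hmac.new(self._session_key, service.encode(),
                        hashlib.sha256).hexdigest()
```

One design point the sketch captures: each service gets a different derived token, so a leak from one application does not hand out the credential used by the others.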

There are also differences. The first is that Windows supports local accounts, and the user may be upgrading a Windows 7 box– because nobody is running Vista– already configured with one. This introduces a requirement to retroactively associate an existing account with a cloud identity. Mobile devices started out with the assumption of cloud connectivity, and a clean slate to define their identity scheme. Second, the user experience is different: on mobile devices user authentication is rare for good reason: phones have awful virtual keyboards that make typing plain English painful, much less a strong password containing a random mixture of symbols and digits. (While the Android screen-lock can be configured with a passphrase, this is logically not the same as the Google account password.) With Windows 8 and Chrome OS even unlocking the screen locally can involve some type of authentication, making this ritual more frequent. That also creates a challenge in having to support offline mode: since the device may not have network connectivity at all times, it still has to authenticate the user’s cloud identity without the benefit of reaching the cloud.

Offline mode is not a new problem: similar issues existed for the bread-and-butter protocols Windows supported before (NTLM and Kerberos) and can be solved by locally caching password hashes, at the well-known risk of enabling brute-force attacks against these cached copies. But some credentials cannot be checked offline. An example is the one-time password or OTP codes used for Google 2-step verification: since these are meant to be dynamically generated each time, caching is not applicable and only Google knows what the next code in the sequence is. MSFT has a different concept called single-use codes for Live ID, which is not a secondary factor but replaces the password. It is unclear whether these still work for login in the connected state; they will likely not work in offline mode.
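The password-hash caching approach can be sketched in a few lines of Python: on a successful online login the device stores a salted, deliberately slow hash, and later offline attempts are verified against that cache. (A minimal sketch under stated assumptions– PBKDF2 with arbitrary parameters– not the actual NTLM/Kerberos or Windows 8 cache format; `cache_credential` and `verify_offline` are hypothetical names.)

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slow hash, to blunt brute-force attacks on a stolen cache


def cache_credential(password: str) -> tuple:
    """Called after a successful *online* login: store salt plus PBKDF2 hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_offline(password: str, salt: bytes, digest: bytes) -> bool:
    """Later, with no network: check the attempt against the cached hash."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(attempt, digest)
```

The iteration count is exactly the knob that trades login latency against the cost of brute-forcing a stolen cache– the risk noted above.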

Stepping back, such tight coupling between the OS and a particular cloud-identity provider also creates a natural “nudge” for users to favor cloud services authenticated by that identity, since the applications “just work” without additional setup. Consider the difference between having to sign in to a third-party email or instant-messaging service, versus going with the path of least resistance and using the built-in variant that is automatically signed in. Granted, most applications “solve” this problem with a strong bias for saving passwords (as well as annoying opt-out settings to automatically launch as soon as the user logs in). This may level the playing field for user experience at the expense of security: instead of refreshing credentials over time, they rely on a password or long-lived token to create the illusion of automatic sign-in. Of course in the case of Windows 8, those cached credentials are already at the mercy of Live ID if the user enables one of the highly touted features: synchronization of saved passwords across multiple machines, as long as the user is signing in with the same Live ID, similar to Chrome synchronizing website passwords.


CP

Credit card authorization: compatibility of CVC1 and CVC3

As discussed in the second post in the series, the magnetic-stripe profile of certain EMV payment protocols such as PayPass produces data in a format similar to what swiping a traditional plastic card might yield. This backwards compatibility has undeniable advantages for transitioning to the more advanced protocols. For example, point-of-sale terminals can have NFC capability added as a bolt-on accessory, without altering the transaction further downstream.

The downside of this seamless transition is that there is no strict firewall separating NFC payments with dynamic CVV3 from swipe transactions that rely on static CVV1 for authorization. It is possible, albeit inconvenient, to encode NFC transaction data on a plain card and use it at a regular point-of-sale terminal. This explains the replay demonstration that Kristin Paget performed at Shmoocon in January 2012. Intended to prove the ease of skimming information from contactless credit cards, this stunt actually serves as an unintended proof of the interchangeability of CVV1 and CVV3. The researcher executed one round of the payment protocol between a hypothetical victim’s contactless plastic card and an NFC reader controlled by the researcher. This transaction does not actually cause any money to change hands. It is never reported to the payment network. It is only used to record a transcript of the protocol. (This is the often-hyped “skimming” part: in the US many cards have no additional protection such as a PIN required to complete the transaction, unlike in Europe where it is common to require PIN entry above a certain transaction threshold.) The transcript is then used to construct track data, encoded on a plastic card and swiped through a Square reader, to conduct the actual “fraudulent” transaction.

To be more precise– and this is where details of the protocol become important– it is not the case that a CVV3 can be substituted wherever the CVV1 appears on track data. The individual fields are not identical between the standard plastic card and the emulated stripe from a contactless payment. (Among other things, a contactless payment includes a transaction counter or ATC that is echoed in track data, incrementing for each transaction.) Rather the complete, unaltered track data containing the one-time generated CVV3 can be substituted in place of static track data containing CVV1. The Shmoocon demonstration was a beautiful example of how the entire payment stack– starting with the custom hardware of the Square reader, the payment-processing backend of Square and the acquiring bank involved (Chase Paymentech, for all Square transactions)– was oblivious to the change in form factor. The whole point of backwards compatibility is that at some point far enough downstream from the terminal, everything looks identical. Square is likely not alone in creating this type of avenue for contactless payments to be tunneled over plain magnetic stripes via swipe transactions.

That is not to say that the discrepancy could not have been detected along the way by one of the participants in the chain. Backwards compatibility is not the same as indistinguishability. Specifically the track-data format includes a service code, with different values defined for cards containing chips, designated as “integrated circuit” in the standards. Track data containing this value arriving over a magnetic-stripe reader is immediately suspect. In principle either the Square hardware or, if track data is visible at all within the application, the associated mobile app could have rejected it right away. Even further upstream, the payment processor often knows exactly what type of point-of-sale terminal exists at a merchant location– because the hardware is provisioned as part of a packaged offering from the processor. Arrival of track data containing CVV3 from that merchant would then serve as a strong signal of a problem if the merchant is known not to be capable of contactless payments. (One complication is whether card issuers have taken care to define different service codes for cards that have both a traditional magnetic-stripe and an IC component. If the standard magnetic stripe on the card has the same service code as the one emulated by the chip, this heuristic fails. In that case additional heuristics can be used, such as the presence of an ATC or the appearance of the CVV on two tracks, as opposed to track #2 only.)
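The service-code heuristic can be sketched in Python, assuming the ISO/IEC 7813 track-2 layout (PAN, ‘=’ separator, four-digit expiry, three-digit service code, discretionary data) and the convention that a first service-code digit of 2 or 6 designates an integrated-circuit card. (`suspicious_swipe` is a hypothetical name, and the PANs below are synthetic.)

```python
# First service-code digit 2 or 6 indicates a chip (integrated circuit) card
# under the track-2 conventions assumed above.
IC_FIRST_DIGITS = {"2", "6"}


def suspicious_swipe(track2: str) -> bool:
    """Flag track-2 data whose service code says 'chip card' when it
    arrived over a plain magnetic-stripe read."""
    _, _, rest = track2.partition("=")  # rest: expiry + service code + discretionary
    service_code = rest[4:7]            # skip the 4-digit expiry (YYMM)
    return service_code[:1] in IC_FIRST_DIGITS
```

A terminal or mobile app applying this check could reject the swipe outright; the parenthetical caveat above still applies when the physical stripe and the emulated one share a service code.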

One final point about replaying CVV3 track data: each protocol execution can only be used to create one valid magnetic stripe. This is because of the ATC or transaction counter, which is incremented each time the payment protocol is run. In principle that means the enterprising criminal has to capture multiple transactions while in the vicinity of the contactless card, and then go through the trouble of re-encoding the magnetic stripe with different track data between purchases. This is independent of any other restrictions the issuing bank may have around the ATC that create additional defenses against fraud: for example, if the victim uses their card after the skimming but before the attacker has gotten around to conducting a transaction, all of the track data captured by the attacker sports a counter less than the most recent one the bank has observed in a successful transaction.
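The issuer-side defense can be sketched as a strictly increasing counter check. (A toy illustration with a hypothetical `IssuerATCCheck` class; real issuers apply windowing and other tolerances rather than this bare rule.)

```python
class IssuerATCCheck:
    """Toy issuer-side defense: the ATC must strictly increase per card,
    so a replayed capture or a stale skim (counter <= the highest one
    already accepted) is rejected."""

    def __init__(self) -> None:
        self._last_atc = {}  # PAN -> highest ATC seen in an accepted transaction

    def authorize(self, pan: str, atc: int) -> bool:
        if atc <= self._last_atc.get(pan, -1):
            return False  # stale or replayed counter
        self._last_atc[pan] = atc
        return True
```

Under this rule, the moment the legitimate cardholder transacts, every previously skimmed transcript for that card becomes worthless.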

CP