Target breach, credit card security and NFC (part II)

[continued from part I]

Mutatis mutandis

The key difference between the simulated magnetic-stripe data produced by NFC payment protocols such as PayPass and a physical stripe is that track data constructed this way is not completely static. It changes slightly for each transaction, based on the state of the card and inputs dictated by the reader. Security against compromised point-of-sale terminals– what happened at Target– depends partly on the protocol and partly on issuer-specific policies decided by individual banks.

Here is what does not change between transactions:

  • Credit-card number: These protocols do not implement one-time-use card numbers or any other scheme designed to hide the “real” card number from the merchant.
  • Expiration date: Ditto.
  • Card-holder name: This is the field used to print the customer name on receipts. For contactless payments issuers are often required to redact this field to a generic label such as “customer,” possibly as a reluctant PR response to the hysteria around NFC skimming stories. (Redaction has an unexpected interaction with certain retailers’ practice of asking for ID during check-out: if the register displays no name, what would the cashier compare the name on the driver’s license against?)

Instead the variable fields are:

  • ATC (Application Transaction Counter): Incremented for each transaction, regardless of whether the payment was successfully authorized. The protocol defines exactly at what point in a transaction the ATC is incremented: usually it is not incremented automatically when the payment application is selected, but only later, when the card is asked to compute the dynamic CVC3 described below.
  • UN (Unpredictable Number): This is effectively a challenge issued from the NFC reader to the card. It is picked unilaterally by the reader.
  • CVC3, also known as dynamic CVC: This is the response generated by the card to the reader’s challenge, serving as an integrity check over the data. CVC3 is of variable length based on provisioning parameters, typically three to five digits similar to the CVC1/CVC2 format. It is a function of a fixed IV representing card data, the UN, secret cryptographic keys provisioned on the card and optionally the ATC, depending on card configuration. (More precisely the ATC is always included in the computation but set to all zeroes when not in use.) The sketch after this list illustrates how these inputs fit together.
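
To make the data flow concrete, here is a minimal Python sketch of the inputs feeding into CVC3. This is emphatically not the actual PayPass algorithm: HMAC-SHA256 stands in for the card’s 3DES-based computation, and the field widths, IV handling and digit extraction are simplifying assumptions.

```python
import hashlib
import hmac

def dynamic_cvc3(card_key: bytes, iv_cvc3: bytes, un: int, atc: int,
                 digits: int = 3) -> str:
    """Schematic CVC3: a keyed function of the fixed IV (representing card
    data), the reader's unpredictable number (UN) and the application
    transaction counter (ATC). Profiles that do not use the ATC pass zero."""
    msg = iv_cvc3 + un.to_bytes(4, "big") + atc.to_bytes(2, "big")
    mac = hmac.new(card_key, msg, hashlib.sha256).digest()  # stand-in MAC
    # Reduce the MAC to a short decimal value, mimicking the 3-5 digit format.
    return str(int.from_bytes(mac[:4], "big") % (10 ** digits)).zfill(digits)
```

Note how a compromised terminal that records one (UN, ATC, CVC3) triple has captured a response valid only for that particular challenge and counter value.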

Armed with this outline and without going into the details of how CVC3 is computed– suspending disbelief for now that the cryptography is sound, never mind that a 3-5 digit “integrity check” is subject to brute-forcing– we can begin to speculate on what would happen to a hypothetical customer paying via NFC at a compromised Target point-of-sale terminal.

Fungible payment modes

The first surprise is that paying via NFC does not by itself confer immunity against attacks using other payment modes. One might assume that whatever attacks are carried out against a contactless card will at best allow making fraudulent payments at other NFC terminals, but not plain swipe or card-not-present transactions on the Internet. (This would be a “reduction” of risk only to the extent that NFC deployment remains relatively rare, which is realistically true today.) But the premise is false. As explained in earlier posts, there is by-design interchangeability between simulated track data from NFC and ordinary magnetic stripes. It is possible to copy track data containing ATC/UN/CVC3, encode it on a plain plastic card and use that in a swipe transaction. This will look correct to the reader, terminal and payment processor, all the way to the issuer responsible for authorizing the charge.

Whether or not the charge is accepted depends on issuer configuration. In addition to receiving track data, the issuer also receives meta-data about the payment indicating the payment mode, such as swipe, NFC tap or manually keyed-in (when the stripe is damaged, for example). On top of that, track data includes a service code which is typically different between physical magnetic stripes and their NFC incarnation. Neither of these can be influenced by the customer doing the swiping: payment mode is determined by the terminal, while tampering with the service code will invalidate the CVC3. At least in theory, issuers can cross-check payment mode against specific fields of track data, rejecting attempts to replay NFC transactions on plain plastic cards even if they carry valid CVC3 codes.
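
A hypothetical issuer-side check along these lines might look like the following sketch. The function name and the service-code values are purely illustrative assumptions; actual service codes and authorization logic vary by issuer.

```python
# Illustrative mapping: assume the issuer provisioned distinct service codes
# on the physical stripe vs. the simulated track data used for NFC.
EXPECTED_SERVICE_CODES = {
    "swipe": {"101", "201"},  # codes assumed for physical stripes
    "nfc":   {"102", "202"},  # codes assumed for NFC-simulated tracks
}

def plausible_payment_mode(reported_mode: str, service_code: str) -> bool:
    """Flag transactions where the terminal-reported payment mode does not
    match the service code in track data, e.g. NFC data replayed via swipe."""
    expected = EXPECTED_SERVICE_CODES.get(reported_mode)
    return expected is not None and service_code in expected
```

As the next paragraph notes, the weak link is the quality of the reported mode: if terminals routinely misreport it, the issuer can not afford to act on a mismatch.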

But attacks dating back to 2012 demonstrated in the field that many issuers are not performing that check.** This is possibly because many terminals are incorrectly configured, reporting the wrong mode even under normal operation, which makes issuers highly reluctant to reject transactions unless the probability of fraud is high. (Keep in mind that false positives– rejecting valid transactions as fraudulent– have a tangible revenue impact for the issuer, which earns interchange fees from each payment.)

In summary, we have to consider attack strategies that attempt to monetize captured data in any possible payment mode: other NFC terminals, plain swipe transactions and even online purchases.

[continued]

CP

** Interestingly the converse is also possible, as demonstrated by a former colleague of this blogger. That engineer read the static track data out of a plastic card and programmed an NFC application to use it as a template, with clever tricks to prevent the CVC3 from overwriting any part of the static data. Surprisingly it worked for some combinations of card issuers and payment terminals.

Target breach, credit card security and NFC (part I)

This holiday season has not been kind to retailers. First Target experienced one of the worst data breaches in recent memory, with the damage toll continuing to rise and the scope threatening to expand: initial estimates of 40 million cards covering only in-store purchases grew when the retailer added another 70 million online shoppers, with mailing addresses, phone numbers and email addresses included in the compromised data. Then the upscale retailer Neiman Marcus jumped into the fray, announcing that it too had experienced an intrusion resulting in the loss of credit card data. Not to be outdone, the crafts supplier Michaels announced that it had detected a successful attack resulting in the loss of credit card data.

The great chip & PIN diversion

In a subtle attempt to shift blame back to the credit-card networks, the Target CEO went on the record to praise the virtues of chip & PIN cards. Kim Zetter of Wired quickly pointed out a slightly inconvenient fact: Target rejected a program back in 2004 to upgrade point-of-sale terminals to accept chip & PIN technology. It turns out there is even more recent evidence of how little Target cared about supporting new payment technologies: very few of its stores accept NFC payments. Contrast this with Walgreens, CVS and Whole Foods, which accept contactless payments at most locations. This state of affairs casts some doubt on Target’s avowed commitment to chip & PIN, because NFC is a bridge technology to full chip & PIN: NFC-enabled debit/credit cards, as well as mobile incarnations such as Google Wallet, implement simplified versions (“profiles,” more accurately) of the same EMV protocols.

Would NFC have protected Target customers?

Answering this question requires a closer look at the protocol. At first blush it seems that one can achieve much better security with contactless payments, at least against the specific risk Target customers faced: compromised point-of-sale terminals. A plastic card is a passive, inert object encoding static data. By contrast NFC payments involve smart cards with embedded chips– in other words miniature computers– or even full-scale mobile devices such as phones. Being programmable environments, they can support elaborate payment protocols leveraging strong cryptography. The question is how far that promise is realized in current deployments.

Protocols on paper and in the field

EMV defines the umbrella standard for contactless payments, while each payment network has slight tweaks and a proprietary brand for its variant: Mastercard PayPass, Visa payWave, American Express ExpressPay and Discover Zip. These protocols are not exactly interchangeable, but they are designed for coexistence: it is possible to have a single card/phone contain both Mastercard and Visa payment instruments, with the POS selecting one based on some combination of user and merchant preference. For our purposes, the critical detail is that commonly fielded NFC systems in the US use a “mild” version of the full chip & PIN protocol, walking a fine line between remaining compatible with existing infrastructure and providing additional security features.
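
As an aside, the selection step can be pictured with a rough sketch. The two AIDs below are the well-known Mastercard and Visa application identifiers, but the function itself and its priority handling are simplifying assumptions; real EMV contactless kernels drive this through the PPSE directory with considerably more elaborate rules.

```python
# AIDs this hypothetical merchant's terminal is configured to accept.
MERCHANT_SUPPORTED = {
    "A0000000041010",  # Mastercard
    "A0000000031010",  # Visa
}

def select_application(card_directory: list[tuple[str, int]]) -> str | None:
    """card_directory holds (AID, priority) pairs advertised by the card,
    where a lower priority value means more preferred. Return the mutually
    supported AID with the best priority, or None if there is no overlap."""
    candidates = [(priority, aid) for aid, priority in card_directory
                  if aid in MERCHANT_SUPPORTED]
    return min(candidates)[1] if candidates else None
```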

It’s all magnetic-stripes

Specifically these systems implement the magnetic-stripe profile of EMV protocols. They do not use the heavyweight cryptography in full chip & PIN, such as static-data authentication or the even more robust dynamic-data authentication using unique RSA keys. Instead they emulate an old-school magnetic stripe at the logical level. Emphasis on logical: this is not to be confused with programmable/dynamic stripe technology such as Coin, which does feature a physical incarnation of a magnetic stripe driven by chips embedded into the card. By contrast an NFC payment does not involve any object resembling a stripe being magnetically read. Physical characteristics of the communication between NFC reader and card look nothing like the act of swiping a plastic card: the induction field used for powering the circuitry embedded in the card, the specific frequency for transmission over the air (13.56 MHz) and the data encoding.

Instead it is the data format associated with magnetic stripes that is being simulated. At the end of the exchange between card and reader, the reader constructs a result that looks similar to what a plain magnetic-stripe reader might output after processing an old-school plastic card. For example there are two tracks of data: the first contains the credit card number, expiration date and a field reserved for the card-holder name, while the second track repeats the card number and expiration date in a more compact encoding, all of this specified by ISO/IEC 7813.
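
For the curious, here is a sketch of that layout with obviously fake values. The field layout follows the ISO/IEC 7813 track formats, while the discretionary data– the portion that carries the CVC3/ATC/UN in the simulated-stripe profile– is left as a placeholder.

```python
def track1(pan: str, name: str, exp_yymm: str, service: str, disc: str) -> str:
    """ISO/IEC 7813 track 1, format B: start sentinel %B, fields separated
    by ^, end sentinel ? (LRC omitted here)."""
    return f"%B{pan}^{name}^{exp_yymm}{service}{disc}?"

def track2(pan: str, exp_yymm: str, service: str, disc: str) -> str:
    """ISO/IEC 7813 track 2: start sentinel ;, separator =, end sentinel ?."""
    return f";{pan}={exp_yymm}{service}{disc}?"

print(track1("5413330000000000", "DOE/JOHN", "1712", "101", "0000000000"))
# %B5413330000000000^DOE/JOHN^17121010000000000?
```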

[continued]

CP

Meaningful choice: AppOps, Android permissions and clashing interests (part IV)

Picking up on the question raised in the previous post: users and software publishers often have conflicting interests– as in the case of Android permissions requested by applications for the purpose of monetizing user data. How does the platform owner arbitrate, and whom are they likely to side with? Do they enhance the platform with functionality such as AppOps that allows users to protect their privacy at the expense of developer revenue? The answer depends on the balance of power between these actors.

Market power: platform vs developers

One red herring we can rule out immediately: direct revenue for the platform provider does not enter into the picture. Google provides the Play Store application market for Android and takes a 30% commission on each install of paid software. But paid applications rarely have to request excessive permissions and spy on users; they already have a revenue source from distribution. The most egregious offenders when it comes to user tracking tend to be “free” applications trying to make money after the fact, by giving away the application and serving targeted advertising. Frustrating that model has no immediate impact on the bottom line; that advertising revenue is not shared with the platform anyway.

Here are three scenarios, with different trade-offs faced by the platform provider.

New entrant

If the platform is late to the game, as in the case of Windows Phone, or otherwise struggling to gain traction or defend an eroding market position, as with Blackberry, it is in desperate need of applications. In this scenario the OS provider will be doing everything it can to court developers, trying to entice them with incentives. Witness how Microsoft offered cash incentives and heavy technical assistance to startups porting popular mobile apps to Windows Phone.

In this situation the platform owner needs developers more than they need the platform. Why waste time writing a Windows Phone application– which will likely require C# and an entirely new development environment that few people are familiar with– when there is a proven market for iPhone and Android apps? Given such long odds of attracting developers in the first place, it is very unlikely that the platform owner will risk alienating them by building functionality that interferes with monetization to advance the higher cause of user privacy.

Near-monopoly position

This is the direct opposite of the first scenario. Here the platform is close to being the only game in town for developers. It may enjoy complete market domination– the way Windows held sway on PCs during the 1990s– or it could be that alternative platforms offer no viable path for monetization even if they have sizable market share. (To the extent that Linux was a viable alternative OS, it did not boast a healthy commercial software ecosystem comparable to Windows and Mac.)

This time around the tables are turned: software publishers need the platform more than the platform needs any one application. In this case the platform owner can afford to take a strategic, forward-looking approach to improving the platform, free from competitive pressure that developers might flee to an alternative market. “Doing right” by users is a luxury such companies can afford. In this scenario one would expect to see stronger safeguards for privacy, willingness to crack down on deceptive developer practices and technical features that empower users to defend their own privacy interests against over-zealous developers.

Interestingly, regulatory intervention tends to contribute to such initiatives. Regulators often scrutinize companies with dominant market positions, making demands for changing the platform in ways that advance their policy agenda. In some cases these demands arguably make the platform worse– as in the drive for stronger copyright enforcement. Other times such demands, often presented as veiled threats of direct legal action, improve user privacy or security by bringing these agendas to the table in ways that ordinary consumers can not– and which the platform owner can not ignore.

Competitive market

At least in the world of mobile operating systems, the near-monopoly situation has not been observed for any sustained period. While Apple had a significant head-start with the iPhone, Android quickly closed the gap and has since surpassed iOS in market share globally. Today most popular mobile applications target both iPhone and Android as a practical requirement for adoption. (Granted there can be significant quality differences, with the Android version often appearing to be an afterthought or summer internship project.)

Such competition benefits users as the different platforms duke it out over design, performance, features and hardware choices. But it also means that neither company has the liberty of instituting policies that prove popular with consumers while alienating developers. Google faces a particular vulnerability: iPhone users have proved more willing to pay for applications, generating more revenue per user. Android may dwarf the iPhone in sheer unit shipments, but it falls far short of the Apple alternative when it comes to supporting a healthy commercial ecosystem for mobile developers.

AppOps: siding with developers over users

Given the competitive dynamics of mobile operating systems, it is not too difficult to see why AppOps was yanked out of Android in a hurry, before it caused any confusion and fear in the developer community. (It also suggests fancier alternatives to AppOps with fewer side-effects are unlikely to become part of official Android.) In choosing to side with developers over user privacy, the Android team proved they know all too well which side of their bread is buttered.

CP

Meaningful choice: AppOps, Android permissions and clashing interests (part III)

[continued from part II]

Monetization

Here is a scenario where users and application developers have directly opposed interests, and the platform owner is put in the position of having to arbitrate between the two.

Suppose our hypothetical consumer comes across a new mobile application they consider useful. It could boost their work productivity, help connect with friends or provide some entertainment. This person wants to install and use the application on their phone to extract that “utility.” Meanwhile the company that authored the application is interested in deriving revenue from providing it. In an ideal market, consumers can always negotiate and choose between:

  1. “Paid” version, where the consumer pays upfront for installing the application and/or pays over time for continued access to the cloud service associated with that app. This is the more customary notion of payment, where money changes hands.
  2. “Free” version funded by targeted advertising and user-data mining. As with other examples of a seemingly “free” lunch, consumers pay dearly in privacy and attention, while avoiding monetary costs. This could involve collecting location information to display advertising based on geographic proximity to stores, for example.

Developer’s ultimatum: take it or leave it

Indeed many applications have this dual cost structure. Often there is even an upsell within the free version, giving users the opportunity to upgrade to the paid version and get rid of the advertising nuisance. But sometimes the decision is made unilaterally by the application developer committing to one of the two models. There is no version of Facebook or Twitter where users can buy their privacy outright, paying to opt out of data collection and behavioral advertising. For whatever reason Facebook and Twitter have decided that the optimal business strategy is not to charge users for accessing the service, but instead to monetize their eyeballs indirectly via advertising. From a transactional perspective there is nothing wrong or unfair per se about this, as long as there is transparency into the terms of the arrangement. Most reputable software publishers will describe in painstaking terms exactly what nefarious data collection practices they implement in their terms-of-use and privacy policies– which everyone clicks through without reading. Still, all that legalese counts as notice, and clicking “I agree” constitutes informed consent on the part of the consumer.

Is there another option available to users dissatisfied with the status quo? There are two ways to interpret that question.

  • Capability, or “can it be done?” Can users continue to enjoy the “useful” functionality provided by an application while opting out of its harmful data-collection practices? This is a clear-cut question of fact, which depends on the nitty-gritty implementation details of the platform/hardware where the app runs.
  • Ethics, or “should it be done?” Assuming we answer the first question in the affirmative, is it fair game for users to take such measures in the name of privacy “self-defense”? Many people would frown upon exploiting some trick to get paid applications for free. What about interfering with the business model of the developer by installing the application while disabling its ability to be monetized? Here we quickly get into gray areas. There is a vocal group arguing that no one is hurt by pirating commercial software; surely blocking monetization can not be worse than outright stealing a copy of software that is free (in the beer sense) to begin with?

Off with their permissions

Android permissions and AppOps-type functionality go to the heart of the first question. If users could selectively disable permissions, instead of having to agree wholesale to a take-it-or-leave-it ultimatum from the application developer, they could have the best of both worlds: install the application and use its features, without having to pay for the privacy harms caused by unwanted data collection.

As the second post pointed out, AppOps is not necessarily the ideal solution here. Bluntly taking away privileges can be disruptive, introducing new error cases into an application that were not anticipated by its developers. The only way this could work is if the revoked privilege is rarely used, or only used under user direction. For example an application may never resort to recording audio or taking pictures unless the user presses a particular button to invoke some specific feature. In that case removing its microphone and camera permissions will not interfere with ordinary usage, while guaranteeing that the application will not be snooping on conversations and scenes in its vicinity. Of course if the application is so predictable and well-behaved, there is no need for AppOps in the first place: the same improved privacy outcome can be obtained by not pressing that button. Meanwhile the more unruly applications that attempt to capture audio/video at random times and never expect to get an access-denied error (because the developer declared upfront in the application manifest that they need those permissions) will quickly run into trouble and crash.

Luckily there are more subtle ways to revoke access without breaking applications. Part I described one approach that creates the illusion of access granted– no unexpected “access-denied” errors to confuse the application– while returning either no data or completely fabricated data bearing no relationship to the user. For Google it would be fairly straightforward to implement some variant of this as an improvement over AppOps. But would it be a wise move? The answer depends on which side one takes.
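
The contrast between the two revocation strategies can be modeled in a few lines. This is a toy sketch only: PermissionDenied, read_contacts and the policy labels are invented for illustration and bear no relation to the actual Android API.

```python
class PermissionDenied(Exception):
    pass

REAL_CONTACTS = [{"name": "Alice Example", "phone": "+1-555-0100"}]

def read_contacts(policy: str) -> list[dict]:
    """policy is one of 'granted', 'appops_deny' or 'fabricate'."""
    if policy == "granted":
        return REAL_CONTACTS
    if policy == "appops_deny":
        # AppOps-style hard revocation: an error path the developer, having
        # declared the permission in the manifest, likely never anticipated.
        raise PermissionDenied("READ_CONTACTS")
    if policy == "fabricate":
        # Illusion of access: an empty but well-formed result keeps the
        # application's ordinary code paths working.
        return []
    raise ValueError(f"unknown policy: {policy}")
```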

[continued]

CP

Meaningful choice: AppOps, Android permissions and clashing interests (part II)

Compatibility vs accuracy

The preceding post sketched a hypothetical solution to the AppOps compatibility problem: fabricating bogus data to appease an application when the user has declined a permission. This approach does not complicate life for our (hypothetical) beleaguered developer: there are no security exceptions or bizarre error conditions introduced. All of the regular code paths in the application continue to work; only the data returned is “incorrect” in the sense that it does not correspond to the actual state of the world. This model is not entirely transparent: an application could discover that it is in fact being lied to, depending on the model for generating the fake data. As the saying goes, liars need a good memory. If GPS is reporting a bogus location, it has to be self-consistent over time– no sudden teleportation across the globe– and also consistent with other signals, such as the IP addresses of the wireless networks the device connects to. Skeptical applications could try to cross-check the reported location against such external indications and user behavior to detect fabrication– say, a user looking for restaurant recommendations in San Francisco when the location “reported” by the system is New York.
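
As a minimal sketch of that consistency requirement, consider a fabricated location source that drifts gradually from a fixed decoy anchor instead of sampling independent random coordinates. Everything here– the class, its names and the drift parameters– is an illustrative assumption, not any actual Android mechanism.

```python
import random

class FakeLocationSource:
    """Returns bogus but self-consistent GPS fixes: a seeded random walk
    around a decoy anchor, so successive readings resemble ordinary movement
    rather than teleportation across the globe."""

    def __init__(self, seed: int, lat: float = 40.7128, lon: float = -74.0060):
        self._rng = random.Random(seed)  # per-app seed keeps answers stable
        self._lat, self._lon = lat, lon  # decoy anchor (here: New York)

    def next_fix(self) -> tuple[float, float]:
        # Drift by roughly 100 meters at most per query.
        self._lat += self._rng.uniform(-0.001, 0.001)
        self._lon += self._rng.uniform(-0.001, 0.001)
        return round(self._lat, 6), round(self._lon, 6)
```

Even this simple walk would fail the cross-checks described above– the IP address of the coffee-shop wifi will eventually give the game away– which hints at how hard convincing fabrication really is.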

Still, the design subtly takes the burden away from developers worried about missing permissions and shifts the balance of power back to users. They are now empowered to reject inappropriate permission requests– just as they could with the blunter instrument of AppOps– without fear that tweaking permissions may break applications.

Business perspective

Why does Android not implement such a model, where permissions can be declined without the unpleasant side-effects of AppOps? That question can not be answered at the technology level. It is more a function of competitive dynamics: the delicate dance between the platform provider vying for expanded market share (in other words, Google) on the one hand, and the software publishers writing applications that make that platform more appealing on the other.

Put bluntly, the interests of different actors are not always aligned.

  • Users: Users want to derive maximum benefit from their devices at lowest cost. “Cost” in this context includes not only direct monetary impact– upfront purchase price of phone, ongoing purchases for apps, services and content– but also intangibles such as privacy and quality of experience. For example maintaining control over personal information and not being subjected to intrusive advertising are equally relevant concerns.
  • Developers: Commercial developers are typically driven by profit maximization motives. In the short-term that could entail seemingly contradictory actions such as giving away content without generating any revenue. But these are best viewed as tactical steps supporting a long-term strategy focused on monetizing the user base. That monetization could take place directly by charging for the application and associated services (such as in-game purchases) or it could be indirect, when free content is subsidized by advertising.
    It should also be pointed out that this is not the only type of developer publishing mobile applications. Hobbyists and researchers can be far more interested in building reputation or releasing software out of altruistic motives.
  • Platform owner: Google for Android, but the exact same calculus applies to Apple for iOS and Microsoft for Windows Phone. The platform owner seeks maximum market share consistent with its licensing strategy. For example Android is given away for free to OEMs, but there is a compatibility certification to qualify for using the trademark, as well as for getting access to the Google Play store. Apple by contrast uses an autocratic walled-garden model, where third-party hardware manufacturers can not build handsets running iOS. Microsoft used to charge for Windows Phone– inexplicable for an upstart OS trying to gain traction– but that strategy may be changing.

Divided loyalties

Looking more closely at how mobile platforms gain market share, we find the platform owner with divided loyalties, trying to appease multiple constituencies. On the one hand it helps to do right by users. On the other hand, privacy and security considerations are only one factor contributing to such decisions. (In particular, it is an often-lamented fact of life in the security community that better privacy does not help sell products the way shiny features can. It remains to be seen to what extent the rude awakening inspired by Edward Snowden can change that.) There is an even bigger market distortion caused by the fact that users do not directly pick a platform. That decision is often made by other participants in the ecosystem, such as handset manufacturers who pick which OS to install on their phones– a decision the user can not easily override– and wireless carriers who wield influence by cherry-picking devices to run on their network and subsidizing/promoting the favored ones.

For all participants with a say on whether they will be using Android, iOS, Windows Phone or Blackberry, one important criterion is the availability of applications. The greater the selection of apps available on a platform, the more attractive it becomes for users, carriers and handset manufacturers. This in turn breeds a positive feedback loop: developers want to maximize their audience, so they will primarily target popular platforms with their applications.

In the next post we will argue that Android permissions– specifically those allowing developers to monetize apps in ways users find objectionable– create a textbook example of clashing interests between software publishers and users, forcing the platform owner to pick sides.

[continued]

CP