Goto fail and more subtle ways to mismanage vulnerability response

As security professionals we are often guilty of focusing single-mindedly on one aspect of risk management, namely preventing vulnerabilities, to the exclusion of others: detection and response. This bias seems to have dominated discussion of the recent “goto fail” debacle in iOS/OS X and its wildly improbable close cousin in GnuTLS. Apple has been roundly criticized and mocked for this self-explanatory flaw in SecureTransport, its homebrew SSL/TLS implementation. The bug voided all security guarantees the SSL/TLS protocol provides, rendering supposedly “protected” communications vulnerable to eavesdropping.

But much of the conversation and unofficial attempts at post-mortems (true to its secretive nature, Apple never published an official explanation, but conveniently created a well-timed distraction in the form of a whitepaper touting iOS security) focused on the low-level implementation details as root cause. Why is anyone using goto statements in this day and age, when the venerable Edsger Dijkstra declared way back in 1968 that they ought to be considered harmful? Why did Apple not adopt a coding convention requiring braces around all if/else conditionals? How could any intelligent compiler fail to flag the remainder of the function as unreachable code, which is exactly what the spurious goto statement was causing?** Why was the duplicate line missed in code reviews, when it stands out blatantly in the delta? Did Apple not have a sound change-control system for introducing code changes? Speaking of sane software engineering practices, how is it possible that code flow jumps to a point labelled “fail” and yet still returns success, misleading callers into believing that the function completed successfully? To step back one more level, why did Apple decide to maintain its own SSL/TLS implementation instead of leveraging open-source libraries such as NSS or openssl, which have benefited from years of collective improvement and cryptographic expertise that Apple does not have in-house?

All good questions, partly motivated by a righteous indignation that such a catastrophic bug could be hiding in plain sight. But what about the aftermath? Once we accept the premise that a critical vulnerability exists, the focus shifts to response. Putting aside questions around why the flaw existed in the first place, let’s ask how well Apple handled its resolution.

  • There was no prior announcement that an important update was about to be released. Compare this to the advance warning MSFT provides for upcoming bulletins.
  • A passing mention in the release notes about the vulnerability, with an ominous statement to the effect that “an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.” Not a word about the critical nature of the flaw or a plea for users to upgrade urgently. One would imagine that an implementation error that defeats SSL– the most widely deployed protocol for protecting communications on the Internet– and allows eavesdropping on millions of users’ traffic would hit a raw nerve in this post-Snowden world of global surveillance. Compare Apple’s nonchalance and brevity to the level of detail in a past critical security update from Debian or even routine MSFT bulletins released every month.
  • The update was released on a Friday afternoon Pacific time. This is the end of the work week in North America, and well into the weekend in Europe. Due to the lack of upfront disclosure by Apple, the exact nature of the vulnerability was not reverse-engineered publicly until several hours later. That is suboptimal timing, to say the least, for dropping a critical fix, especially in a managed enterprise IT environment with a large Mac fleet and a security team tasked with trying to ensure that all employees upgrade their devices. (Granted, Apple never seems to have cared much for the enterprise market, as evidenced by weak support for centralized management compared to Windows or even Linux with third-party solutions.)
  • The update addressed the vulnerability only for iOS, leaving Mavericks, the latest and greatest desktop operating system, vulnerable. In other words, Apple 0-dayed its own desktop/laptop users with an incomplete update aimed at mobile users. Why? At least three possibilities come to mind.
    1. Internal disconnect: Apple may not have realized the exact same bug existed in the OS X code base– but this is a stretch, given the extent of code sharing between them.
    2. Optimism/naiveté: Perhaps they were aware of the cross-platform nature of the vulnerability but assumed nobody would figure out exactly what had been fixed, giving Apple a leisurely time-frame to prepare an OS X update before the issue posed a risk to users. To anyone familiar with the shrinking time windows between patch release and exploit development, this is delusional thinking. There are ten years’ worth of research on reverse-engineering vulnerabilities from patches, even when the vendor remains silent on the details of the vulnerability or even the existence of any vulnerability in the first place.
    3. Deliberate risk-taking / cost-minimization: The final possibility is that Apple did not care, or prioritized mobile platforms over traditional laptops & desktops. Some speculated that Apple was already planning to release an update to Mavericks incorporating this fix and saw no reason to rush an out-of-band patch. (Compare this to the approach MSFT has taken towards critical vulnerabilities. When there is evidence of ongoing or imminent exploitation in the wild, the company has departed from the monthly cycle to deliver updates immediately, as with MS13-008.)
  • No explanation after the fact about the root cause of the vulnerability or steps taken to reduce the chances of similar mistakes in the future. This is perhaps the most damning part. The improbable nature of the bug– one line of code mysteriously duplicated, looking so obviously incorrect on even the most cursory review– fueled much speculation and conspiracy theorizing around whether it had been a deliberate attempt to introduce a backdoor into Apple products. Companies are understandably reluctant to release internal postmortems out of fear that they may reveal proprietary information or portray individual employees in an unflattering light. But in this case even an official blog post summarizing the results of an investigation could have sufficed to quell the wild theories.

Coincidentally, the same Friday this bug was exposed, this blogger gave a presentation at Airbnb arguing that OS X is a mediocre platform for enterprise security, citing the lack of a TPM, compatibility issues with smart-cards and a dubious track record in delivering security updates. For the next four days of the goto-fail fiasco, Apple piled on the evidence supporting that last point. In some ways the continuing silence out of Cupertino represents an even bigger failure to comprehend what it takes to maintain trust when vulnerabilities, even critical ones, are inevitable.

CP

** It turns out in this case the blame goes to gcc. By contrast MSVC does correctly flag the code as unreachable.

HCE vs embedded secure element: comparing risks (part I)

As described in earlier posts, Android 4.4 “KitKat” has introduced host-based card emulation, or HCE, for NFC as a platform feature, opening this functionality up to third-party developers in ways that were not quite possible with the embedded secure element. In tandem with the platform API change, the Nexus 5 launched without an embedded secure element, ending a run going back to the Nexus S where the hardware spec included that chip coupled to the NFC controller. Google Wallet was one of the first applications to migrate from the eSE to HCE for its NFC use case, namely contactless payments.

An earlier four-part series compared HCE and hardware secure elements from a functional perspective, concluding that the current Android implementation is close to (though not at 100% of) feature parity with the previous architecture, where the card-emulation route points to the eSE. The next set of posts will focus on security, looking at what additional risks are introduced by using HCE instead of dedicated hardware coupled to the NFC controller.

Another way to phrase the question: what did the embedded SE buy in terms of security, and what was lost when Android gave up on the SE due to opposition from wireless carriers? Can HCE achieve a similar level of security assurance, or are there scenarios that inherently depend on special hardware incorporated into the device, regardless of its form factor as eSE, UICC or micro-SD?

Broadly speaking, there are four significant benefits, ranging from the obvious to the more subtle:

  1. Physical tamper resistance
  2. Reduced attack surface
  3. Taking Android out of the trusted computing base (TCB)
  4. Interface separation

Each of the following posts will tackle one of these aspects.

[continued]

CP

Chip & PIN, liability shift and the game of chicken (part II)

[continued from part I]

In a WSJ article, a representative from MasterCard describes the plan for incentivizing EMV adoption:

When the liability shift happens, what will change is that if there is an incidence of card fraud, whichever party has the lesser technology will bear the liability. […] So if a merchant is still using the old system, they can still run a transaction with a swipe and a signature. But they will be liable for any fraudulent transactions if the customer has a chip card. And the same goes the other way – if the merchant has a new terminal, but the bank hasn’t issued a chip and PIN card to the customer, the bank would be liable.

This is an interesting approach. It leaves the card-holder out of the equation– no pesky consumer-protection agencies to worry about. Instead banks and merchants square off against each other in a race to adopt EMV before the other party does, lest they be left holding the bag for losses.

While the MasterCard representative quoted in the article disclaims any attempt to move liability around, there is no question that the proposed scheme amounts to disrupting the current equilibrium temporarily. The way dispute resolution for charge-backs is handled today, the merchant typically gets the benefit of the doubt for card-present transactions– in other words, in-store payments where there is a signed receipt proving that the merchant performed due diligence to confirm the transaction. Conversely, for card-not-present transactions the benefit of the doubt goes to the issuer and the merchant eats the fraud loss, which explains the many misguided schemes, such as Verified-by-Visa, desperately trying to make a dent in the incidence of such fraud. For now CNP is unlikely to play much of a role in EMV adoption. From a technology stance, all of the elements are in place to enable NFC payments over the web using mobile devices/tablets. Yet business/regulatory hurdles remain before such systems can be deployed broadly.

With the new incentive structure proposed by the card networks, merchants may find themselves on the losing side of an unauthorized-transaction dispute even for card-present transactions, if they are dealing with chip & PIN cards. (One amusing consequence may be that such customers become persona non grata; merchants may decline to accept cards with chip & PIN, although such discrimination would almost certainly run afoul of network regulations.) In theory this gives merchants an incentive to upgrade their POS and payment processing systems, in order to maintain the status quo vis-à-vis issuers. Dangling before issuers on the other side is the lure of a temporary reprieve from card-present fraud. Any bank that issues chip & PIN cards may enjoy an advantage against merchants if the merchant still processes transactions the old-fashioned way.

The problem is that all such gains are temporary. In equilibrium, after issuers have upgraded all of their customers to chip & PIN cards and all merchant terminals process payments via EMV protocols, the exact same liability regime as today is restored.

This leads to a bizarre state of affairs. In game-theoretic terms, either merchants or issuers can benefit in the short run by adopting EMV first, before the other actor does. (This assumes that the savings from fraud exceed the capital investments required for upgrading, whether that means the cost of buying new POS hardware or reissuing new cards to existing customers.) Such benefits need not correspond to an actual decrease in fraud as experienced by consumers. After all, chip & PIN cards still have magnetic stripes, so they can be cloned for fraudulent transactions at merchants still relying on swipe technology. The operative question for the merchant/issuer is not whether fraud exists but who is picking up the tab. From that perspective, preemptive EMV adoption pays off, leaving the “other” side on the hook. But once both sides have upgraded, that advantage vanishes.

Put another way, the card networks have almost set up a textbook experiment in behavioral economics. A crash upgrade to chip & PIN pays off unilaterally for each player as long as the other one has not upgraded, but such benefits disappear once the opponent also upgrades. What is the rational choice in this situation? Racing to upgrade is no doubt the outcome the card networks are hoping for. In the short term, merchants could pass on the capital investment to consumers in the form of higher prices. (It would be particularly amusing, and a certain measure of poetic justice, if a special surcharge applied to chip & PIN card payments only. But card networks would likely crack down on such blatant attempts to single out the EMV mandate for higher prices.)

Curiously there is another equilibrium point: the status quo. Both sides can delay upgrades, betting that no one else is deploying EMV, and consequently no additional liability is incurred from the redistribution mandate. Another wild-card here is how international transactions are treated. Even if US banks move slowly on issuing EMV cards, merchants can still be exposed to a significant downside from transactions involving cards from other countries. Card fraud is very much a global business. To the extent chip & PIN frustrates certain types of fraud in Europe, it has also served to redirect the criminals’ attention to the US market, where it is easier to monetize stolen EMV cards using traditional magnetic stripes. Cracking down on that by shifting liability to US merchants alone could be enough to tip the scales. Time will tell.

CP

Chip & PIN, liability shift and the game of chicken (part I)

The Target data-breach has resurrected interest in the deployment of chip & PIN technology in the US. Part of the EMV suite of protocols dating back to the 1990s, this scheme aims to supplement the ubiquitous magnetic-stripes on credit and debit cards with a small embedded chip, capable of providing greater resilience against common threats against payment systems, such as compromised point-of-sale terminals– what appears to have been the root cause for Target’s headaches.

While chip & PIN is common in Europe, it remains something of a rarity in the US, on both the issuer and merchant side. Few banks issue cards containing chips, a market niche limited to travelers planning to spend significant time overseas, where some merchants may not accept a signature-based transaction. The acceptance story for merchants is worse, for understandable reasons: there is little incentive for merchants to undertake the cost of upgrading the installed base of readers. Chip & PIN cards still have a plain magnetic stripe on the back usable for traditional swipe transactions, which means that even merchants catering to tourists from abroad can continue to accept card payments without upgrading. (In fairness, even an enthusiastic merchant could not upgrade unilaterally. There is usually a third-party payment processing service connected to those terminals and handling the back-end of transactions. Without upgrades in that system, installing new point-of-sale terminals is not enough.) No wonder that articles going back to 2001 bemoan the fact that all the sophisticated hardware going into chipped cards is mostly sitting idle.

Ironically, contactless payments using NFC may have done more to facilitate the adoption of EMV protocols than chip & PIN. Despite being a newer technology, NFC has faced the exact same uphill battle for adoption, because the incentives for issuers and merchants have been unclear. Issuers benefit in theory by having less fraud, since NFC eliminates some of the weaknesses of traditional magnetic stripes– provided users are actually transacting over NFC instead of swiping their cards, which is in turn a function of the installed base at merchants. So the issuer savings depend on merchant adoption rate. If merchants also stood to gain from increased NFC issuance, this circular dependency could have at least created a positive feedback loop. Yet none of the savings are passed on to merchants. They are still paying the same interchange fee for every payment transacted using the card networks; there is no discount over plastic for accepting NFC. At best one could argue that tap & pay transactions are faster than a traditional swipe, which matters mainly for a small category of merchants who stand to gain considerably from shaving a few seconds off the time for serving each customer: coffee shops, fast-food outlets and similar high-volume, low-margin businesses looking to squeeze in more orders per hour.

There is of course an undeniable PR/reputation gain from being on the cutting edge of new technology, and this applies to all actors in the system: issuers, merchants and card-holders. Google Wallet arguably provided some of that momentum, by packaging the technology in smart-phone form-factor and appealing to technology-savvy early adopters with a virtual card proxying transactions in real-time. But even that remained limited by the installed base of NFC readers, prompting Google to offer the same virtual card in old-school plastic format.

Given that chip & PIN faces the same uphill battle, how will the card networks encourage adoption?

In the UK the answer was a unilateral mandate from issuing banks, accompanied by a liability shift. The banks adopted the convenient stance that because chip & PIN technology is so robust, any transaction authorized by PIN must have been carried out by the original card-holder. In case of disputed transactions, consumers are presumed guilty until proven innocent. Not surprisingly, this has led to a strong backlash, coupled with a growing literature in security research suggesting that EMV protocols are far from invincible; in fact basic design flaws allow fraudulent transactions without the PIN.

Either chastened by the contentious PR battle or perhaps reluctant to directly challenge protections afforded by federal laws around consumer liability, card networks have decided to take a different approach in the US: pitting merchants and issuers against each other.

[continued]

CP

Target breach, credit card security and NFC (part V)

[continued from part IV]

Damage control

The first problem is that attackers are still limited in the number of purchases they can make with captured data. Each simulated track-data sample contains a counter known as the ATC which is unique to a transaction, limiting this scam to the number of “extra” transactions performed at the malicious POS. Given that a full transaction requires on the order of ~100 milliseconds minimum– due to the limited processing capabilities of smart-card hardware, particularly for contactless transactions when all power required for computing must be drawn from the induction field– we are looking at no more than a dozen spurious protocol executions before either the checkout delays become suspicious or customers tire of holding their cards against the reader. Granted, this may not be too much of a problem for crooks interested in perpetrating fraud. Between issuer back-ends searching for anomalous spending patterns and card-holders noticing strange charges on their card, even old-school plastic-card fraud may not get much farther than a handful of unauthorized uses before it is caught.

When counters jump around

But the same transaction counter presents a different problem for attackers. To pick a concrete example, suppose that the card starts out with the ATC at 10. The compromised POS carries out 5 NFC transactions, receiving track-data with ATC=11, 12, 13, 14, 15. One of these must be used to authorize the current purchase, with the others saved for future fraudulent transactions.

Suppose the attacker submits ATC=15 to the issuer. The issuer will notice a gap, a sudden jump in ATC from the last seen value of 10 without any intervening values observed. This is often explained by incomplete or failed transactions, where the card/terminal only complete some of the protocol steps. (The ATC is incremented fairly early on, typically during the execution of the GET PROCESSING OPTIONS command.) But when the remaining track-data samples are used in fraudulent transactions, the issuer will observe something even stranger: ATC out of order, with lower values of the counter such as 11 and 12 appearing at a later date than higher values. By itself this is not sufficient to deem the transaction fraudulent. Occasionally transactions get submitted in one large “batch” after a delay, instead of being submitted incrementally in real-time, especially when charge amounts are small. That could explain isolated instances of ATC appearing to jump back and forth over short periods, until missing values in the sequence are finally submitted to the payment processor. But the same pattern repeated over a longer stretch, with low ATC values continuing to appear after higher ones have been observed, could be either a signal for flagging the transaction as suspicious or grounds for outright rejecting it as a processing error. In other words, the useful lifetime of information captured by the malicious POS has been bounded.

Race conditions

The exact same pattern occurs if the attacker chooses to submit ATC=10 to complete the original purchase at the compromised POS. In this case, the remaining track-data can be used for fraudulent transactions in correct order, as a strictly monotone increasing counter: 11, 12 etc. But there is a catch: if the card-holder herself starts making additional purchases, the issuer will receive higher ATC values starting at 16, giving rise to the same out-of-sequence ATC signal. This creates a race condition between the crooks and the legitimate user: they need to quickly monetize the stolen information before actions by the card-holder render that information useless, because the ATC has advanced too far for the issuer to honor lower values. That does not render the attack impossible per se, but like all good mitigations, it raises costs for the attacker and blocks certain avenues of exploitation. Much like a stolen OTP in attacks against two-factor authentication systems, captured track-data from NFC must be cashed in quickly or it may become worthless. Squirreling it away to resell on the black market days or weeks later is no longer a viable strategy. (Note the extreme case of trying to win the race is a real-time relay attack. The attacker can have an accomplice stand in front of another NFC reader with a mobile device and simply relay APDUs to the victim card via the compromised POS. The unauthorized transaction takes place almost simultaneously or even before the intended one.)

CP

Target breach, credit card security and NFC (part IV)

Part III in this series sketched a picture of how the most basic EMV protocol over NFC– backwards compatible with the ubiquitous magnetic stripes– can resist passive attacks taking place at the point-of-sale. If the miscreants’ strategy involves capturing transaction data as it passes through a compromised POS and trying to stash it away for future use, card issuers can combat such fraud by checking for replay indicators. What about more sophisticated attacks, where the POS attempts to influence the exchange between NFC reader and card, or tries to monetize the stolen data immediately?

The price is always right

Looking back at how track-data and CVC3 are computed during NFC payments, there are two inputs conspicuously absent from the exchange: price and merchant identifier. That means the emulated magnetic stripe is not in any way bound to a particular purchase or even a specific merchant. In fact it is relatively easy to verify the first part experimentally. When paying with Google Wallet in-store for a purchase having multiple items, you can initiate the tap & pay before the cashier has finished ringing all of them up– exactly as one could with traditional plastic cards. The POS will typically display an interim message indicating that it is still waiting for the final amount, but the buyer will not have to tap again or otherwise confirm the details of the transaction.

Repeated transactions

That suggests a new avenue of attack:

  1. Trick the cardholder into performing multiple NFC transactions.
  2. Use one of the resulting track-data samples to complete the intended purchase at the specific merchant whose POS devices have been compromised.
  3. Stash track-data from others, encode them on plain magnetic-stripe cards and rely on backwards compatibility to monetize them via swipe transactions at other merchants. There is nothing in the track-data per se limiting its use to the original merchant or identical transaction amount.
  4. Profit. Granted, fraud detection by the issuer can still flag transactions as suspicious based on merchant/amount patterns but that is based on statistical models of cardholder behavior. There is nothing in the track-data itself that indicates an attempt to divert captured track-data from a different merchant.

Keeping count

What is the feasibility of carrying out such an attack? First note that multiple transactions are required for this plan. As noted earlier, each emulated track-data sample contains an incrementing counter, the ATC, which is authenticated by the CVC3 and allows the issuer to detect replay attempts. At least one transaction is required for the cardholder to complete the legitimate purchase, and that one can not be reused by the miscreants. Racing the merchant to use that data first requires that the attackers already have a purchase lined up, greatly limiting their options– good for our defensive position. More significantly it means that the card-holder gets a declined transaction and the issuer sees a repeated ATC value, raising suspicion about what is going on. With full control over the POS, the bad guys can try more subtle options, where the compromised POS pretends that an authorization succeeded and prints out a receipt without submitting anything to the payment processor. But such an attack equally risks being found out quickly, due to accounting discrepancies; the merchant does not get paid.

Step #1 turns out not to be a major obstacle. As long as the card is in the induction field of the NFC reader, the reader can likely repeat the payment protocol without additional user action required. Plastic cards with NFC have no built-in clock or other means of detecting that a terminal is requesting multiple transactions in quick succession. Once the field is removed– powering off the chip inside the card, which draws its current from that external field– and reintroduced, the card has no way to know whether it has been days or milliseconds since its last activation.

Smartphone-based implementations do have access to an actual clock and could in principle detect such rapid-burst attempts.** Yet they are typically configured to require a PIN or other explicit user confirmation based on time intervals, rather than transaction count. As long as the user has “primed” the application in the last 15 minutes or whatever interval is configured, no additional confirmation is required for individual transactions to proceed. This is partially driven by a desire to optimize for usability and handle flaky readers: in case a transaction fails, consumers can try again by holding the phone against the reader, without fiddling with the screen yet again.

Instead what stops this attack from working is the way ATC (application-transaction counter) frustrates steps #2 and #3.

[continued]

CP

** Even this part is tricky when a mobile wallet operates in NFC card-emulation mode with a secure element talking directly to the reader, bypassing the host operating system. The SE has no concept of wall-clock time either and can only limit transactions based on a signal received from the host device. Mobile versions of NFC payment protocols such as Paypass make provisions for the host to grant single-shot approval to the SE, instead of the more common model of allowing transactions indefinitely until the host revokes such permission.

Target breach, credit card security and NFC (part III)

[continued from part II]

Armed with the background from previous posts on NFC payment protocols and how the magnetic-stripe profiles for EMV protocols achieve backwards compatibility, we can now look at what would happen in a hypothetical Target-type attack. The threat model assumes that point-of-sale (POS) terminals at the retailer stores have been compromised by attackers. Consumers will be making payments at these registers using their contactless cards, using NFC instead of swiping.

NFC at point-of-sale

One important distinction: we have spoken of POS devices having NFC as an integrated capability. In practice these can be distinct pieces of hardware. For example, tap-and-pay capability can be added to legacy terminals via an external accessory, connected to the POS over a serial link. This peripheral contains the NFC reader and has knowledge of the payment protocols. It is responsible for abstracting away the differences and quirks between different card networks and emitting familiar track-data to the POS. This is part of the interoperability benefit of the mag-stripe profile; the POS may not have been designed for NFC, but the output from the attached peripheral still looks indistinguishable from that of an old-school swipe reader.

One corollary is that the POS does not micro-manage the NFC protocol or specify what bits to transmit. Instead there is a higher-level command structure, with the POS signaling that it is ready for payment and the NFC reader handling all exchange with the card, returning track-data after completion. Compromising the POS software– as in the case of the Target breach– does not automatically translate to running arbitrary code on the NFC reader or otherwise controlling NFC traffic. For the purposes of this discussion, we will assume the worst-case scenario and still consider attacks where the NFC reader is behaving maliciously instead of following the expected protocol.

Preventing replay

The simplest attack involves capturing the emulated magnetic stripe and trying to reuse it for another transaction. There are two reasons this is unlikely to work. As noted earlier, the track data incorporates a challenge issued by the NFC reader, the so-called “unpredictable number” generated randomly. Captured data from a compromised POS contains a CVC3 corresponding to the choice of UN dictated by the attached reader for that particular transaction. Trying to reuse the same information at another reader will fail unless the exact same UN is chosen, which is unlikely when these are generated randomly. Granted, there have been reports in the literature of catastrophic failures of the random number generator in EMV hardware [Anderson et al.] in the context of ATMs. Since attackers can choose which retailers to target as well as where they will monetize the stolen information, they could carefully pick stores employing such flawed hardware in their NFC terminals, to maximize the chances that the UN values involved will repeat with high probability.

But there is a secondary mechanism to prevent such misuse: the application transaction counter or ATC. This value also appears in track-data and can be optionally incorporated into the CVC3 calculation. If ATC were not included in CVC3, then a repeated UN would permit replay. Attackers could even try to repeat the protocol multiple times with the card and sample multiple CVC3 values corresponding to different values of the UN. Fortunately “sane” configurations include ATC in the CVC3 computation. But there is another security check that is left up to the issuer: enforcing that ATC is indeed acting as a unique counter.
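The dependency structure can be sketched schematically. The real CVC3 computation is 3DES-based and truncated to a few decimal digits; HMAC-SHA256 is substituted here purely to illustrate why a zeroed-out ATC makes a repeated UN replayable, while an included ATC does not:

```python
import hmac, hashlib

def cvc3(card_key: bytes, iv: bytes, un: int, atc: int, use_atc: bool = True) -> str:
    """Schematic stand-in for CVC3: a keyed MAC over the static IV, the
    reader's unpredictable number (UN) and the ATC, which is zeroed
    rather than omitted when not in use. The real algorithm differs
    (3DES-based, truncated to 3-5 decimal digits); only the dependency
    structure is being illustrated here."""
    atc_field = atc if use_atc else 0
    msg = iv + un.to_bytes(4, "big") + atc_field.to_bytes(2, "big")
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()

key, iv = b"\x01" * 16, b"card-static-data"

# ATC excluded (zeroed): a repeated UN yields an identical CVC3 -> replayable.
assert cvc3(key, iv, un=0xDEADBEEF, atc=10, use_atc=False) == \
       cvc3(key, iv, un=0xDEADBEEF, atc=11, use_atc=False)

# ATC included: the same UN at a later counter value produces a fresh CVC3.
assert cvc3(key, iv, un=0xDEADBEEF, atc=10) != cvc3(key, iv, un=0xDEADBEEF, atc=11)
```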

Keeping count

Suppose that our enterprising crooks also observed that NFC track-data can be encoded on a plain magnetic stripe and swiped, thanks to the lenient behavior of payment processors and issuers. Could they create a new credit card using the data captured from a compromised POS and use it repeatedly for purchases?

If the issuer is not paying attention to the payment mode and is willing to apply CVC3 validation logic to a swipe transaction, the cryptographic check will be satisfied. But the issuer will observe something strange: a repeated ATC value. When the consumer made a purchase at Target, suppose their ATC was at 10. Since it is supposed to increment for each transaction, the next time the issuer receives an authorization request containing NFC track-data, they would expect to see a value greater than 10. It could be 11, but it could also be 12 or higher due to incomplete transactions which caused the ATC to be incremented without ever resulting in an authorization request sent to the card issuer. But observing the same ATC twice for two different transactions? There is no legitimate scenario for that. Provided the issuer declines a second transaction using the same counter, attackers cannot monetize track-data passively captured at the POS.
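The issuer-side rule described above amounts to a per-card monotonicity check. A minimal sketch, with illustrative bookkeeping (a real issuer would persist this state alongside the account):

```python
# Minimal sketch of issuer-side ATC enforcement: track the highest
# counter seen per card and require strict increase. Names are
# illustrative, not from any real authorization system.

last_atc_seen = {}   # card number -> highest ATC from an authorized request

def check_atc(pan: str, atc: int) -> bool:
    """Accept only strictly increasing counters. Gaps are fine, since
    incomplete transactions bump the ATC without ever reaching the
    issuer; repeats have no legitimate explanation."""
    if atc <= last_atc_seen.get(pan, -1):
        return False          # replayed or stale track-data: decline
    last_atc_seen[pan] = atc
    return True

assert check_atc("4111111111111111", 10) is True    # first use
assert check_atc("4111111111111111", 13) is True    # gap is legitimate
assert check_atc("4111111111111111", 10) is False   # repeat: decline
```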

Active attacks

Passive being the operative word in the previous statement. So far we have focused on attacks where the adversary watches an ordinary tap-and-pay transaction take place, waits for the simulated NFC “track-data” to arrive at the POS, copies this information and exfiltrates it for future use. But it is not useful to hypothesize new defenses without also allowing for the presumed attackers to adapt their strategy in response. Knowing that they are dealing with contactless cards, the adversary could adopt a different strategy, such as sampling multiple transactions. For example they could program their POS malware to instruct the NFC reader to repeat the transaction five times, ending up with not one but five samples of track-data with different UN/ATC/CVC3 combinations. The final post in this series will discuss how such attacks can be frustrated by proper issuer configuration.
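The sampling strategy can be sketched as follows. The reader interface and the placeholder "MAC" are invented for illustration; the point is that each extra exchange burns one ATC value and yields one fresh UN/ATC/CVC3 tuple:

```python
import random

def run_exchange(card_atc: list) -> tuple:
    # Stand-in for one contactless exchange: the reader picks a UN,
    # the card bumps its counter and answers with a CVC3. The hash
    # below is a placeholder for the card's real keyed MAC.
    un = random.getrandbits(32)
    card_atc[0] += 1
    cvc3 = hash((un, card_atc[0])) % 1000
    return (un, card_atc[0], cvc3)

card_state = [10]                 # mutable ATC, as the card would keep it
samples = [run_exchange(card_state) for _ in range(5)]

# Five distinct counter values: each tuple is usable for at most one
# fraudulent authorization attempt under the monotonic-ATC rule.
assert [atc for _, atc, _ in samples] == [11, 12, 13, 14, 15]
```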

[continued]

CP

Target breach, credit card security and NFC (part II)

[continued from part I]

Mutatis mutandis

The key difference between the magnetic stripe simulated by NFC payment protocols such as PayPass and an ordinary one is that track-data artificially constructed this way is not completely static. It changes slightly for each transaction, based on the state of the card and inputs dictated by the reader. Security against compromised point-of-sale terminals– what happened at Target– depends partly on the protocol and partly on issuer-specific policies decided by individual banks.

Here is what does not change between transactions:

  • Credit-card number: These protocols do not implement one-time-use card numbers or any other scheme designed to hide the “real” card number from the merchant.
  • Expiration date: Ditto.
  • Card-holder name: This is the field used to print the customer name on receipts. For contactless payments it is often a requirement that issuers redact this field to a generic label such as “customer,” possibly as a reluctant PR response to the hysteria around NFC skimming stories. (Redaction has an unexpected interaction with certain retailers’ practice of asking for ID during check-out. If there is no name displayed by the register, what would you compare the name on the driver’s license against?)

Instead the variable fields are:

  • ATC (Application Transaction Counter): Incremented for each transaction, regardless of whether the payment was successfully authorized. The protocol defines exactly at what point in a transaction the ATC is incremented; usually this happens not when the payment application is selected, but when the card computes its response to the reader challenge.
  • UN (Unpredictable Number): This is effectively a challenge issued from the NFC reader to the card. It is picked unilaterally by the reader.
  • CVC3, also known as dynamic CVC: This is the value generated by the card in response to the reader challenge, serving as an integrity check over the data. CVC3 is of variable length based on provisioning parameters, typically three to five digits, similar to the CVC1/2 format. It is a function of a fixed IV representing card data, the UN, secret cryptographic keys provisioned on the card and optionally the ATC, depending on card configuration. (More precisely, the ATC is always included in the computation, but it is set to all zeroes when not in use.)

Armed with this outline and without going into the details of how CVC3 is computed– suspending disbelief for now that the cryptography is sound, never mind that a 3-5 digit “integrity check” is subject to brute-forcing– we can begin to speculate on what would happen to a hypothetical customer paying via NFC at a compromised Target point-of-sale terminal.

Fungible payment modes

The first surprise is that paying via NFC does not by itself confer immunity against attacks using other payment modes. One might assume that whatever attacks are carried out against a contactless card will at best allow making fraudulent payments at other NFC terminals, but not plain swipe or card-not-present transactions on the Internet. (This would be a “reduction” of risk only to the extent that NFC deployment remains relatively rare, which is realistically true today.) But the premise is false. As explained in earlier posts, there is by-design interchangeability between simulated track data from NFC and ordinary magnetic stripes. It is possible to copy track-data containing ATC/UN/CVC3, encode it on a plain plastic card and use that in a swipe transaction. This will look correct to the reader, terminal and payment processor, all the way to the issuer responsible for authorizing the charge.

Whether or not the charge is accepted depends on issuer configuration. In addition to receiving track data, the issuer also receives meta-data indicating the payment mode, such as swipe, NFC tap or manually keyed-in (when the stripe is damaged, for example). On top of that, track-data includes a service code which is typically different between physical magnetic stripes and their NFC incarnation. Neither of these can be influenced by the customer doing the swiping: payment mode is determined by the terminal, while tampering with the service code will invalidate the CVC. At least in theory, issuers can cross-check payment mode against specific fields of track-data, rejecting attempts to replay NFC transactions on plain plastic cards, even if they carry valid CVC3 codes.
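In outline, that cross-check looks like the following sketch. The specific service-code values here are hypothetical placeholders; real service codes are three digits with meanings assigned by ISO/IEC 7813, and a real issuer would consult the codes it actually provisioned:

```python
# Sketch of an issuer-side consistency check between the terminal's
# reported payment mode and the service code recovered from track-data.
# The code sets below are illustrative, not actual network assignments.

NFC_SERVICE_CODES = {"201", "202"}      # hypothetical: NFC mag-stripe profile
SWIPE_SERVICE_CODES = {"101", "121"}    # hypothetical: physical stripes

def plausible_mode(payment_mode: str, service_code: str) -> bool:
    """Reject track-data whose service code disagrees with how the
    terminal says the card was presented."""
    if payment_mode == "swipe":
        return service_code in SWIPE_SERVICE_CODES
    if payment_mode == "nfc":
        return service_code in NFC_SERVICE_CODES
    return False

# NFC track-data re-encoded on a plastic card and swiped: declinable
# on mode mismatch alone, even when the CVC3 itself is valid.
assert plausible_mode("swipe", "201") is False
assert plausible_mode("nfc", "201") is True
```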

But attacks dating back to 2012 demonstrated in the field that many issuers are not performing that check.** This is possibly because many terminals are incorrectly configured, reporting the wrong mode even under normal operation, which makes issuers highly reluctant to reject transactions unless the probability of fraud is high. (Keep in mind that false positives– rejecting valid transactions as fraudulent– have a tangible revenue impact for the issuer, who earns interchange fees from each payment.)

In summary we have to consider attack strategies which attempt to monetize captured data in any possible payment mode: other NFC terminals, plain swipe transactions and even online purchases.

[continued]

CP

** Interestingly the converse is also possible, as demonstrated by a former colleague of this blogger. That engineer scanned the static track-data out from a plastic card and programmed an NFC application to use that as a template, with clever tricks to prevent the CVC3 from overwriting any part of the static data. Surprisingly it worked for some combinations of card issuers and payment terminals.

Target breach, credit card security and NFC (part I)

This holiday season has not been kind to retailers. First Target experienced one of the worst data breaches in recent memory, with the damage toll continuing to rise and the scope threatening to expand: from an initial estimate of 40 million cards used for in-store purchases, the retailer added another 70 million online shoppers, with phone numbers and email addresses also included in the compromised data. Then the upscale retailer Neiman Marcus jumped into the fray, announcing that it too had experienced an intrusion resulting in the loss of credit card data. Not to be outdone, the crafts supplier Michaels announced that it had detected a successful attack resulting in the loss of credit cards.

The great chip & PIN diversion

In a subtle attempt to shift blame back to the credit-card networks, the Target CEO went on record to praise the virtues of chip & PIN cards. Kim Zetter of Wired quickly pointed out a slightly inconvenient fact: Target rejected a program back in 2004 to upgrade point-of-sale terminals to accept chip & PIN technology. It turns out there is even more recent evidence of how little Target cared about supporting new payment technologies: very few of their stores accept NFC payments. Contrast this with Walgreens, CVS and Whole Foods, which accept contactless payments at most locations. This state of affairs casts some doubt on Target’s avowed commitment to chip & PIN, because NFC is a bridge technology to full chip & PIN: NFC-enabled debit/credit cards, as well as their mobile incarnations such as Google Wallet, implement simplified versions (“profiles” more accurately) of the same EMV protocols.

Would NFC have protected Target customers?

Answering this question requires a closer look at the protocol. At first blush it seems that one can achieve much better security with contactless payments, at least against the specific risk Target customers faced: compromised point-of-sale terminals. A plastic card is a passive, inert object encoding static data. By contrast NFC payments involve smart cards with embedded chips– in other words miniature computers– or even full-scale mobile devices such as phones. Being programmable environments, they can support elaborate payment protocols leveraging strong cryptography. The question is how far that promise is realized in current deployments.

Protocols on paper and in the field

EMV defines the umbrella standard for contactless payments, while each particular payment network has slight tweaks and a proprietary brand for their variant: Mastercard PayPass, Visa payWave, American Express ExpressPay and Discover Zip. These protocols are not exactly interchangeable, but they are designed for coexistence: it is possible to have a single card/phone contain both Mastercard and Visa payment instruments, with the POS selecting one based on some combination of user and merchant preference. For our purposes, the critical detail is that commonly fielded NFC systems in the US use a “mild” version of the full chip & PIN protocol, walking a fine line between remaining compatible with existing infrastructure and providing additional security features.

It’s all magnetic-stripes

Specifically these systems implement the magnetic-stripe profile of EMV protocols. They do not use the heavy-weight cryptography in full chip & PIN, such as static-data authentication or the even more robust dynamic-data authentication using unique RSA keys. Instead they emulate an old-school magnetic stripe at the logical level. Emphasis on logical; not to be confused with programmable/dynamic stripe technology such as Coin, which does feature a physical incarnation of a magnetic stripe driven by chips embedded into the card. By contrast an NFC payment does not involve any object resembling a stripe being magnetically read. Physical characteristics of the communication between NFC reader and card look nothing like the act of swiping a plastic card: the induction field used for powering the circuitry embedded in the card, the specific frequency for transmission over the air (13.56 MHz) and the data encoding.

Instead of low-level hardware characteristics, it is the data-format associated with magnetic stripes that is simulated. At the end of the exchange between card and reader, the reader constructs a result that looks similar to what a plain magnetic-stripe reader might output after processing an old-school plastic card. For example there are two tracks of data: the first contains the credit card number, expiration date and a field reserved for the card-holder name, while the second track repeats the account number and expiration in a more compact encoding, all of this specified by ISO/IEC 7813.
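For illustration, here is a sketch of the two track layouts being simulated, following ISO/IEC 7813 (format B for track 1, where the card-holder name field lives). The sample values, including the PAN, are made up:

```python
import re

# Illustrative ISO/IEC 7813 track data. Track 1 (format B) carries the
# name; track 2 is a compact, digits-only encoding of the same account.
track1 = "%B4111111111111111^DOE/JANE^25121011234567890?"
track2 = ";4111111111111111=25121011234567890?"

# Track 1: % B <PAN> ^ <name> ^ <YYMM expiry><service code><discretionary> ?
m1 = re.fullmatch(r"%B(\d{1,19})\^([^^]{2,26})\^(\d{4})(\d{3})(.*)\?", track1)
pan, name, expiry, service_code = m1.group(1), m1.group(2), m1.group(3), m1.group(4)

# Track 2: ; <PAN> = <YYMM expiry><service code><discretionary> ?
m2 = re.fullmatch(r";(\d{1,19})=(\d{4})(\d{3})(\d*)\?", track2)

assert name == "DOE/JANE"                         # name only on track 1
assert pan == m2.group(1) == "4111111111111111"   # PAN repeated on both
assert expiry == m2.group(2) == "2512"
assert service_code == m2.group(3) == "101"
```

In the NFC mag-stripe profile, fields like the dynamic CVC3 are packed into the discretionary-data portion of these same layouts, which is what keeps the output backward-compatible with swipe infrastructure.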

[continued]

CP

Meaningful choice: AppOps, Android permissions and clashing interests (part IV)

Picking up on the question raised in the previous post: users and software publishers often have conflicting interests– as in the case of Android permissions requested by applications for the purpose of monetizing user data. How does the platform owner arbitrate, and whom are they likely to side with? Do they enhance the platform with functionality such as AppOps that allows users to protect their privacy at the expense of developer revenue? The answer depends on the balance of power between these actors.

Market power: platform vs developers

One red herring we can rule out immediately: direct revenue for the platform provider does not enter into the picture. Google provides the Play Store application market for Android and takes a 30% commission on each paid install. But paid applications rarely have to request excessive permissions and spy on users; they already have a revenue source from distribution. The most egregious offenders when it comes to user tracking tend to be “free” applications trying to make money after the fact, by giving away the application and serving targeted advertising. Frustrating that model has no immediate impact on the platform’s bottom line; that revenue stream is not shared anyway.

Here are three scenarios, with different trade-offs faced by the platform provider.

New entrant

If the platform is late to the game, as in the case of Windows Phone, or otherwise struggling to gain traction or defend an eroding market position, as with Blackberry, it is in desperate need of applications. In this scenario the OS provider will do everything it can to court developers, trying to entice them with incentives. Witness how Microsoft offered cash incentives and heavy technical assistance for startups to port popular mobile apps to Windows Phone.

In this situation the platform owner needs developers more than developers need the platform. Why waste time writing a Windows application– which will likely require C# and an entirely new development environment that few people are familiar with– when there is a proven market for iPhone and Android apps? Given such tall odds of attracting developers in the first place, it is very unlikely that the platform owner is going to risk alienating them by building functionality that interferes with monetization in order to advance the higher cause of user privacy.

Near-monopoly position

This is the direct opposite of the first case. In this hypothetical scenario, the platform is close to being the only game in town for developers. It may enjoy complete market domination– the way Windows held sway on PCs during the 1990s– or it could be that alternative platforms offer no viable path for monetization even if they have sizable market share. (To the extent that Linux was a viable alternative OS, it did not boast a healthy commercial software ecosystem comparable to Windows and Mac.)

This time around the tables are turned: software publishers need the platform more than the platform needs any one application. In this case the platform owner can afford to take a strategic, forward-looking approach to improving the platform, free from competitive pressure that developers might flee to an alternative market. “Doing right” by users and cracking down on questionable developer practices is a luxury that companies can afford. In this scenario one would expect to see stronger safeguards for privacy, willingness to crack down on deceptive practices and technical features that empower users to defend their own privacy interests against over-zealous developers.

Interestingly, regulatory intervention tends to contribute to such initiatives. Regulators often scrutinize companies with dominant market positions, and make demands for improving the platform in ways that advance their policy initiatives. In some cases these demands arguably make the platform worse– as in the drive for stronger copyright enforcement. Other times such demands, often presented as veiled threats of direct legal action, improve user privacy or security by bringing these agendas to the table in ways that ordinary consumers cannot and which the platform owner cannot ignore.

Competitive market

At least in the world of mobile operating systems, the near-monopoly situation has not been observed for any sustained period. While Apple had a significant head-start with the iPhone, Android quickly closed the gap and has since surpassed iOS in market share globally. Today most popular mobile applications target both iPhone and Android as a practical requirement for adoption. (Granted there can be significant quality differences, with the Android version often appearing to be an afterthought or a summer internship project.)

Such competition benefits users as the different platforms duke it out over design, performance, features and hardware choices. But it also means that neither company has the liberty of instituting policies that prove popular with consumers while alienating developers. Google faces a particular vulnerability: iPhone users have proved more willing to pay for applications and generate more revenue per user. Android may dwarf the iPhone in sheer number of unit shipments, but it falls far short of the Apple alternative when it comes to supporting a healthy commercial ecosystem for mobile developers.

AppOps: siding with developers over users

Given the competitive dynamics of mobile operating systems, it is not too difficult to see why AppOps was yanked out of Android in a hurry, before it caused any confusion and fear in the developer community. (It also suggests fancier alternatives to AppOps with fewer side-effects are unlikely to become part of official Android.) In choosing to side with developers over user privacy, the Android team proved they know all too well which side of their bread is buttered.

CP