One-time use credit cards and Google Wallet

Another day, another announcement of a payment service being shuttered. This time it is the payments API for digital goods from Google Wallet. (Not to be confused with in-store NFC payments, which are doing well, liberated from restrictions imposed by wireless carriers trying to push their competing ISIS/Softcard offering.) Business reasons for why a service does not achieve critical mass are often complex and subject to debate. But we can at least look at the difference in underlying technology between the discontinued service and a related offering that is being actively developed: the Instant Buy API.

As the documentation describes in passing, the Instant Buy API is an interesting application of one-time use credit cards. This is an ancient idea, proposed many times in the past but rarely implemented on a large scale for consumers. (Bank of America has recently tried its hand.) It has dual objectives: improving privacy, and creating resilience against data breaches affecting merchants, such as the recent Target and Home Depot debacles. Unlike a plastic credit card, which will always return the same card number when swiped repeatedly, single-use cards are designed to produce unique card data for each purchase. There is a range of possible definitions for “unique:” from accepting only one transaction with that particular merchant, to being truly unlinkable in the sense that it is not possible for multiple merchants to pool their information and realize that a series of transactions in fact belongs to the same user. (That latter goal is harder to achieve: in addition to using different card numbers, other metadata encoded on the card such as expiration date, cardholder name and CVV number must be diversified.)

Naturally, achieving that effect with standard plastic cards using magnetic stripes is very difficult. Recently introduced programmable mag-stripe technology is in principle capable of doing this by varying the data encoded on the stripe, although none of the existing deployments have taken that route. NFC payments and chip & PIN protocols also fall short of “one-time use” by that strict definition. While the protocols provide replay protection– data collected by the merchant from the card can not be reused to make fraudulent transactions at another merchant– it is still the same card number that appears in all cases. EMV tokenization does not fix that problem either: there is a single unique identifier, different from the “real” credit-card number, but one that still links all transactions conducted with that card.
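
To make the linkability point concrete, here is a toy sketch (not any real payment message format) of how merchants pooling their transaction logs can tie purchases to the same customer whenever a stable card number or payment token is present:

```python
# Toy transaction logs from two unrelated merchants. The "pan" field stands
# in for whatever stable identifier the merchant sees: a real card number,
# or a static EMV payment token.
merchant_a = [{"pan": "5105105105105100", "amount": 23.10},
              {"pan": "5204740009900014", "amount": 9.99}]
merchant_b = [{"pan": "5105105105105100", "amount": 54.00}]

# Pooling the logs and joining on that identifier links the customer's
# activity across merchants:
shared = {t["pan"] for t in merchant_a} & {t["pan"] for t in merchant_b}
print(shared)   # {'5105105105105100'} -> same cardholder seen at both

# If every purchase carried a fresh card number (with expiration, name and
# CVV diversified as well), this join would come up empty and the
# transactions would be unlinkable in the sense described above.
```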

So how does Instant Buy achieve this objective? The API is aimed at merchants who want to accept payments from Google Wallet users in mobile web and application scenarios. That would have been just another proprietary payment option (or “rails,” to use the common terminology) similar to PayPal, but the key difference is that movement of funds proceeds through existing credit-card networks. When the user completes a purchase at an ecommerce site using this scheme, Google returns what looks like a credit card to the site, containing the usual assortment of fields: card number, expiration, CVV2. That website can now go through its existing card processor for a routine card-not-present transaction, in exactly the same way it would if those same card details had been entered manually into a checkout form.
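
As a rough sketch of what this looks like from the merchant's side (field names and the processor interface below are purely illustrative, not the actual Instant Buy wire format):

```python
# Illustrative shape of the card details a merchant backend might receive
# once the wallet flow completes; these names are hypothetical.
virtual_card = {
    "card_number": "5105105105105100",   # one-time virtual MasterCard number
    "expiration_month": 12,
    "expiration_year": 2015,
    "cvv2": "123",
    "cardholder_name": "Jane Doe",
    "billing_zip": "94043",
}

def charge(processor, card, amount_cents):
    """Hand the virtual card to the merchant's existing processor exactly
    as if the customer had typed the same details into a checkout form.
    `processor.authorize` is a stand-in for whatever card-not-present API
    the merchant already uses."""
    return processor.authorize(
        pan=card["card_number"],
        expiration=(card["expiration_month"], card["expiration_year"]),
        cvv2=card["cvv2"],
        amount=amount_cents,
    )
```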

If that sounds a lot like the virtual cards used for NFC payments in the Google Wallet proxy model, that’s because it is: the same TxVia-derived backend powers both solutions. The difference is that for NFC transactions, a single virtual card is provisioned to the phone each time the wallet is initialized. That card has a generous expiration date on the order of years, because it is used repeatedly over the lifetime of the wallet. Each transaction still includes a unique dynamic CVV (or CVV3) code to prevent reuse of card data, but the card number observed by the merchant is fixed. By contrast, Instant Buy generates a unique virtual MasterCard for each transaction, with a relatively short expiration time, specifically bound to that transaction.

That alone is valuable from a security perspective. It provides the same resilience against data breaches for online payments as NFC does for in-person payments at brick & mortar retailers. The potential privacy benefit, on the other hand, is not realized in this scenario: additional identifying data is returned to the merchant to help complete the transaction, including customer name and billing/shipping addresses.

Another caveat about the transaction model: “one-time” is a slight misnomer, and strict one-time use would in fact be an undesirable feature. For example, when an order is returned, merchants need to refund all or part of the original amount back to the card. Even fulfilling a single order may involve multiple charges, as when multiple shipments are required to handle items temporarily out of stock. It is more accurate to view virtual cards as being associated with a specific transaction, as opposed to only being valid for one “payment” in the strict sense of card networks. As described in the virtual cards FAQ (a small worked example follows the list):

  • Transaction limit: order amount + 20% (to account for increased transaction size due to extra shipping charges etc.)
  • One time card allows for multiple authorization […]
  • Authorizations, captures, refunds, chargebacks etc. work as usual
  • The card expires 120 days after the end of the month when the transaction was initiated.
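
As a worked example of those last two rules (the computation below is only a reading of the FAQ text, not code from the actual service): a $50 order placed on September 15 yields a $60 authorization limit and a card that stays valid until January 28 of the following year.

```python
from datetime import date, timedelta
import calendar

def virtual_card_window(txn_date: date, order_amount: float):
    """Apply the two FAQ rules quoted above: authorization limit is the
    order amount plus 20%, and the card expires 120 days after the end of
    the month in which the transaction was initiated."""
    last_day = calendar.monthrange(txn_date.year, txn_date.month)[1]
    end_of_month = date(txn_date.year, txn_date.month, last_day)
    return round(order_amount * 1.20, 2), end_of_month + timedelta(days=120)

print(virtual_card_window(date(2014, 9, 15), 50.00))
# (60.0, datetime.date(2015, 1, 28))
```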

In other words, the system strikes a good balance between maintaining compatibility with the existing credit-card processing practices used by websites and offering improved security for users. By comparison, the deprecated digital-goods API defined its own rails for moving funds, requiring greater integration effort as well as tight coupling between Google and merchants.

CP

The challenge of cryptographic agility

Getting stuck with retro/vintage cryptography

Developers and system designers are overly attached to their cryptographic algorithms. How else to explain the presence of an MD5 function in PHP? This is not exactly a state-of-the-art hash function. Collisions were found in its compression function as early as 1996. An actual collision arrived in 2004 with the work of Wang et al. As usual, one paper is never enough for the problem to register broadly. Many certificate authorities continued to use MD5 for issuing certificates, with the predictable outcome: a spectacular break in 2008, when security researchers were able to obtain a fraudulent intermediate CA certificate from RapidSSL.

Yet die-hard MD5 fans continue to lurk in the open-source development community. State-of-the-art “code integrity” for open source projects often involves publishing an MD5 hash of a tarball on the download page. (That approach does not work.) The persistence of MD5 also explains why the Windows security team at one point had an official “MD5 program manager” role. This unfortunate soul was tasked with identifying and deprecating all MD5 usage across the operating system. It turns out he/she still missed one: the Terminal Services licensing CA continued to issue certificates using MD5 until 2012, a vulnerability exploited by the highly sophisticated Flame malware, believed to have been authored by a nation-state.

How many hash functions?

It would be easy to chalk up the presence of an MD5 function in PHP to, well, it being PHP. But it is not only language designers who make these mistakes. What about the Linux md5sum command-line utility for computing cryptographic hashes of files? Microsoft one-upped that with the “file checksum integrity verification” or FCIV utility, which can do not only MD5 but also the more recent vintage SHA1. Sadly for the author of that piece of software, SHA1 is also on its way out. While no one has yet demonstrated even a single collision, it is widely believed that the attacks targeting MD5 can be extended to SHA1. Chrome is trying to phase out SHA1 usage in SSL certificates.

One person cataloged 10 GUI-based applications for computing file-hashes. Several of these utilities also hard-coded MD5 and SHA1 as their choice of hash. Even the ones that decided to go all-out by adding SHA2 variants are now stuck: NIST has selected Keccak as the final SHA-3 standard.
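
For contrast, here is a minimal sketch of what an algorithm-agile version of such a utility could look like: the digest is picked by name at runtime rather than baked into the code (whether a given name such as sha3_256 is available depends on the local Python/OpenSSL build):

```python
import hashlib
import sys

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Hash a file with whichever algorithm the caller names, instead of
    hard-coding MD5 or SHA1 the way md5sum/FCIV-style tools do."""
    h = hashlib.new(algorithm)          # raises ValueError for unknown names
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # e.g.  python digest.py release.tar.gz sha3_256
    path = sys.argv[1]
    algo = sys.argv[2] if len(sys.argv) > 2 else "sha256"
    print(f"{algo}  {file_digest(path, algo)}")
```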

The mistake is not limited to amateurs. Designers of the SSL/TLS protocols went to great lengths to provide a choice of different algorithm combinations called “ciphersuites,” negotiated between the web browser & web server based on their capabilities and preferences. Yet until recent versions, the protocol also hard-coded a choice of hash function into key derivation, as well as into the handshake signatures used for client authentication. (MD5 and SHA1 concatenated together, just in case one of them turns out to be weak. There is an extra measure of irony here beyond the hapless choice of MD5: as Joux pointed out in 2004, concatenating hash functions based on the Merkle-Damgard construction– which includes both MD5 and SHA1– does not result in a hash function with the naively expected “total” security of the two algorithms added together.)
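
A simplified sketch of that legacy construction, ignoring the surrounding protocol machinery: the signed digest in pre-1.2 versions is the concatenation of an MD5 digest and a SHA1 digest of the same data.

```python
import hashlib

def tls_legacy_digest(handshake_messages: bytes) -> bytes:
    # MD5(m) || SHA1(m): 16 + 20 = 36 bytes. Because the combination is
    # fixed by the protocol rather than negotiated, a weakness in either
    # hash is carried by every conforming implementation.
    return (hashlib.md5(handshake_messages).digest() +
            hashlib.sha1(handshake_messages).digest())

print(len(tls_legacy_digest(b"example handshake transcript")))   # 36
```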

Seemed like a good choice– at the time

Lest we assume these failures are a relic of the past: the mysterious person or group who designed Bitcoin under the pseudonym Satoshi Nakamoto also fell into the trap of hard-coding the choice of elliptic curve and hash functions. And the bad news is that this choice is going to be much harder to undo than introducing more options into TLS. After all, no one has to go back and “repeat” past SSL connections when an algorithm shows weaknesses. If Bitcoin ever needs to change curves because the discrete-logarithm problem in secp256k1 turns out to be easier than expected, users will have to “convert” their existing funds into new coins protected by different algorithms. Since that involves extending the scripting language used to verify transactions, it will represent a hard fork of the protocol.

It is unclear why Satoshi picked ECDSA over RSA. Space savings? That benefit is partly illusory: RSA signatures permit message recovery, allowing the signer to “reclaim” most of the space taken up by the signature for storing additional information. ECDSA also has the problem that signature verification is just as expensive as signing; see the Crypto++ benchmarks as one data point. Typically a transaction is signed once but verified thousands of times by other participants in the system. RSA has just the right distribution of work for this: signing is slow, verification is quick. Given that Bitcoin emerged around 2008, when elliptic-curve cryptography was still relatively new in deployed systems, secp256k1 probably looked like a safe bet at the time. Of course we can also imagine someone in 1995 deciding that this brand-new hash function called “MD5” would be a great choice for basing their entire security on.
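
A rough micro-benchmark using the pyca/cryptography package shows the asymmetry (absolute numbers vary by hardware and library, but RSA verification typically comes out orders of magnitude cheaper than RSA signing, while ECDSA verification costs roughly as much as ECDSA signing):

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

MSG = b"a bitcoin-sized transaction, more or less"
N = 200

def bench(label, fn):
    start = time.perf_counter()
    for _ in range(N):
        fn()
    print(f"{label:>16}: {(time.perf_counter() - start) / N * 1000:.3f} ms/op")

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_sig = rsa_key.sign(MSG, padding.PKCS1v15(), hashes.SHA256())
bench("RSA-2048 sign", lambda: rsa_key.sign(MSG, padding.PKCS1v15(), hashes.SHA256()))
bench("RSA-2048 verify", lambda: rsa_key.public_key().verify(
    rsa_sig, MSG, padding.PKCS1v15(), hashes.SHA256()))

ec_key = ec.generate_private_key(ec.SECP256K1())
ec_sig = ec_key.sign(MSG, ec.ECDSA(hashes.SHA256()))
bench("secp256k1 sign", lambda: ec_key.sign(MSG, ec.ECDSA(hashes.SHA256())))
bench("secp256k1 verify", lambda: ec_key.public_key().verify(
    ec_sig, MSG, ec.ECDSA(hashes.SHA256())))
```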

Cryptographic agility

The common failure in all of these cases is a lack of cryptographic agility. Agility can be described as three properties, in order of priority (a minimal sketch illustrating them follows the list):

  • Users can choose which cryptographic primitives (such as encryption algorithm, hash-function, key-exchange scheme etc.) are used in a given protocol
  • Users can replace the implementation of an existing cryptographic primitive with one of their choosing
  • Users can set system-wide policy on choice of default algorithms
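
A minimal sketch of what those three properties can look like in code, using a toy digest registry (names and structure here are illustrative only):

```python
import hashlib
from typing import Callable, Dict, Optional

# Property 2: implementations live in a registry rather than being
# hard-coded, so callers can swap in their own (say, a hardware-backed one).
DIGESTS: Dict[str, Callable[[bytes], str]] = {
    "sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3_256": lambda data: hashlib.sha3_256(data).hexdigest(),
}

# Property 3: the default is system-wide policy, not a constant buried in
# every call site.
POLICY = {"default_digest": "sha256"}

def digest(data: bytes, algorithm: Optional[str] = None) -> str:
    # Property 1: the caller (or a protocol negotiation step) selects the
    # primitive by name; nothing here is married to one hash forever.
    name = algorithm or POLICY["default_digest"]
    return DIGESTS[name](data)

# Plugging in a newer algorithm requires no change to existing callers:
DIGESTS["blake2b"] = lambda data: hashlib.blake2b(data).hexdigest()
print(digest(b"hello"), digest(b"hello", "blake2b"))
```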

It is much easier to point out failures of cryptographic agility than success stories. This is all the more surprising because it is a relatively well-defined problem, unlike general software extensibility. Instead of having to worry about all possible ways that a piece of code may be called on to perform a function not envisioned by its authors, all of the problems above involve swapping out interchangeable components. Yet flawed protocol design, implementation laziness and sometimes plain bad luck have often conspired to frustrate that. Future blog posts will take up the question of why it has proved so difficult to get past MD5 and SHA1, and how more subtle types of agility failure continue to plague even modern, greenfield projects such as Bitcoin wallets.

CP