Looking back on the Google Wallet plastic card

[Full disclosure: This blogger worked on Google Wallet 2011-2013]

Google recently announced that its Wallet Card, launched in 2013, will be discontinued this summer:

“After careful consideration, we’ve decided that we’ll no longer support the Wallet Card as of June 30. Moving forward, we want to focus on making it easier than ever to send and receive money with the Google Wallet app.”

This is the latest in a series of changes starting with the rebranding of the original Google Wallet into Android Pay. It is also a good time to look back on the Wallet Card experiment in the context of the overall consumer-payments ecosystem.

Early iteration of the Wallet card

Boot-strapping NFC payments

The original Google Wallet launched in 2011 was a mobile application focused on contactless payments, colloquially tap-and-pay: making purchases at brick-and-mortar stores using the emerging Near Field Communication (NFC) wireless interface on Android phones. NFC already enjoyed support from the payment industry, having been anointed as the next-generation payment interface combining the security of chip & PIN protocols with a modern form factor suitable for smartphones. (Despite impressive advances in manufacturing ever-thinner phones, it’s still not possible to squeeze one into a credit-card slot, although LoopPay had an interesting solution to that problem.) There were already pilot projects with cards supporting NFC, typically launched without much marketing fanfare. At one point Chase and American Express shipped NFC-enabled cards. That is remarkable considering that on the whole, banks have been slow to jump on the more established contact-based chip & PIN technology. NFC involves even more moving parts: not only must the card contain a similar chip to execute more secure payment protocols, it also requires an antenna and additional circuitry to draw power wirelessly from the field generated by a point-of-sale terminal. In engineering terms, that translates into more opportunities for a transaction to fail and leave a card-holder frustrated.

Uphill battle for NFC adoption

Payment instruments have multiple moving pieces controlled by different entities: banks issue the cards, merchants accept them as a form of payment with help from payment processors, and ultimately consumers make the payments. Boot-strapping a new technology can either accelerate into a virtuous cycle or get stuck in chicken-and-egg circularity.

  • Issuers: It’s one thing for banks to issue NFC-enabled plastic cards on their own, quite another for those cards to be usable through Google Wallet. After all, the whole point of having a card with a chip is that one cannot make a functioning “copy” of that card by typing the number, expiration date and CVC into a form. Instead the bank must cooperate with the mobile-wallet provider (in other words, Google) to provision cryptographic keys over the air into special hardware on the phone. Such integrations were far from standardized in 2011 when Wallet launched, leaving customers with only two choices: a Citibank MasterCard or a white-label prepaid card from Metabank. Not surprisingly, this was a significant limitation for consumers who were not existing Citibank customers or interested in the hassle of maintaining a prepaid card. It would have been a hard slog to scale up one issuer at a time, but a better option presented itself with the TxVia acquisition: virtual cards for relaying transactions transparently via the cloud to any existing major credit card held by the customer. That model wasn’t without its own challenges, including unfavorable economics and fraud-risk concentration at Google. But it did solve the problem of issuer support for users.
  • Merchants: Upgrading point-of-sale equipment is an upfront expense for merchants, who are reluctant to spend that money without a clear value proposition. For some, being on the cutting edge is sufficient. When mobile wallets were new (and Google enjoyed a roughly three-year lead before ApplePay arrived on the scene), they were an opportunity to attract a savvy audience of early adopters. But PR benefits only extend so far. Card networks did not help the case either: NFC transactions still incurred the same credit-card processing fees, even though expected fraud rates are lower with NFC than with magnetic stripes, which are trivially cloned.
  • Users: For all the challenges of merchant adoption, there was still a decent cross-section of merchants accepting NFC payments in 2011: organic grocery chain Whole Foods, Peet’s coffee, clothing retailer Gap, Walgreens pharmacies, even taxicabs in NYC. But merchants were far from the only limiting factor for Google. In the US, wireless carriers represented an even more formidable obstacle. Verizon, AT&T and T-Mobile had thrown in their lot with a competing mobile-payments consortium called ISIS (later renamed Softcard to avoid confusion with the terrorist group), and they moved to block their own subscribers from installing Google Wallet on their phones.

From virtual to physical: evolution of the proxy-card

Shut out of its own customers’ devices and locked in an uneasy alliance with wireless carriers over the future of Android, Google turned to an alternative strategy to deliver a payment product with broader reach, accessible to customers who either did not have an NFC-enabled phone or could not run Google Wallet for any reason. This was going to be a regular plastic card, powered by the same virtual card technology used in NFC payments.

For all intents and purposes, it was an ordinary MasterCard that could be swiped anywhere MasterCard was accepted. It could also be used online for card-not-present purchases with its CVC2 code. Under the covers, it was a prepaid card: consumers could only spend existing balances loaded ahead of time. There was no credit extended, no interest accruing on balances, no late fees. It did not show up on credit history or influence FICO scores.

There would still be a Google Wallet app for these users; it would show transactions and manage funding sources. But it could not be used for tap-and-pay. NFC payments, once the defining feature of this product, had been factored out of the mobile application, becoming an optional feature available to a minority of users when the stars aligned.

Prepaid vs “prepaid”

But there was one crucial difference from the NFC virtual card: users had to fund their card ahead of time with a prepaid balance. That might seem obvious given the “prepaid” moniker, yet it was precisely a clever run-around of that limitation that had made the Google Wallet NFC offering a compelling product. When users tapped their phone to pay, the request to authorize that transaction was routed to Google. But before returning a thumbs up or down, Google in turn attempted to place a charge for the exact same amount on the credit card the customer had set up in the cloud. The original payment was authorized only after this secondary transaction cleared. In effect, the consumer had just funded their virtual card by transferring $43.98 from an existing debit/credit card, and immediately turned around to spend that balance on a purchase which coincidentally came to exactly $43.98.
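To make the relay concrete, here is a minimal sketch of that flow in Python. Every name in it (the functions, the AuthResult type) is hypothetical and stands in for real card-network and issuer integrations; it only illustrates the ordering described above: the backing card is charged first, and the original tap is approved only if that charge clears.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names and types are illustrative,
# not Google's actual payment infrastructure.

@dataclass
class AuthResult:
    approved: bool
    reference: str = ""

def charge_backing_card(card_token: str, amount_cents: int) -> AuthResult:
    """Stand-in for placing a charge on the real credit/debit card
    the customer set up in the cloud."""
    # In reality this would be a call out to the card network / issuing bank.
    return AuthResult(approved=True, reference="demo-backing-auth")

def authorize_virtual_card(card_token: str, amount_cents: int) -> AuthResult:
    """Approve the virtual-card transaction only after an identical
    charge clears on the customer's backing card."""
    backing = charge_backing_card(card_token, amount_cents)
    if not backing.approved:
        # Decline the tap if funding the virtual card fails.
        return AuthResult(approved=False)
    # Funding and spending happen back-to-back for the same amount,
    # e.g. the $43.98 example above.
    return AuthResult(approved=True, reference=backing.reference)

if __name__ == "__main__":
    print(authorize_virtual_card("tok_example", 4398))
```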

Not so for the plastic card: there was an explicit stored-value account to keep track of. This time around that account had to be “prepaid” for real, with an explicit step taken by the consumer to transfer funds from an existing bank account or debit/credit card associated with the Google account. Not only that, but using a credit card as the funding source involved explicit fees, to the tune of 2.9%, to cover payment processing. (If the same logic had applied to the NFC scenario, a $97 purchase at the cash register would have been reflected as roughly a $100 charge against the original funding source.)
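For a sense of the arithmetic, a 2.9% fee added on top of a $97 purchase comes to about $99.81, which the paragraph above rounds to $100. A quick sketch, assuming the fee is simply added on top of the purchase amount (the exact fee convention and rounding are assumptions):

```python
CARD_FUNDING_FEE = 0.029  # the 2.9% credit-card funding fee mentioned above

def gross_charge(purchase_amount: float) -> float:
    """Total charged to the funding card, assuming the fee is added on top."""
    return round(purchase_amount * (1 + CARD_FUNDING_FEE), 2)

print(gross_charge(97.00))  # 99.81, roughly the $100 figure in the text
```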

The economics of the plastic card necessitated this. Unlike its NFC incarnation, this product could be used at ATMs to withdraw money. If there were no fees for funding from a credit card, it would have effectively created a loophole for free cash advances: tapping into available credit on a card without generating any of the interchange fees associated with credit transactions. While having to fund in advance was a distinct disadvantage, in principle the existing balance could be spent through alternative channels such as purchases from the Google Store or peer-to-peer payments to other users. But none of those other use-cases involve swiping, which raises the question: what is the value proposition of a plastic card in the first place?

End of the road, or NFC reaches critical mass

In retrospect the plastic card was stuck in no man’s land. From the outset it was a temporary work-around, a bridge solution until mobile wallets could run on every device and merchants accepted NFC consistently. The first problem was eventually solved by jettisoning the embedded secure-element chip at the heart of the controversy with wireless carriers, and falling back to a less secure but more open alternative called host-card emulation. As for the second problem, time eventually took care of that with a helping hand from ApplePay, which gave NFC a significant boost. In the end, the plastic proxy-card lived out its shelf-life, which is the eventual fate of all technologies predicated on squeezing a few more years out of swipe transactions, including dynamic/programmable stripes and LoopPay.

CP

Bitcoin’s meta problem: governance (part I)

Layer 9: you are here

Bitcoin has room for improvement. Putting aside regulatory uncertainty, there is the unsustainable waste of electricity consumed by mining operations, unclear profitability for miners as block rewards decrease and, last but not least, difficulty scaling beyond its Lilliputian capacity of handling only a few transactions per second globally. (You want to pay for something using Bitcoin? Better hope not many other people have that same idea in the next 10 minutes or so.) In theory all of these problems can be solved. What stands in the way of a solution is not the hard reality of mathematics; this is not a case of trying to square the circle or solve the halting problem. Neither are they insurmountable engineering problems. Unlike calls for designing “secure” systems with built-in backdoors accessible only to the good guys, there is plenty of academic research and some real-world experience building trusted, distributed systems to show the way. Instead Bitcoin the protocol is running into problems squarely at “layer 9”: politics and governance.

This last problem of scaling has occupied the public agenda recently and festered into a full-fledged PR crisis last year, complete with predictions of the end of Bitcoin. Much of the conflict focuses on the so-called “block size”: the maximum size of each virtual page added to the global ledger of all transactions maintained by the system. The more space on that page, the more transactions can be squeezed in. That matters for throughput because the protocol also fixes the rate at which pages can be added, to roughly one every 10 minutes. But TANSTAAFL still holds: there are side-effects to increasing this limit, which was first put in place by Satoshi himself/herself/themselves to mitigate denial-of-service attacks against the protocol.
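The often-quoted “few transactions per second” figure follows directly from those two constants. A back-of-the-envelope sketch, where the average transaction size is an assumption (real transactions vary widely):

```python
BLOCK_SIZE_BYTES = 1_000_000     # 1 MB block-size cap
BLOCK_INTERVAL_SECONDS = 600     # one block roughly every 10 minutes
AVG_TX_SIZE_BYTES = 250          # assumed average transaction size

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS
print(f"~{tx_per_second:.1f} transactions per second")  # ~6.7 tx/s globally
```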

Game of chicken

Two former Bitcoin Core developers found this out the hard way last summer when they tried to force the issue. They created a fork of the popular open-source implementation of Bitcoin (Bitcoin Core) called BitcoinXT with support for expanded block sizes. The backlash came swift and loud: XT did not go anywhere, its supporters were banned from Reddit forums and the main developer rage-quit Bitcoin entirely with a scathing farewell. But that was not the end of the scaling experiment. Take #2 followed shortly afterwards as a new fork dubbed Bitcoin Classic, with more modest and incremental changes to block size intended to address the criticisms leveled at XT. As of this writing, Classic has more traction than XT ever managed but remains far from reaching the 75% threshold required to trigger a permanent change in protocol dynamics.

Magic numbers and arbitrary decisions

This is a good time to step back and ask the obvious question: why is it so difficult to change the Bitcoin protocol? There are many arbitrary “magic numbers” and design choices hard-coded into the protocol:

  • Money supply is fixed at 21 million bitcoins.
  • Each block rewards the miner with newly minted bitcoins; the reward started at 50 and halves periodically, with the next decrease expected around June of this year. (This halving schedule is also what caps the money supply at 21 million; see the sketch after this list.)
  • Mining uses a proof-of-work algorithm based on the SHA-256 hash function.
  • The proof-of-work construction encourages the creation of special-purpose ASIC chips, because they have significant efficiency advantages over the ordinary CPUs or GPUs that ship with off-the-shelf PCs/servers.
  • That same design is “pool-friendly”: it permits the creation of mining pools, where a centralized pool operator coordinates work by thousands of independent contributors and distributes rewards based on each contributor’s share of the work.
  • The difficulty level for that proof-of-work is adjusted roughly every ~2000 blocks, with the goal of keeping the interval between blocks at 10 minutes.
  • Transactions are signed using the ECDSA algorithm over one specific elliptic curve, secp256k1.
  • And of course, blocks are limited to 1MB in size.
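As noted in the list, the 21 million cap is not an independent constant: it falls out of the initial 50-bitcoin reward and the 210,000-block halving interval. A simplified sketch of the geometric series (the real protocol rounds rewards down to whole satoshis, so the actual total is a hair under 21 million):

```python
INITIAL_REWARD_BTC = 50.0
BLOCKS_PER_HALVING = 210_000   # reward halves every 210,000 blocks (~4 years)

total, reward = 0.0, INITIAL_REWARD_BTC
while reward >= 1e-8:          # stop once the reward drops below one satoshi
    total += reward * BLOCKS_PER_HALVING
    reward /= 2.0
print(f"{total:,.0f} BTC")     # ~21,000,000
```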

Where did all of these decisions come from? To what extent are they fundamental aspects of Bitcoin, such that it would not be “Bitcoin” as we understand it without that property, as opposed to arbitrary decisions made by Satoshi that could have gone a different way? What is sacred about the number 21 million? (Is it that it is half of 42, the answer to the meaning of life?) Each of these decisions can be questioned, and in fact many have been challenged. For example, proof-of-stake has been offered as an alternative to proof-of-work to halt the runaway costs and CO2 emissions of electricity wasted on mining. Meanwhile later designs such as Ethereum tailor their proof-of-work system explicitly to discourage ASIC mining, by reducing the advantage such custom hardware would have over vanilla hardware. Other researchers have proposed discouraging pooled mining by making it possible for the individual participant who solves the proof-of-work puzzle to keep the reward, instead of having it automatically returned to the pool operator for distribution. One core developer even proposed (and later withdrew) a special-case adjustment to block difficulty ahead of the upcoming change in block rewards. It was motivated by the observation that many mining operations will become unprofitable when rewards are cut in half and will power off their rigs, resulting in a significant drop in total mining power that will remain uncorrected for an extended period, because the remaining blocks before the next adjustment will be mined at a slower rate.
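The worry behind that withdrawn proposal is easy to quantify. Difficulty only adjusts every ~2016 blocks, so if a chunk of hash power disappears right after a retarget, the network limps along at longer block intervals until those 2016 blocks are finally mined. A rough sketch using the protocol’s nominal targets:

```python
RETARGET_BLOCKS = 2016        # blocks between difficulty adjustments (~2 weeks)
TARGET_MINUTES_PER_BLOCK = 10

def days_until_next_retarget(hashpower_remaining: float) -> float:
    """Days to mine the next 2016 blocks if hash power drops to the given
    fraction immediately after a difficulty adjustment."""
    actual_minutes_per_block = TARGET_MINUTES_PER_BLOCK / hashpower_remaining
    return RETARGET_BLOCKS * actual_minutes_per_block / (60 * 24)

print(days_until_next_retarget(1.0))  # ~14 days at full hash power
print(days_until_next_retarget(0.5))  # ~28 days if half the miners power off
```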

Some of these numbers reflect limitations or trade-offs necessitated by current infrastructure. For example, one can imagine a version of Bitcoin that runs twice as fast, generating blocks every 5 minutes instead of 10. But that version would require the nodes running the software to exchange data among themselves twice as fast, because Bitcoin relies on a peer-to-peer network for distributing transactions and mined blocks. This goes back to the same objection levied against large-block proposals such as XT and Classic. Many miners are based in countries with high-latency, low-bandwidth connections such as China, a situation not helped by economics that drive mining operations to locate in the middle of nowhere, close to cheap sources of power such as dams but far from fiber. There is a legitimate concern that if bandwidth requirements escalate, either because block sizes go up or because blocks are minted more frequently, those miners will not be able to keep up. But what happens when those limitations go away, when multi-gigabit pipes are available to even the most remote locations and the majority of mining power is no longer constrained by networking?
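For a rough sense of scale, the average data rate needed just to keep up with new blocks is modest; the real pain points are the bursts when a freshly mined block must propagate quickly, plus transaction-relay and protocol overhead, which this simplified sketch ignores:

```python
def avg_block_bandwidth_kbit_s(block_size_mb: float, interval_minutes: float) -> float:
    """Average data rate (kbit/s) needed to download new blocks as they arrive."""
    return block_size_mb * 8_000 / (interval_minutes * 60)

print(avg_block_bandwidth_kbit_s(1, 10))  # ~13 kbit/s with today's parameters
print(avg_block_bandwidth_kbit_s(1, 5))   # doubles if blocks arrive every 5 minutes
print(avg_block_bandwidth_kbit_s(8, 10))  # or if blocks grow to 8 MB
```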

Planning for change

Once we acknowledge that change is necessary, the question becomes how such changes are made. This is as much a question of governance as it is of technology. Who gets to make the decision? Who gets veto power? Does everyone have to agree? What happens to participants who are not on board with the new plan?

Systems can be limited by a failure in either domain. Some protocols were designed with insufficient versioning and forward compatibility, which makes it very difficult for them to operate in a heterogeneous environment where “old” and “new” versions exist side by side. Introducing upgrades then becomes painful, because everyone must coordinate on a “flag day” to upgrade everything at once. In other cases, the design is flexible enough to allow small, local improvements, but the incentives for upgrading are absent: perhaps the benefits of the upgrade are not compelling enough, or there is no single entity in charge of the system capable of forcing all participants to go along.

For example, credit-card networks have long been aware of the vulnerabilities associated with magnetic-stripe cards. Yet it has been a slow uphill battle to get issuing banks to replace existing cards and, especially, merchants to upgrade their point-of-sale terminals to support EMV. Incidentally, that is a relatively centralized system: card networks such as Visa and MasterCard sit in the middle of every transaction, mediating the movement of funds from the bank that issued the credit card to the merchant. Visa/MC call the shots on who gets to participate in this network and under what conditions, with some limits imposed by regulatory watchdogs worried about concentration in this space. In fact it was their considerable leverage over banks and merchants that allowed the card networks to push for the EMV upgrade in the US, by dangling economic incentives and penalties in front of both sides. Capitalizing on the climate of panic in the aftermath of the Target data breach, the networks were able to move forward with their upgrade objectives.

[continued in part II]

CP