Credit card fraud and photo ID

Does asking consumers for photo ID before accepting a credit card purchase help reduce fraud? Perhaps, but the question is moot because doing so violates the card network-merchant agreement, as made clear in a series of Consumerist posts. Actually it is more subtle: Visa and MasterCard make this a bit tricky for merchants. They are allowed to ask for identification, but they cannot decline a purchase if the shopper refuses to show it.

This convoluted compromise must be the result of conflicting incentives between the networks and the merchants, with Visa/MC deciding to make a concession. On one side, merchants want to reduce the possibility of fraud because they risk getting stuck with charge-backs. Strangely enough, bricks-and-mortar stores are in a better position here because the would-be criminal must walk in the door, produce the card, and risk having their mugshot appear on hundreds of surveillance cameras. That is a much higher bar than card-not-present or "CNP" transactions such as mail order and the Internet, where physical possession of the card cannot be verified. The plastic is reduced to a bunch of easily phished digits, and the fraudsters can sit comfortably in a different jurisdiction halfway around the world. Without proof that the customer was in possession of the card, the merchant is forced to accept the charge-back and absorb the loss. (In principle a bricks-and-mortar retailer is off the hook with a signed receipt; the issuing bank eats the loss.)

On the other side, card networks want to make the purchase experience as convenient and hassle-free as possible for card holders. Any transaction that does not complete is lost interchange-fee revenue. The card networks recognize that downside and do not want merchants arbitrarily preventing shoppers from using the card.

The result is the current mess: "you can ask for ID but you cannot require it." This is banking on consumer ignorance or cooperation: the assumption is that most people will either not know the rights granted by the merchant agreement or will simply choose to cooperate as the path of least resistance. If that is the plan, retailers need to do a better job of educating employees. More incidents of consumers being threatened and cards confiscated can only lead to greater awareness, upsetting this uneasy truce.

cemp

Game theory, vehicle weight and safety

April 22nd was Earth Day, and this prompted a good deal of reflection in the mainstream media about climate change and curbing emissions. One readily identifiable culprit for carbon emissions is transportation: in the US it accounts for 25% of all CO2 emissions. It is also by far the easiest target for ranting and hand-wringing because it is at least partially a direct reflection of individual choice. (Strangely enough, the amount of energy used to warm the McMansion through a cold Northeast winter is somehow considered a fixed variable, while SUVs continue to inspire wrath for their inefficiency.) There is increased focus on fuel efficiency and vehicle choice. For the first time, the issue has moved beyond the moral overtones of saving the environment to simple economics: when gasoline is pushing $4/gallon, fuel economy suddenly begins to make sense. US automobile manufacturers have been trying to outdo one another in green-washing their mediocre track records by embracing the language of virtue. Congress recently weighed in and decided to raise the CAFE standards frozen since the 1980s, a change those same manufacturers vehemently opposed.

Fuel economy is just one factor determining vehicle choice. (In fact it is also just one factor determining carbon emissions: a gas guzzler driven 5K miles a year still emits less than a hybrid driven 30K miles. The Earth does not care about the efficiency of pollution, only the total amount of carbon emitted.) The New York Times ran an interesting article in the April 13 Sunday paper, titled Tiny saves gas, but big can save lives, looking at the relationship between weight and safety. One of the standard arguments reliably trotted out by talking heads against CAFE standards is that any "artificial" increase in fuel economy beyond what the market itself has generated will force consumers into driving tiny, unsafe death traps. This argument is clearly bogus for a number of reasons: vehicles of the same weight differ drastically in fuel economy, so even if weight were the only factor determining safety, improvements in aerodynamics, engines, and transmissions could increase MPG while keeping weight constant.
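The parenthetical point above is simple arithmetic. A quick sketch with illustrative numbers: the ~8.9 kg of CO2 per gallon of gasoline is the EPA's approximate figure, and the mileage and distance assumptions are hypothetical:

```python
# Total emissions depend on miles driven as much as on fuel economy.
CO2_PER_GALLON_KG = 8.9  # approximate CO2 released by burning one gallon of gasoline

def annual_co2_kg(miles_per_year, mpg):
    """Annual CO2 emissions, in kilograms, from the fuel burned."""
    return miles_per_year / mpg * CO2_PER_GALLON_KG

guzzler = annual_co2_kg(5_000, 15)   # inefficient car, driven sparingly
hybrid = annual_co2_kg(30_000, 45)   # efficient hybrid, driven heavily

print(f"guzzler: {guzzler:.0f} kg/yr, hybrid: {hybrid:.0f} kg/yr")
```

Under these assumptions the lightly driven guzzler emits roughly half the CO2 per year of the heavily driven hybrid, despite one third the fuel economy.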

Nevertheless the NYT article does examine the underlying premise ("weight equals safety") and confirms that it is for the most part correct, with some exceptions. It also dispels some misconceptions around comparisons across categories. Commercials often feature glowing results from crash tests run by the Insurance Institute for Highway Safety. Small cars such as the Smart Fortwo and Honda Fit do very well here, often earning top ratings. But as the article points out:

“In frontal crash tests the vehicles can be compared only against other vehicles of similar size and weight. That’s because in a frontal crash test the vehicle hitting a barrier provides the amount of striking force.”

In other words a 5-star rated small car is not necessarily better than a 3-star rated heavy one. This is basic physics: when two objects collide, the lighter one experiences greater acceleration forces. Side impact is a different story, however:

“The impact comes from a ram that strikes the car. Because the striking force is the same for each test it is possible to compare vehicles of different sizes.”

In this case the Honda Fit with the “good” rating is a true improvement over the much heavier Ford Crown Victoria getting the “marginal” rating.
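The frontal-crash caveat follows directly from conservation of momentum: in a head-on collision, the lighter vehicle absorbs the larger velocity change. A minimal sketch, with made-up masses and speeds:

```python
# Why frontal-crash ratings only compare within a weight class:
# momentum conservation dictates the lighter vehicle takes the bigger
# velocity change. Masses and speeds below are illustrative assumptions.
def delta_v(m_light, m_heavy, speed):
    """Velocity change for each vehicle in a perfectly inelastic
    head-on collision, both traveling at `speed` toward each other."""
    v_final = (m_light * speed + m_heavy * -speed) / (m_light + m_heavy)
    return abs(v_final - speed), abs(v_final + speed)

dv_light, dv_heavy = delta_v(1000, 2000, 15)  # kg, kg, m/s
print(dv_light, dv_heavy)  # 20.0 10.0 -- the light car takes twice the hit
```

The ratio of the velocity changes is exactly the inverse of the mass ratio, which is why a heavier car protects its occupants in such a collision regardless of star ratings.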

Because safety in frontal impacts remains a function of weight, this leads to a simple game-theory problem. If everyone had a vehicle of the same weight, fuel economy could be tackled independently of safety. Arguably, reducing that common weight would even improve safety, because lighter vehicles are less likely to damage each other in a collision. But this is not a stable situation: any one driver can gain a competitive advantage over the others by choosing a heavier vehicle. The advantage is short-lived, because others will reason the same way and opt for heavier automobiles. This is the prisoner's dilemma on a large scale: if everyone "cooperated" by opting for low weight, they would all be better off in terms of safety. But avoiding the sucker payoff (being disadvantaged by driving a lighter vehicle than the neighbors) creates an incentive to go with the heavyweight option, and when everyone does that they are all collectively worse off. The result is that any equilibrium likely to emerge is one where everyone drives the heaviest vehicle they can, assuming safety is the only criterion. It is not, and the increasing price of fuel is now pulling the market to a different equilibrium point. When most vehicles on the road become lighter simply as a result of high oil prices, the false dichotomy between fuel efficiency and safety may finally disappear from the repertoire of excuses for inaction.
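The dilemma can be written down as a toy payoff matrix. The payoff values are hypothetical, chosen only to reproduce the prisoner's dilemma structure described above:

```python
# Safety payoff for one driver, given their choice and everyone else's.
# Higher is better; the exact values are invented for illustration.
PAYOFF = {
    ("light", "light"): 3,  # mutual cooperation: a light, safer fleet
    ("light", "heavy"): 0,  # the sucker payoff
    ("heavy", "light"): 4,  # the defector's advantage
    ("heavy", "heavy"): 1,  # mutual defection: a heavy, dangerous fleet
}

def best_response(others):
    """The choice maximizing my payoff, given what everyone else drives."""
    return max(("light", "heavy"), key=lambda mine: PAYOFF[(mine, others)])

print(best_response("light"), best_response("heavy"))  # heavy heavy
```

Since "heavy" is the best response to either choice, the only equilibrium is everyone driving heavy, even though mutual "light" pays better for all, which is exactly the structure of the prisoner's dilemma.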

cemp

Dow at 36000 vs crude oil at $200

Book titles are meant to be provocative. During the dot-com boom, when no company with the word "internet" in its business plan could go wrong, two authors published Dow 36000: The new strategy for profiting from the coming rise in the stock market. This wildly optimistic prediction became a Business Week best-seller. But the authors' optimism was topped by another book called Dow 40000. That was still doom-and-gloom compared to Dow 100000, which trounced all competitors in the Business Week review aptly titled Talk about throwing the bull. Eight years after the bubble burst, the authors continue to provide amusement. Today the Dow Jones index hovers around 12750, about one third of the conservative 36K estimate, while the book itself has managed a whopping one-and-a-half stars from a total of three reviewers on the Barnes and Noble website.

Yet one equally outrageous prediction may be becoming reality faster than expected. In February 2006 Stephen Leeb and Glen Strathy published The coming economic collapse: how you can profit when oil costs $200 a barrel. Leeb is no stranger to playing the virtual Cassandra when it comes to apocalyptic renditions of the peak-oil theory. One year earlier he had co-authored The oil factor: protect yourself and profit from the coming energy crisis, so the newer publication could be considered a variation on a theme. But prominently committing to a specific price point in the title is a bold move, and as the authors of the Dow 36000 theory discovered, one that can make you look foolish quickly.

At the time, light crude oil traded around $70, roughly where it had settled after the supply crunch following Hurricane Katrina. That was already considered exorbitant, prompting consumer indignation and posturing from aspiring politicians eager to call oil companies on the carpet for alleged price-gouging. In a country like the US, where cheap oil and even cheaper gasoline are considered foundational pillars, the prediction that this price would triple, not as a temporary spike but as a stable long-term inevitability, would have been heresy.

Crude oil has recently cleared $125/barrel. This is not simply seasonal variation or the much-maligned summer driving season: between May 2007 and May 2008 alone the price more than doubled. The long-standing weakness of the US dollar has played a part. And Goldman Sachs predicts the situation is not about to improve: one recent report set a target of $150-$200 per barrel. For the authors of the oil-collapse book, it may be time for a revised second edition, because their predictions will be given a reality check much sooner than expected.

cemp

Level field for online games and hidden agendas

Security often makes a convenient excuse for hidden agendas. An article from Gaming Today looks at the possibility of officially sanctioned modifications to the Xbox 360 gaming console and concludes on a pessimistic note, quoting a group manager from the XNA initiative:

“I’m a little disturbed when I think about other systems and people using what we call native code – code that goes right down to the metal – and then allowing people to run script mods on top of that without the right security measures. It could be really dangerous.”

This is no doubt a thinly-veiled reference to the Sony PlayStation 3, which makes it very easy to install a different operating system: the option is right there in the UI. Is that a dangerous security vulnerability? To answer that question, a different question must be posed first: security of what? Dangerous to whom?

It is well known that one of the reasons for trying to lock down the hardware is that consoles are sold at a loss. The revenue from games and additional services is expected to recoup that loss and move the balance sheet into the black. If users could install Linux on the console, they would have simply acquired a very capable general-purpose computer on the cheap and opted out of the gaming ecosystem. This is a security problem all right, but what is being secured is the revenue stream, not user data.

Once that is acknowledged, the discussion quickly turns to the other bogey-man: cheating at online games. This is about to become the fifth horseman of the digital apocalypse, riding on the coat-tails of rampant P2P content piracy. Stopping piracy was one of the arguments for closing down the open PC architecture and replacing it with the Trusted Computing Group vision, where remote attestation capabilities would force users to run only "approved" software. Preventing cheating at online games falls into the same category and has also been cited as an example of the benefits of attestation. If all the users in a multi-player game can prove they are running the official game software, instead of one tweaked and perhaps modified with cheating aids, the playing field is leveled. The same argument could be made in favor of locking down gaming consoles. Unofficial software can give the player an unfair edge: by design, each player's computer receives a lot of information that the game keeps hidden, such as the location of other players. A modified client need not respect the rules of the game and could "see through walls," so to speak. Gary McGraw has done a lot of work on exploiting online games such as Second Life. This is an example of what happens when game modifications are easy (because the software runs on a PC) and the designers fail to appreciate that the user's machine is outside the trust boundary.
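A sketch of the "see through walls" problem: the state pushed to each client already contains positions the UI is supposed to suppress, so a modified client simply skips the filtering step. The data layout here is hypothetical, not any real game's protocol:

```python
# What the server sends to every client, including state the honest
# client is expected to hide from the player.
world_update = {
    "players": [
        {"name": "alice", "pos": (10, 4), "visible": True},
        {"name": "bob",   "pos": (55, 9), "visible": False},  # behind a wall
    ]
}

def honest_render(update):
    """The official client respects the visibility flag."""
    return [p["name"] for p in update["players"] if p["visible"]]

def modified_render(update):
    """The cheating client ignores it: an instant wall-hack."""
    return [p["name"] for p in update["players"]]

print(honest_render(world_update))    # ['alice']
print(modified_render(world_update))  # ['alice', 'bob']
```

The robust fix is to never send hidden state to the client in the first place; attestation merely tries to guarantee that the filtering code above runs unmodified.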

This becomes the last refuge for arguing that modifications to a console pose a security threat. But even this argument is qualified: a user modifying their console at home is not a threat to anyone else's gamer ego until they connect to a multi-player game. In that case, the problem could have been framed as detecting modded consoles rather than preventing modifications in the first place, which also happens to be an easier problem.

cemp

Giga-pixel aerial imaging

Courtesy of a Google News Alert on the keyword “surveillance.”

Semi-professional digital SLRs have recently broken the ten-megapixel barrier, and very high-end models reach upwards of twenty MP. Impressive for prints, but they cannot even approach the gigapixel sensor described in this article. Don't expect to find it at the local electronics retailer: it is designed for ISR (intelligence, surveillance, reconnaissance) applications. In other words, this is the next-generation eye in the sky. Mounted on a six-axis, gyroscopically stabilized platform, the system boasts four focal planes with 92 five-megapixel sensors on each, providing a sixty-degree field of view at a resolution of 15cm on the ground. Dubbed ARGUS-IS, the design is as much an information-processing marvel as an optical one: those sensors generate vast amounts of data, carried around by the same type of fiber-optic cables that comprise the Internet backbone and compressed on board the airplane before being transmitted to the downlink over a broadband channel approaching 300 Mbps.
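The arithmetic behind the onboard-compression requirement is straightforward. Assuming one byte per pixel and one frame per second (the actual sensor readout details are not public, so these are illustrative assumptions):

```python
# Raw sensor output vs. the ~300 Mbps downlink quoted in the article.
planes, sensors_per_plane, pixels_per_sensor = 4, 92, 5_000_000
total_pixels = planes * sensors_per_plane * pixels_per_sensor
print(f"{total_pixels / 1e9:.2f} gigapixels")  # 1.84

raw_bps = total_pixels * 8   # assumption: 1 byte/pixel, 1 frame/second
downlink_bps = 300e6
print(f"needs roughly {raw_bps / downlink_bps:.0f}:1 compression")
```

Even at this leisurely one frame per second, the raw stream outruns the downlink by a factor of about fifty, so the compression has to happen on the airplane.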

If the trickle-down effect holds for surveillance technology, there will be some traces of this in consumer electronics one day.

cemp

Cross-platform vulnerabilities: revisiting the mono-culture risks

One of the CNet articles covering the 2008 RSA conference makes a new point about the competitive standing of the different operating systems: namely, that it may not be the OS itself that matters at this point. In "Mac Security Not So Much About the Mac," the author Tom Krazit argues that as operating systems have been hardened, threats have moved up the stack to applications running on top of the platform, often written by vendors with no connection to the OS vendor:

“At the CanSecWest conference, no one was able to take control of three laptops in play (the MacBook Air, a Fujitsu running Windows Vista Ultimate, and a Sony Vaio running Ubuntu) when attacks were confined just to the operating system. But Miller’s Safari exploit, and the Flash flaw later exploited by Shane Macaulay, Derek Callaway, and Alexander Sotirov on the Vista laptop, show how security threats are now much more focused on the browser, rather than the operating system.”

The comparison is not quite accurate, because Safari is written by Apple and distributed aggressively, including the recent 3.1 update pushed on all Windows iTunes users who may have expressed no interest in having yet another web browser. Flash, on the other hand, is now associated with Adobe after its acquisition of Macromedia. No connection to MSFT there; in fact they are arguably competitors. (Over the years Flash emerged as a successful new platform on top of web browsers for delivering rich client experiences, something Java attempted with much fanfare before it flamed out and Sun refocused its efforts on the enterprise market. More recently MSFT has positioned Silverlight as an alternative to Flash to regain developer mind-share.) Safari is as much a part of the Apple platform as Internet Explorer is rightly considered a part of the operating system; the latter was a central argument in the bundling question from the DoJ anti-trust trial of the late 1990s. This would not be the first time Flash caused problems: its deliberate opening of backdoors in the same-origin policy, and its flawed implementation of the controls for that backdoor (namely the well-documented, over-zealous willingness to see a cross-domain policy in any conceivable piece of random data), led to significant problems for web sites in the past.
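The backdoor in question is Flash's cross-domain policy mechanism: a site opts out of the same-origin policy by serving a crossdomain.xml file from its document root. The notorious worst case is the wildcard policy:

```xml
<?xml version="1.0"?>
<!-- crossdomain.xml: domain="*" lets ANY Flash applet anywhere on the
     web read this site's responses, sent with the user's cookies. -->
<cross-domain-policy>
  <allow-access-from domain="*"/>
</cross-domain-policy>
```

Combined with Flash's willingness to accept a policy found in unexpected content, even a site that never intended to serve such a file could end up granting cross-domain access.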

Still, there is an interesting connection between this observation and the mono-culture argument from 2003. Flashback: a group of security professionals including Bruce Schneier, Dan Geer, and Peter Gutmann co-authored a position paper titled CyberInsecurity: The Cost of Monopoly. Subtitled "How the dominance of Microsoft's products poses a risk to security," the paper argued that having one operating system running on a large number of machines created a single point of failure, providing attackers an easy way to take out a large fraction of the infrastructure by exploiting just one vulnerability. No good deed goes unpunished: Geer was summarily dismissed ("promoted to customer") from @Stake, which at the time had a business relationship providing auditing and penetration-testing services to Microsoft.

Machines getting 0wned thanks to cross-platform extensions such as Flash pose a challenge for the mono-culture argument. After all, one of the benefits of Flash, like Java before it, is writing portable code that works in any web browser on any platform. But this also opens up the possibility of cross-platform vulnerabilities. Not all of the code for Flash is shared between, say, the Mac/Firefox version and the Windows/IE7 version, but at least some critical components are: bugs were recently discovered in the regular-expression engine affecting all platforms. The irony is that even as the installed base of operating systems diversifies, a middle layer designed to bridge the differences between those platforms creates the same risks as a mono-culture. The existence of such a middle layer is guaranteed by market conditions, whether it is Java, Flash, or Silverlight: it is not economical for developers to target code to every possible hardware, OS, and browser combination. An intermediate layer gives up some of the power and expressiveness achievable with code "native" to a specific platform, but in return promises greater reach across all platforms. The mono-culture argument taken to its logical conclusion would suggest that not all users should have Flash: some should have only Silverlight, and perhaps others should rely on Java for rich-client experiences. (Installing all of them side by side does not help, since the mere presence of an extension is enough to make it exploitable.) At this point the argument is running against market dynamics.
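The point about a universal middle layer can be made concrete with a toy model; the market shares and component names below are hypothetical:

```python
# OS diversity vs. a plugin installed everywhere: if one component is
# universal, a single bug in it reaches 100% of machines no matter how
# diversified the operating systems underneath are.
fleet = (
    [{"os": "windows", "plugins": {"flash"}} for _ in range(70)]
    + [{"os": "mac", "plugins": {"flash"}} for _ in range(20)]
    + [{"os": "linux", "plugins": {"flash"}} for _ in range(10)]
)

def reach(component):
    """Fraction of machines exposed to a bug in the given component."""
    return sum(component == m["os"] or component in m["plugins"]
               for m in fleet) / len(fleet)

print(f"OS bug: {reach('windows'):.0%}, plugin bug: {reach('flash'):.0%}")
```

Under these assumptions an OS bug reaches only the 70% running that OS, while a bug in the shared plugin reaches every machine, which is exactly the mono-culture risk reappearing one layer up the stack.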

cemp

Clean coal, 2+2=5 and other delusions

The public-relations salvo against global-warming legislation is already underway, even before any concrete proposal has been introduced in either the House or the Senate. The Washington Post notes that a group backed by the coal industry is spending $35M on a new ad campaign in primary and caucus states to spread the message that coal is a clean fuel. Bearing the appropriately Orwellian name Balanced Energy Choices (similar to how the campaigns against raising fuel-economy standards used to be called "Concerned/Anguished/Distraught Citizens for Vehicle Choice"), the TV spots use the catchy image of a power cable being plugged into a lump of coal. True enough, considering that roughly half of US power generation comes from coal, and it is the one fuel the world is not in any danger of running out of anytime soon. The rest of the message is at best disingenuous: as the Post article points out, the definition of "clean" conveniently excludes carbon emissions.

Strangely, the message has not made it very far online: Googling for clean coal returns no top matches for the slick campaign website or the commercial itself praising the virtues of energy security. Not even a sponsored result. Instead the collective wisdom of the web responds with a balanced perspective on technologies such as IGCC that promise to extract comparable energy with a fraction of the emissions associated with directly burning the fuel. One of the hits points to an article from last year's Sierra Club magazine, and another on the second page is a blistering indictment of the concept from the Washington Post op-ed page. That is not exactly a success story, considering the commercial spots were produced by the same company responsible for the "what-happens-here-stays-here" advertising for Las Vegas.

cemp

The future of diesel: still cloudy

Treehugger looks at the possibility of diesel becoming more popular in the US for mainstream automobiles. After a bad experiment in the 1970s and '80s, diesel cars were relegated to niche status, with only a handful of manufacturers, most notably Volkswagen, continuing to offer them as passenger cars. Many diesel models sold in Europe were never imported stateside, and large commercial trucks remained the primary application owing to better fuel economy, reliability, and cost. Even as diesels progressed far beyond their old reputation for noise and soot, environmentalists continued to gripe about this state of affairs. Some pinned their hopes on a diesel revival for reducing carbon emissions, and because these engines can be converted to run on biodiesel mixtures, including 100% blends of used vegetable oil. The occasional success story, no matter how far removed from the mundane world of passenger cars, such as Audi winning the 24 Hours of Le Mans in 2006 with a diesel race car, kept those hopes alive.

But the current prospects are not good. California tightened emissions standards relating to sulfur in diesel, restricting the type of fuel that can be used legally. More importantly, the price difference between gasoline and diesel has inverted: diesel is now the more expensive fuel, and the change was abrupt.

“Over the past year, the average price of diesel in America has risen by 117%—twice as fast as petrol. While both carry the same taxes in America, diesel now costs 60 to 70 cents a gallon more than regular gas. […]”

At least some economists expect this premium to increase to the point of canceling out the improved mileage from a pure cost point of view. (Reduced carbon emissions remain as a benefit.) Meanwhile the cutting edge for high-efficiency vehicles appears to be concentrated on gasoline-electric hybrids or fully electric vehicles, even though a few diesel hybrids are in the works. Diesel may just become another Betamax: a better technology whose time never comes because of market quirks.
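The break-even point is easy to estimate. The figures below are hypothetical but typical of the era: regular gasoline at $4.00/gallon and 30 MPG, against a comparable diesel returning 30% better mileage:

```python
# At what diesel price does the better mileage stop saving money?
gas_price, gas_mpg = 4.00, 30
diesel_mpg = gas_mpg * 1.30  # assumed 30% mileage advantage

# Diesel price at which cost per mile matches the gasoline car:
breakeven_price = gas_price * diesel_mpg / gas_mpg
print(f"break-even diesel price: ${breakeven_price:.2f}/gal "
      f"(a ${breakeven_price - gas_price:.2f} premium)")
```

Under these assumptions the premium would have to reach $1.20/gallon before diesel loses on fuel cost alone, roughly double the 60-70 cent gap quoted above, which is why the economists speak of the gap widening further rather than of diesel being uneconomical today.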

cemp

E-voting: how not to save money with IT

White papers are full of case studies on how the judicious use of information technology can help organizations achieve more with fewer resources. Unfortunately for the state of Maryland, its brief experiment with electronic voting and Diebold touchscreen devices will not be one of them. My friend Kim Zetter has recently published a new article over at the Threat Level blog about the aftermath of the Maryland debacle. Sanity prevailed after a brief experiment with touch-screen voting that catalyzed the movement against direct-recording electronic (DRE) machines and catapulted Diebold into the national limelight as the #1 enemy of fair elections. The state has gone back to optical-scan machines; meanwhile the expensive equipment gathers dust while Diebold continues to collect on maintenance contracts for machines that are only trustworthy enough for electing the high-school mascot.

One of the interesting points in the article is that the machines are high-maintenance. Quoting Rebecca Wilson of the Maryland-based advocacy group SaveOurVotes.Org:

“They take up huge amounts of warehouse space in warehouses that need to be air-conditioned,” she continues. “They have to recharge the batteries every six months. And (yet) we only haul them out about once a year (for elections).”

According to their estimates, the state will have spent close to $100M of taxpayer money by the time the dust settles. This is on average an increase of over 150% per voter across the board; for certain sparsely populated counties, it is close to an order of magnitude higher. Here is one IT deployment aspiring MBA students will not be reading about in their case studies on cost-cutting.
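For scale: the $100M total is from the article, but the registered-voter count below is a hypothetical round figure used only for illustration:

```python
# Rough per-voter cost of the Maryland deployment.
total_cost = 100e6         # ~$100M, per the article's estimate
registered_voters = 3e6    # assumption: roughly 3 million voters statewide
print(f"~${total_cost / registered_voters:.0f} per registered voter")
```

Whatever the exact voter count, tens of dollars per voter for machines now gathering dust in air-conditioned warehouses is the opposite of an IT cost-saving story.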

cemp

Making sense of identity management statistics

There are lies, statistics, and identity management figures.

Are there a quarter billion OpenIDs? That would be the conclusion suggested by an announcement on the OpenID website two months ago. But how many of those users have actually used the OpenID protocol even once to authenticate anywhere? For that matter, what fraction even know what an OpenID is? This has been a major problem for any identity system that spans multiple sites. Users have by now been trained to lower their expectations and come to terms with islands of disconnected identity: each username/password works on one website only. Any system where users can authenticate to more than one relying party is confronted with the challenge of explaining this to users. (For example: "If you have a Hotmail or Messenger account, then you have a .NET Passport.")

Does having the CardSpace bits on 50% of desktops represent a tipping point for the technology to magically take off? By this logic, passwords ought to be about as archaic as the vinyl record, because nearly 100% of desktops have supported TLS client authentication and smart cards since 2000. Even if we disregard Firefox and its PKCS#11-based interface and focus only on IE running on Windows, that is over 80% of all consumer PCs. Why isn't everyone authenticating with digital certificates, as the PKI vendors have prophesied for the past decade?

cemp