Friendly spam: account hijacking and unintended consequences

In the past week, this blogger received links from two friends hawking shady pharmaceutical products: one was sent from a Gmail account, and the other scrawled directly on a Facebook wall. This was odd, to say the least. Both friends remain gainfully employed and are unlikely to dabble in direct marketing on the side: one is at MSFT, and the other works in financial services in Manhattan. Instead they had become victims of an account takeover: perhaps they fell for a phishing scam, perhaps they logged into their accounts from a public computer infected by malware, or, in the worst-case scenario, one of their personal machines had been 0wned.

So far, nothing new: in modern society, phishing attacks and large-scale machine compromise, compliments of Adobe, Sun/Oracle and MSFT, are par for the course. What is unusual is the way the attackers are trying to leverage that access: sending spam to the other email addresses on the contact list. All things considered, this is a very mild outcome. A couple of factors may be at work:

  • Spam is economically viable, so much so that attackers do not bother trying to extract more value from compromised accounts. The revenue opportunity in spam has been well studied in the security literature. The novel twist here is that the message comes from a friend, and may enjoy even higher click-through rates. (Keep in mind that spamming is a very noisy activity: eventually one of the friends on that contact list is bound to reply and inform the victim that their account has been 0wned.)
  • There is a surplus of compromised accounts out there, so many that attackers do not have time to manually sort through each one and identify the interesting ones. Presumably the personal email account of a financial analyst is worth more than that of an average Hotmail user. Even though it is not their work email, there may still be connections, interesting messages or stepping stones to other accounts. Using such an account for indiscriminate spam seems inefficient, a waste of opportunity.
  • Attackers have not been able to automate the classification of each account as a high- or low-value target. If so, this is only a temporary roadblock. Given the profile information from an account (which very likely includes a real name), it would be relatively easy for an individual to run a Google search on that person. Facebook accounts make this easier by identifying networks, groups and past employers. Even simple keyword searches over mail, e.g. for names of banks or phrases appearing in legal briefs, could serve as the basis of heuristics for locating accounts with useful information, as in the sketch below.
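
To make that concrete, here is a minimal sketch of such a triage heuristic, assuming the attacker already has the raw message text from each compromised mailbox. The keyword list, weights and threshold are all hypothetical; a real operation would presumably tune them against actual data.

    # Hypothetical sketch: score each compromised mailbox by how "interesting"
    # its contents look, so high-value accounts can be triaged automatically.
    # Keywords, weights and threshold are invented for illustration.

    HIGH_VALUE_TERMS = {
        "wire transfer": 5,
        "attorney-client": 5,
        "term sheet": 4,
        "account number": 3,
        "password reset": 2,
    }

    def score_mailbox(messages):
        """Return a crude value score for a list of message bodies."""
        score = 0
        for body in messages:
            text = body.lower()
            for term, weight in HIGH_VALUE_TERMS.items():
                if term in text:
                    score += weight
        return score

    def triage(accounts, threshold=10):
        """Split accounts into (inspect manually, use for bulk spam)."""
        high, low = [], []
        for name, messages in accounts.items():
            (high if score_mailbox(messages) >= threshold else low).append(name)
        return high, low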

Finally, the proliferation of spam through friendly channels could be an encouraging sign that spam filters have gotten very good – to the point that attackers find it necessary to take over legitimate accounts and exploit existing trust relationships with their contacts as a more reliable delivery mechanism. In that case the war on spam would have the highly ironic side effect of increasing the pressure on existing user accounts.

CP

Imperfect censorship: making sense of web blocking statistics

Compare the availability of YouTube in two different countries over the same time frame during the past year:

(These graphs come from Google’s Transparency Report, which also contains information about legal subpoenas for user information.)

Both countries had been blocking YouTube, among other Google services, but the charts appear to indicate that the censorship in Turkey has been less than watertight. There are noticeable dips at the beginning of April and June, with occasional spikes that may represent either a temporary failure in the blocking or an organic surge in volume. The block appears to be lifted in November, and traffic once again recovers. What is unusual is that the normalized volume never flatlines: it does not go down to zero, or even hover in the single-digit percentages.

Contrast that with Iran, where the normalized volume never climbs above a few percent, and hovers below 1% after what appears to be a tweak to the censorship implementation around August.

CP

Case study on the perils of identity federation

This forceful critique (to put it mildly) of OpenID, written from a website/business owner’s perspective, highlights one of the main leaps of faith involved in federation: taking a dependency on a third party for the well-being of your own business.

There is a lot going on in the debacle described in the original post. Some of the problems could be attributed to “implementation issues,” the vague catch-all category, equivalent of “pilot error,” that we fall back on to explain away incidents without attributing a systematic cause: JanRain randomly changing APIs without proper communication, Google changing the identifier returned, inconsistencies between user profiles returned by different OpenID providers, etc. These are not supposed to happen – better change tracking could have prevented some of the bone-headed mistakes involved. The instability of the OpenID standard and the general lack of interoperability among implementations are the unfortunate outcome of a highly politicized standards process that results from reluctantly bringing avowed enemies to the negotiating table. (Inexplicably, the US government has decided to throw its weight behind this already hobbled standard by empowering the National Institutes of Health to work on a pilot program for federal adoption.) But again, this is business as usual in trying to forge consensus on Internet standards, and not intrinsic to OpenID in particular or interoperability in general.

At the same time there are deeper issues at play, and these are inherent in any identity federation scheme. To quote the metaphor used by the original author:

[…] of all the failure points in your business – you really don’t want the door to be locked while you stand behind the counter waiting for business. No, let me rephrase that: you don’t want the door jammed shut, completely unopenable while your customers wait outside – irate that you won’t let them in.

Put simply, when users log in to your website using a third-party identity provider (“IDP”), your business is at the mercy of that provider. If the IDP experiences a service outage, users cannot log in to your website either. If it decides to experiment with a brand-new user interface that confuses half its users, your website loses traffic.

Some of the risks can be mitigated contractually. For example, the IDP could commit to a particular service-level agreement, say an expected uptime of 99.99%. (Even that still allows roughly 52 minutes of downtime per year.) But no IDP in existence is willing to shoulder the burden of full liability for losses incurred at relying-party sites. Your website can make a compelling case that the inability to authenticate users for an hour has resulted in the loss of a thousand dollars, going by historical traffic patterns. The most you are likely to get out of the IDP is a profuse, heartfelt apology and, at best, a refund for that month. The incentives are highly asymmetric.

One could argue that specialization and economies of scale will compensate for this: JanRain is presumably handling authentication for thousands of websites, so they are in a position to invest in very high-reliability infrastructure and maintain a strong security posture. In principle, then, they are less likely to experience an outage (compared to what each relying party could manage on its own), less likely to get breached in an embarrassing manner, as Gawker recently managed to, and more likely to respond quickly to security incidents in the worst-case scenario. On the other hand, as the probability of catastrophic failure decreases, the damage potential from any such failure goes way up. An outage or breach at JanRain impacts not just the author of that blog post, but every other business using their OpenID interop service. More importantly, the risk is not a linear function of the number of users: scale attracts scrutiny, both from white-hat researchers and from black-hats looking to capitalize on a lucrative target.

The above scenario only considered unintentional outages. What about cases where service is withheld on purpose? Presumably the IDP is getting paid by the site for its service. What happens when it is time to renew the contract? What if negotiations with the IDP go south and they decide to hold your users “hostage,” refusing to authenticate them to your site until you agree to a higher price? If users are known only by their external identity, it is going to be very difficult to reestablish the link. The article quoted above describes the escape hatch required: collecting email addresses from users, so they can be authenticated independently, presumably by verifying that email. Of course this obviates one of the arguments for OpenID, namely that individual websites no longer have to worry about the complexity and cost of operating their own authentication system. This is in fact what the original post concluded with: changing the site to nudge new users toward an in-house authentication system instead of promoting OpenID. A minimal sketch of this dual-path design follows.
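
For concreteness, here is a minimal sketch of that escape hatch, assuming a toy in-memory user store; all names and helpers are hypothetical rather than anything prescribed by OpenID. The point is simply that each account keeps an independently verified email address alongside the external identifier, so the site can fall back to its own email-based login if the IDP becomes unavailable or uncooperative.

    # Hypothetical sketch: every account links the external OpenID identifier
    # to an independently verified email address, so the relying party is
    # never solely dependent on the IDP to reach its own users.

    import secrets

    USERS = {}  # email -> account record; stand-in for a real database

    def send_mail(address, subject, body):
        """Stub: a real site would hand this off to its mail system."""
        print(f"to={address} | {subject}\n{body}")

    def register(openid_identifier, email):
        """Key the account by verified email, not by the IDP's identifier."""
        USERS[email] = {"openid": openid_identifier, "verified": False}
        send_mail(email, "Verify your address", "Click the link to confirm.")

    def login_via_idp(openid_identifier):
        """Normal path: trust the IDP's assertion about the identifier."""
        for email, record in USERS.items():
            if record["openid"] == openid_identifier:
                return email
        return None

    def login_via_email(email):
        """Fallback path: if the IDP is down or hostile, re-verify ownership
        of the email address with a one-time login link instead."""
        record = USERS.get(email)
        if record is None or not record["verified"]:
            return None
        token = secrets.token_urlsafe(16)
        record["login_token"] = token
        send_mail(email, "Your login link", f"https://example.com/login/{token}")
        return token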

CP

Temporarily using a Nexus S in Istanbul

Some pitfalls for the unwary, before popping in a new SIM:

  • Switching SIMs will remove passwords from saved accounts and break existing sync. This is a general property of Android, and perhaps someone can explain the reason for this “feature.” Conspiracy-minded critics are likely to cry “carrier-humping surrender monkeys!” again. The SIM is the carriers’ instrument of customer lock-in; why create one more hurdle to switching providers, even when the switch is temporary? Replacing the original SIM does not recreate the lost credentials. Granted, this is not irreversible: account names are still persisted and one can retype passwords – although it can be quite frustrating to enter symbols and punctuation marks on the inane virtual keyboard. Let’s not even get started on the difficulty of obtaining access codes for accounts set up with the new 2-step verification feature. It is not clear what threat this is defending against; merely removing the SIM without replacing it does not have this effect. Only inserting a new SIM appears to trigger the behavior, so it is useless in theft scenarios where the adversary removes the SIM to block remote-wipe instructions. Incidentally, it would be a real security feature if credentials were stored on the SIM card and never exported, with an applet on the SIM responsible for authentication. After all, the SIM is the only ubiquitous secure element found in every GSM phone. The carrier lock-in effect would persist, but at least there would be a redeeming virtue in improved protection for credentials. Unfortunately the contents of the SIM are tightly controlled by carriers, and uploading your own Java Card applet for other useful functionality has been a non-starter as far as business plans go. This is a major squandered opportunity for improving authentication across the board.
  • Configure the OS to not lock the SIM card. In the US most SIM cards do not require a PIN; in Turkey they appear to, and all the prepaid Turkcell cards I have seen had both the regular PIN and a PIN2 for restricting dialed numbers. This adds one more step to the phone unlock process, on top of the existing pattern or passcode. A better design would have been for the operating system to recognize that there is already a lock mechanism on the device and cache the PIN automatically (see the sketch after this list). (That said, the screen lock is easier to bypass, as it is implemented in software; even the smudge patterns left on the screen have recently been shown to be vulnerable. By comparison, the tamper-resistant SIM enforces its own lock-out mechanism against guessing attempts.)
  • Mysteriously, navigation does not work. Google Maps itself works like a charm – at least for now; Turkey does have a track record of blocking and unblocking Google services at seemingly random intervals. Not surprisingly, GPS is very accurate and the turn-by-turn directions are correct. But the device does not switch into navigation mode, hanging on “checking if navigation is available.” Fail.
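
Here is a minimal sketch of the PIN-caching design suggested above. It is purely illustrative: the sim and device_lock objects stand in for hypothetical OS interfaces, and nothing here reflects how Android actually manages SIM PINs.

    # Illustrative sketch only: cache the SIM PIN in an OS-protected keystore,
    # keyed by the card's ICCID, so the user types it once per SIM instead of
    # on every boot. "sim" and "device_lock" are hypothetical OS interfaces.

    class Keystore:
        """Stand-in for an OS-protected credential store."""
        def __init__(self):
            self._data = {}
        def get(self, key):
            return self._data.get(key)
        def put(self, key, value):
            self._data[key] = value

    def unlock_sim(sim, device_lock, keystore):
        """Replay a cached PIN once the device lock has proven user presence."""
        if not sim.pin_required():
            return True
        cached = keystore.get(("sim_pin", sim.iccid))
        if cached is not None and device_lock.is_unlocked():
            return sim.verify_pin(cached)
        # First boot with this SIM: prompt once, then cache on success.
        pin = device_lock.prompt_user("Enter SIM PIN")
        if sim.verify_pin(pin):
            keystore.put(("sim_pin", sim.iccid), pin)
            return True
        return False

Keying the cache by ICCID means a newly inserted SIM still prompts once, while the original card unlocks transparently on every subsequent boot.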

CP

Stuxnet and collateral damage

To update von Clausewitz’s maxim for contemporary times: “Malware is the continuation of politics by other means.” This is one of the lessons of the ongoing Stuxnet debate: targeted computer attacks have become part and parcel of nation-states’ arsenals for carrying out foreign policy objectives.

There have been solid technical analyses of Stuxnet’s complex inner workings, but the debate on policy implications is only now starting in earnest. One question that has been overlooked is the extent of collateral damage deemed tolerable in carrying out this type of attack.

Stuxnet was an odd combination: targeted very precisely, yet casting an extremely wide net. The malicious payload that infected industrial controllers only kicked into gear when it detected a very specific environment, believed to represent the uranium enrichment plant operating in Iran. On the other hand, because software development for such critical facilities typically takes place behind air-gapped networks, the worm had to be released into the wild. Its humble beginnings were no different from those of the self-propagating malware that wreaked havoc in the past: Code Red, Nimda, Blaster, Slammer, … Except Stuxnet was light-years ahead of its predecessors in sophistication and in the sheer number of different vectors used to infect new targets.

Because it was after a very specific target that would not be reachable directly from the Internet, the designers threw the kitchen sink at the problem, including an exploit that allowed the malware to propagate via USB drives between machines. This meant Stuxnet would eventually reach places that vanilla malware does not, including compartmentalized networks that had been assumed to be isolated from the warzone that is the Internets. Stuxnet was designed to explore every nook and cranny of that space in pursuit of its ultimate target, the programmable logic controllers destined to spin enrichment centrifuges. Given its indiscriminate approach to spreading, it is surprising that most of the infections remained contained in Iran, with smaller numbers in Indonesia and India – countries starting with “I” apparently did not fare well. By comparison, the number of infections in the US was not significant.

The first question, then, is what other systems are “fair game” on the way to reaching an objective. The Stuxnet case is complicated by the fact that the presumed target is not directly reachable. Intermediate stepping stones are required to get there, which may end up being personal computers, Internet cafes, anything ultimately connected to the persons of interest by some unexpected six-degrees-of-separation logic. (This brings to mind the quote from Robert H. Morris Sr.: “To a first approximation, every computer in the world is connected with every other computer.”) Worse, the connections are not known in advance: it is a massively parallel search, exploring every possible path in hopes that one will cross paths with the actual target. Such an expansive view of scope risks turning every machine in the world into collateral damage in the name of reaching the destination.
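
Abstractly, that search behaves like a breadth-first traversal of a connectivity graph, infecting everything it touches along the way. The toy sketch below illustrates the idea; the graph and node names are invented and imply nothing about Stuxnet’s actual logic.

    # Toy model of worm propagation as breadth-first search: every reachable
    # machine gets infected in hopes that some path leads to the real target.
    # The graph and names are invented for illustration.

    from collections import deque

    def spread(graph, patient_zero, is_target):
        """Return (set of infected machines, path to target or None)."""
        infected = {patient_zero}
        queue = deque([(patient_zero, [patient_zero])])
        while queue:
            node, path = queue.popleft()
            if is_target(node):
                return infected, path
            for neighbor in graph.get(node, []):  # USB, shares, exploits...
                if neighbor not in infected:
                    infected.add(neighbor)
                    queue.append((neighbor, path + [neighbor]))
        return infected, None

    # Example: everything is infected while searching for the "plc" node,
    # including the dead-end cafe-pc that was never on the path at all.
    graph = {
        "contractor-laptop": ["cafe-pc", "office-lan"],
        "office-lan": ["engineering-ws"],
        "engineering-ws": ["plc"],
        "cafe-pc": [],
    }
    infected, path = spread(graph, "contractor-laptop", lambda n: n == "plc")
    print(sorted(infected), path)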

The second dimension concerns damage. On most machines it infected, Stuxnet did nothing but propagate to other targets. Again there is a similarity to the massive worm outbreaks of the good old days – with the exception of Witty, most contained no malicious payload. Even if Stuxnet happened to land on a computer where some unlucky engineer had been tasked with developing software for industrial controllers in an unrelated industry, the tampered product would likely have worked flawlessly in its intended environment. This is not to say that there was no cost to Stuxnet for those in its path: there is still the time and productivity wasted on removing the malware, for individuals and companies alike. On the other hand, the economic impact for software vendors is murky. Antivirus vendors benefit from trumping up scare stories, and this one fits the bill perfectly, complete with cloak-and-dagger nation-state implications. Similarly, it is difficult to argue that MSFT suffered great expense in addressing the vulnerabilities implicated in Stuxnet, considering their leisurely patch schedule in the presence of known 0-days.

In any case, it is misleading to focus on the designers’ intent not to harm systems – far from being a magnanimous gesture on their part, it was simply following best practices in malware design. Noisy, buggy malware is the kind that gets noticed and removed. Stealth is a survival strategy: even run-of-the-mill keystroke recorders designed to steal credit card numbers in the name of petty theft strive to be very stable. Vandalizing user data, blue-screening the system or displaying in-your-face popup advertisements is the surefire way to get your malware noticed by an AV vendor. (Interestingly enough, Stuxnet was noticed by Kaspersky and filed away as vanilla malware a full year before its inner workings were properly understood.) The problem is that modern operating systems are incredibly complex, and it is not possible to guarantee that malware lives up to a promise of zero collateral damage. When Robert Morris Jr. released the Internet worm, he intended it only to propagate, with no malicious payload and barely noticeable load on infected systems. But a slight miscalculation in the logic caused it to overwhelm networks and machines. Even MSFT cannot ship software updates without breaking users in some unexpected, obscure configuration – and they have far more QA expertise and a far larger test matrix than organizations developing malware.

The network infrastructure has long been a battleground, with participants of every scale, from hobbyist vandals to organized crime groups and nation-states, duking it out with packets. The question raised by Stuxnet is whether these frontlines will expand to include the machines owned and used by ordinary citizens, turning them into dispensable pawns in pursuit of an elusive objective.

CP