Reputation in the sharing economy
[Full disclosure: this blogger was head of information security for Airbnb 2013-2014]
In a recently published Vice article titled “I Accidentally Uncovered a Nationwide Scam on Airbnb,” a journalist goes down the rabbit hole of tracking down instances of fraud at scale on the popular sharing-economy platform. The scam hinges on misrepresentation: unsuspecting guests sign up for one listing based on the photographs, only to be informed minutes before their check-in time about an unforeseen problem with the unit that precludes staying there. Instead, the crooks running the scam direct the guest to an allegedly better or more spacious unit also owned by the host. As expected, this bait-and-switch does not turn out well for the guests, who discover upon arrival that their new lodgings are less than stellar: run-down, unsanitary and in some cases outright dangerous.
First, there is no excuse for the failure to crack down on these crooks. As the headline makes clear, this is not an isolated incident. Multiple guests were tricked by the same crook in exactly the same manner. In an impressive bit of sleuthing, the Vice journalist proceeds to identify multiple listings on the website using staged pictures with the same furniture and reach out to other guests conned by the same perpetrator. (She even succeeds in digging up property records for the building where the guests are routed after their original listing mysteriously becomes unavailable, identifying the owner and his company on LinkedIn.) Airbnb is not a struggling early-stage startup. It has ample financial resources to implement basic quality assurance: every listing must be inspected in person to confirm that its online depiction does not contain materially significant misrepresentations. The funds used to fight against housing ordinances or insult public libraries in San Francisco would be better redirected to combating fraud or compensating affected customers. Ironically, the company exhibited exactly such a high-touch approach in its early days, when it was far more constrained in workforce and cash: employees would personally visit hosts around the country to take professional photographs of their listings.
Second, even if one accepts the premise that 100% prevention is not possible— a point-in-time inspection does not guarantee the host will continue to maintain the same standards— there is no excuse for the appalling response from customer support. One would expect that guests are fully refunded for the cost of their stay or, better yet, that Airbnb customer support can locate alternative lodgings in the same location in real time once guests discover the bait-and-switch. These guests were not staying on some remote island with few options; at least some of the recurring fraud took place in large metropolitan areas such as Chicago, where the platform boasts thousands of listings to choose from. Failing all else, Airbnb can always swallow its pride and book the guest into a hotel. Instead, affected guests are asked to navigate a Kafkaesque dispute-resolution process to get their money back for even one night of their stay. In one case the company informed the guest that the “host”— in other words, the crooks running this large-scale fraudulent enterprise— had a right to respond before customer support could take action.
Third, the article points to troubling failures of identity verification on the platform, or at least identity misrepresentation. It is one thing for users of social networks to get by with pseudonyms and nicknames. A sharing platform premised on the idea that strangers will be invited into each other’s place of residence is the one place where verified, real-world identity is crucial for deterring misconduct. If there is a listing hosted by “Becky” and “Andrew,” customers have every reason to believe that there are individuals named Becky and Andrew involved with that listing in some capacity. The smiling faces in the picture need not be the property owners or the current leaseholders living there. They could be agents of the owner helping manage the listing, or even employees at a company that specializes in brokering short-term rentals on Airbnb. But there is every expectation that such individuals exist, along with a phone number where they can be reached— otherwise, what is the point of collecting this information? Instead, as the article shows, they appear to be fictitious couples with profile pictures scraped from a stock-photography website. The deception was in plain sight: an Airbnb review from 2012 referred to the individual behind the profile by his true name, not the fabricated couple identity. While there is an argument for allowing shortened versions, diminutives, middle names or Anglicized names instead of the “legal” first name printed on official government ID, participants should not be allowed to make arbitrary changes to an existing verified profile.
To be clear: identity verification cannot necessarily stop bad actors from joining the platform, any more than the receptionist’s perfunctory request for a driver’s license stops criminals from staying at hotels. People can and do commit crimes under their true identity. One could argue that Airbnb ought to run a background check on customers and reject those with prior convictions for violent offenses. Aside from being obviously detrimental to the company’s bottom line and possibly even running afoul of laws against discrimination (not that violating laws has been much of a deterrent for Airbnb), such an approach is difficult to apply globally. It is only for US residents that a wealth of information can be purchased on individuals, conveniently indexed by their social security number. More to the point, there is no “precrime unit” à la The Minority Report for predicting whether an individual with an otherwise spotless record will misbehave in the future once admitted onto the platform.
Far more important is responding swiftly and decisively once misbehavior is identified, in order to guarantee that miscreants will never be able to rejoin the platform under some other disguise. At the risk of sounding like the nightmarish social-credit system being imposed in China as an instrument of autocratic control, one could envision a common rating system for the sharing economy: if you are kicked out of Airbnb for defrauding guests, you are also prevented from signing up for Lyft. (Fear not, Uber will likely accept you anyway.) In this case a single perpetrator brazenly operated multiple accounts on the platform, repeatedly bait-and-switching guests over to units in the same building he owned, leaving behind an unmistakable trail of disgruntled guest reviews. Airbnb still could not connect the dots.
The problem with peer reviews
Finally, and this is the most troubling aspect, the article suggests the incentive system for reviews is not working as intended. In a functioning market, peer reviews elicit honest feedback and accurately represent the reputation of participants. The article points to several instances where guests inconvenienced by fraudulent listings were reluctant to leave negative feedback. Even worse, there were situations where the perpetrators of the scams left scathing, fabricated reviews for the guests, in an effort to cast doubt on the credibility of the understandably negative reviews those guests were expected to leave.
Incidentally, Airbnb did change its review system around 2014 to better incentivize both parties to provide honest feedback without worrying about what their counterparty will say. Prior to 2014, reviews became publicly visible as soon as the guest or host submitted them. This created a dilemma: both sides were incentivized to wait for the other to complete their review first, so they could adjust their own feedback accordingly. For example, if the guests were willing to overlook minor issues with the listing, the host might be willing to forgive some of their minor transgressions. But if the guest review consisted of nitpicking about every problem with the listing (“too few coffee mugs— what is wrong with this place?”), the host would be inclined to view guest conduct through an equally harsh lens (“they did not separate the recycling— irresponsible people”). That creates an incentive to provide mostly anodyne, meaningless feedback and avoid confrontation at all costs. After all, the side that writes a negative review first is at a distinct disadvantage: their counterparty can write an even harsher response, not only answering the original criticism but also piling on far more serious accusations against its author. It also means that reviews may take longer to arrive. When neither side wants to go first, the result is a game of chicken between guest and host, played against the review deadline.
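To make that incentive concrete, here is a toy sketch of the sequential-reveal game. The payoff numbers and function names are illustrative inventions, not anything from Airbnb’s actual systems; the point is only that whoever reviews first commits to a position the other side can retaliate against.

```python
# A toy model of the pre-2014 sequential-reveal review game.
# All payoff numbers and names are made up for illustration.

HONEST, ANODYNE = "honest", "anodyne"

def second_mover_reply(first_review: str) -> str:
    """The second mover reads the visible first review and best-responds:
    bland praise is reciprocated, criticism invites retaliation."""
    return ANODYNE if first_review == ANODYNE else HONEST

def first_mover_payoff(own: str, reply: str) -> int:
    """Reputation payoff for whoever reviewed first, given the reply."""
    if reply == ANODYNE:
        return 3   # nothing bad on the record about you
    # criticized in return; at least an honest review got your side out
    return 0 if own == HONEST else -2

for choice in (HONEST, ANODYNE):
    reply = second_mover_reply(choice)
    print(f"go first with {choice!r}: reply is {reply!r}, "
          f"payoff = {first_mover_payoff(choice, reply)}")

# An honest first review earns retaliation (payoff 0), while an anodyne
# one earns bland praise back (payoff 3). Neither side wants to write
# the first honest review, so early reviews skew anodyne and both sides
# stall against the deadline.
```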
In the new review model, feedback is hidden until both sides complete their reviews. After that point, it is revealed simultaneously. That means both sides are required to provide feedback independently, without visibility into what their counterparty wrote. In theory this elicits more honest reviews— there is no incentive to suppress negative feedback out of a concern that the other side will modify their review in response. (There is still a 30-day deadline to make sure feedback is provided in a timely manner; otherwise either side could hold the reviews hostage indefinitely.) The situation is similar to the prisoner’s dilemma from game theory: imagine both guest and host having grievances about a particular stay. The optimal outcome from a reputation perspective is one where both sides suppress the negative feedback (“cooperate”), leaving positive reviews that look great for everyone— and for Airbnb. But if one side defects and leaves a negative review airing their grievance, the other side looks even worse. Imagine a scenario where the guests say everything was great about the listing and host, while the host claims the guests were terrible people and demands payment from Airbnb for the damage. Even if these charges were fabricated, the guests have lost much of their credibility to counter the false accusations by going on the record with a glowing review of the host. So the stable strategy is to “defect”: include negative feedback in the review, expecting that the counterparty will likewise include their own version of the same grievance.
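Mechanically, the double-blind scheme is simple to describe. Here is a minimal sketch of the escrow logic; the class and method names are my own invention, not Airbnb’s implementation. Each review is held back until both parties have submitted, or the 30-day window closes, and only then revealed.

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=30)

class DoubleBlindReviewPair:
    """Holds the guest and host reviews for one stay in escrow:
    neither becomes visible until both are in or the window closes."""

    def __init__(self, checkout: datetime):
        self.deadline = checkout + REVIEW_WINDOW
        self.reviews = {"guest": None, "host": None}

    def submit(self, role: str, text: str, now: datetime) -> None:
        if now > self.deadline:
            raise ValueError("review window has closed")
        if self.reviews[role] is not None:
            raise ValueError("review already submitted")  # no revising later
        self.reviews[role] = text

    def visible_reviews(self, now: datetime) -> dict:
        """Simultaneous reveal: show reviews only once both sides have
        written, or after the deadline (so a holdout cannot suppress
        the counterparty's feedback forever)."""
        both_in = all(text is not None for text in self.reviews.values())
        if both_in or now > self.deadline:
            return {role: text for role, text in self.reviews.items()
                    if text is not None}
        return {}  # still hidden; nothing to condition a reply on

# Example: the guest's review stays hidden until the host commits too.
pair = DoubleBlindReviewPair(checkout=datetime(2019, 11, 1))
pair.submit("guest", "Not the unit in the photos.", datetime(2019, 11, 2))
print(pair.visible_reviews(datetime(2019, 11, 3)))  # {} -- still in escrow
pair.submit("host", "Guests were fine.", datetime(2019, 11, 4))
print(pair.visible_reviews(datetime(2019, 11, 5)))  # both revealed at once
```

The mechanism removes the ability to condition one review on the other; whether participants actually play the resulting game the way theory predicts is another matter.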
But game-theoretic outcomes are only observed in the real world when participants follow the optimal strategies expected of rational agents. Decades of behavioral-economics research suggest that the actual choices made by humans can deviate significantly from that ideal. The Vice article quotes guests who were reluctant to leave negative reviews about the fraudulent hosts even after their decidedly unhappy experiences. This is not surprising either: there are other considerations that go into providing feedback beyond fear of retaliation. For example, there are social norms against harshly criticizing other people; recall that all reviews are visible on Airbnb. Other users can look up a prospective guest and observe that he or she has been giving 1-star reviews to all of their hosts. In the absence of such constraints, the game-theoretic conclusion would be taken to an extreme where both sides write the most negative review possible, constrained only by another social norm against making false statements.
Either way, the incentive structure for reviews clearly needs some tweaks to elicit accurate feedback.
CP
Update: Airbnb has announced that the company will be manually verifying all 7 million of its listings.