Goldman Sachs theft and value of IP

The arrest of a disgruntled programmer trying to walk away with code for automated trading at Goldman Sachs raises questions about the value of intellectual property and the challenges in protecting it.

First, Goldman Sachs got very lucky in this case, because the attempted theft was a case of amateur hour gone awry. The programmer may have been motivated and even knowledgeable about quantitative modelling, but he was clearly no security expert. The choice of exfiltration tactic, uploading source code to a server in Germany, could have been easily detected by monitoring at the network perimeter or even on internal machines. No doubt vendors specializing in the latest brand of snake oil, data-leak prevention or DLP, will capitalize on this opportunity for free advertising. But DLP is a case of we-catch-the-incompetent-ones. It is not possible to look at a stream of bits leaving the company network and decide whether they correspond to intellectual property or harmless personal browsing. Techniques such as steganography make it possible to hide messages inside other, innocuous-seeming messages that provide cover, as the sketch below illustrates.
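
As a toy illustration of why perimeter inspection is hopeless, here is a minimal least-significant-bit steganography sketch in Python (the function names and the one-bit-per-byte scheme are invented for this example): the filter sees only the innocuous carrier bytes.

```python
def embed_lsb(cover: bytes, secret_bits: list) -> bytes:
    """Hide one bit of the secret in the least-significant bit of each
    cover byte; the carrier still looks like an ordinary file."""
    assert len(secret_bits) <= len(cover)
    out = bytearray(cover)
    for i, bit in enumerate(secret_bits):
        out[i] = (out[i] & 0xFE) | (bit & 1)
    return bytes(out)

def extract_lsb(carrier: bytes, count: int) -> list:
    """Recover the first `count` hidden bits from the carrier."""
    return [b & 1 for b in carrier[:count]]
```

Real steganographic tools are far more subtle, but even this toy defeats any filter that merely pattern-matches outbound content.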

The second point is more disturbing: what was the corrupt insider planning to do with the source code? How would he capitalize on the IP theft? Set up his own trading system? Or sell the code to another firm?

The first option seems unlikely. The latest trend in automated trading systems is high-frequency trading, where the decision time between discovering market prices and placing a trade order is on the order of milliseconds. In fact the servers are often co-located near the exchanges themselves in order to reduce latency from order placement to execution. While each trade earns a small amount of revenue, the ability to repeat this thousands of times for each market inefficiency allows quant hedge-funds to generate steady revenues. What all this means for potential disgruntled employees: it would be almost impossible for one individual working out of a basement, or a bunch of guys sitting around Bloomberg terminals, to capitalize on knowledge of the models. Even if they could predict the exact positions the model would take, the chances of front-running it are slim to none. Even given the same speed, without massive capital to spread across thousands of trades, the operation simply would not scale enough to present a threat.

Since the speeds here are too fast for human reaction times, the next option is selling the software to another company with an existing low-latency trading system in place. This is where a different problem emerges: no respectable company would touch stolen IP, especially not one with deep pockets and an already viable line of business. The potential liability– both the lost revenue from likely fines and the direct personal culpability of senior executives– would all but guarantee that no serious player takes the risk. (Granted, the case of Bernie “Made-off” Madoff provides evidence that highly dishonest operations exist in this space.)

The most likely option for monetizing such stolen IP, then, is a combination of individual risk and plausible deniability for a major competitor. The aspiring crook pretends that he or she came up with the trading strategy alone (or perhaps the inverse strategy: since front-running is going to be a challenge, they can instead attempt to take the exact opposite positions). The new employer is pleasantly surprised that the strategy generates handsome returns and appropriately rewards the brilliant quant, while the HR department pats itself on the back for a great hiring decision. This is a case where the new employer may not be motivated to ask questions about the unexpected success.

One final point: even in the absence of any reasonable way to monetize the stolen software, Goldman Sachs would be wise to give up on that particular model. The mere possibility that others have studied the model and derived their own conclusions from it is enough to cast doubt on its future effectiveness.

cemp

Electrons are electrons: price discrimination and phone accessories

An observation from a recent involuntary 8-hour layover at San Francisco airport, compliments of incompetent United Airlines, which stranded half the passengers on a flight from Sydney after the plane was delayed.

This blogger had an HTC G1 out of juice and no charger. A quick stop at the local gadget shop was necessary to find a way to power the device again. iGo units are ubiquitous at airports and, with a flexible arrangement of power unit and swappable tips, promise to power just about any device. The tips are sold separately, and this is where a bizarre pricing scheme enters the picture: the tips for the Motorola Razr were priced $2 less than the tips for the T-Mobile/Google G1, yet they have the exact same form factor: mini-USB. Even if the G1 draws more current, that would be handled by the iGo power adapter, which already has enough smarts to handle varying demand from an array of different models. A USB cable is a USB cable.

Presumably this was a case of price discrimination: since the G1 is a more expensive smart-phone, owners are assumed willing to pay more for accessories as well, even when those accessories are virtually identical to ones for more basic phones. That may work in economic terms but, much to the manufacturer’s dismay, electrons do not care whether they are being delivered over a “premium” cable or a basic one. Mobile phone manufacturers are notorious for trying to create lock-in effects, for example by restricting which chargers can power a particular phone, in an attempt to create artificial differentiation between otherwise identical units. But paying more for the same copper connections does not make the current magically more capable of delivering electricity. (This is the same problem that vendors of expensive, pointless HDMI cables face: with an error-corrected digital signal, cable quality is a hard axis to compete on.)

CP

Follow-up on clueless CAs

My friend and former colleague Ryan provided some helpful comments on the previous post regarding the MD5-collision attack against incompetent CAs. (Ryan has also written an informative post about the attack on his blog.) Based on his pointers, here are some corrections and observations:

  • The sparse appearance of the Vista trusted root store (compared to the 100+ roots in Windows XP) is largely an illusion. Both operating systems can update the list of supported roots on demand from Windows Update. In XP most of the roots are pre-installed, while Vista depends to a greater extent on this dynamic fetch process, which can happen in real time while validating a new certificate.
  • The fact that a certificate authority does not show up in the root store does not mean that it is not trusted. There is always the possibility that consulting Windows Update while attempting to build a certificate chain will lead to one or more new roots getting installed.
  • A nasty surprise follows as a corollary: removing a root from the trusted-roots node has no effect. It has to be explicitly placed in the untrusted roots store, or the silent update from WU has to be disabled– the latter not being an advisable solution (see the certutil sketch after this list). Here is an extensive article about the problem of dynamic installation of root certificates.
  • There does not appear to be an official way to download the list of all trusted roots valid at a given point in time, although a knowledge base article from January ’08 documents the organizations that are members of the root certificate program. (Each company may have multiple roots and may introduce new ones over time via distribution from WU; there is no 1:1 correspondence. There is also no documentation of the fingerprints or other unique identifiers for the outstanding roots.)
  • Microsoft requires the WebTrust for CAs certification standard for all CAs in the root program. WebTrust also has a series of requirements for extended validation certificates, which include the use of a stronger hash function such as SHA1 for issuance. (Not that it matters: websites using EV certificates are still vulnerable as long as the code assigns them the same trust as plain-vanilla certificates. The green address bar and other window-dressing are intended for users’ eyes only; under the hood, the code responsible for deciding to disclose data does not care about the distinction.)
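
For the corollary above: explicitly distrusting a root can be done with the stock certutil tool. A minimal sketch, assuming an elevated prompt and a certificate file for the offending root (the file name is a placeholder; “Disallowed” is the store backing the Untrusted Certificates node):

```
certutil -addstore Disallowed suspect_root.cer
certutil -store Disallowed
```

The second command lists the store contents, confirming the root landed where intended.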

cemp

MD5, clueless certificate authorities and PKI trust crisis

One of the more interesting attacks of 2008 came on the second-to-last day of the year, when a group of researchers announced they had successfully forged a bogus CA certificate. Depending on perspective, it is either a novel demonstration or the obvious next step in MD5-collision research. Since the original announcement of MD5 collisions in 2004, there has been steady progress on crafting meaningful messages with identical hashes:

  • Random collisions between messages with no meaningful structure.
  • Collisions with a shared prefix, which is enough to produce two different X509 certificates featuring different keys and an identical hash.
  • This attack was later extended to allow chosen, different prefixes on each message, as described in a Eurocrypt paper.

These birthday collisions are dangerous to the extent that an attacker can get the target to sign a message of their choice– the hope being that while the victim thinks they are signing innocuous message #1, they are also signing message #2, which has the same hash but worse implications. In general few applications afford that opportunity, because acting as a signing oracle is the cryptographic equivalent of a bureaucrat rubber-stamping everything presented on his or her desk.
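
A quick way to see why colliding on a prefix is enough to forge a whole certificate: MD5 is a Merkle–Damgård construction, and the published attacks produce equal-length messages that collide in the internal chaining state, so appending any common suffix preserves the collision. A sketch of that property in Python, assuming m1 and m2 are such a collision pair (actual colliding blocks are not reproduced here):

```python
import hashlib

def collision_extends(m1: bytes, m2: bytes, suffix: bytes) -> bool:
    """For collision pairs produced by the published attacks, the
    internal MD5 state collides; any shared suffix (for example the
    remaining certificate fields) keeps the digests equal too."""
    assert m1 != m2
    assert hashlib.md5(m1).digest() == hashlib.md5(m2).digest()
    return hashlib.md5(m1 + suffix).digest() == hashlib.md5(m2 + suffix).digest()
```

The CA obligingly signs the rest of the certificate as exactly such a common suffix.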

That brings us to the other problem: using MD5 was not the only screw-up by the handful of CAs implicated in the vulnerability. The certificate contents being signed were entirely predictable or controlled by the adversary. In particular the serial number, which could easily have been made unique, deterministic and completely unpredictable (by using the encryption of an incrementing counter), was implemented as a simple counter, bumped up by one each time a new certificate was issued. Since the target CAs received particularly low levels of customer traffic, the rate of counter increase could be estimated– although it took the researchers several attempts to get it exactly right.
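
Here is a minimal sketch of the parenthetical fix above, assuming the pyca/cryptography package (the function name is invented for this example). Because AES under a fixed key is a permutation, distinct counter values can never produce colliding serial numbers, yet without the key the outputs are unpredictable:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def unpredictable_serial(key: bytes, counter: int) -> int:
    """Map an incrementing counter to an unpredictable serial number:
    uniqueness comes from the counter, unpredictability from AES."""
    block = counter.to_bytes(16, "big")          # 128-bit counter block
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return int.from_bytes(encryptor.update(block) + encryptor.finalize(), "big")
```

With serials generated this way, the chosen-prefix attack loses its foothold: the attacker can no longer predict the bytes the CA will sign.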

In well-designed systems it rarely takes just a single failure to cause a catastrophic breakdown in security. Aside from clueless CAs using MD5 and signing whatever came their way, credit for the epic failure in PKI also goes to SSL client implementations, which place equal trust in all certificate authorities. A certificate is valid as long as it chains up to any trusted certificate authority– and there are about 100 of them in the Windows XP root store alone. Verisign USA, which issues the majority of SSL certificates including EV certs, receives the exact same treatment as RapidSSL and FreeSSL– the latter two being examples of mismanaged CAs implicated in the attack. (Strangely enough, Verisign Japan and one CA affiliated with Thawte, another large issuer, were likewise flagged as issuing MD5 certificates.) Windows will cache previously validated certificates to save time on future path-validation efforts. But there is no logic to notice discrepancies and ask why Bank of America, which used to have a valid, unexpired GTE certificate, has suddenly switched over to using an obscure CA based in Estonia.

From a business perspective, it is not fair to blame MSFT for the proliferation of inept CAs. The company is backed into a corner: who are they to reject RapidSSL or any of the other dozens of dubious garage operations that will issue any certificate to anyone? Not when the CA business model is getting paid for those certificates. If standards were raised to exclude some, the excluded companies would cry tortious interference. (Remarkably, the Vista root store appears to have been cleaned up a bit.) There are minimum standards around certificate practices, but historically the whole concept of certifying the certifiers has been window dressing: Verisign issued two bogus Microsoft certificates in 2001. It was partly in recognition of this fact that “extended validation” certificates were introduced, creating a lucrative business opportunity to issue far more expensive certificates with supposedly real due diligence this time. Not all CAs were invited to that party.

As an aside, EV would have made no difference here, contrary to MSRC’s wishful thinking about mitigating factors. It is purely a visual indicator: cookies and cross-domain access to EV-protected content are still allowed from a vanilla SSL-protected page in the same origin, allowing man-in-the-middle attacks to succeed. For example, the attacker could allow the normal login page to be displayed to collect credentials, complete with the green address bar to inspire warm, fuzzy feelings in users. (Actually, usability research says that users do not pay any attention to the EV indicator, but let’s suspend disbelief for a moment.) Only when the user enters a password and submits their credentials to the website does the attacker intercept the connection, substituting a bogus, non-EV certificate which still passes the hostname check with flying colors.

cemp

Identity as externality: TrustBearer, CAC, eID

TrustBearer has become the first public demonstration of an idea this blogger first described in a ThinkWeek paper in 2006: identity management systems create positive externalities. Once built for one purpose, they are often easily extended, adopted or co-opted for completely different objectives. This pattern predates the Web, PKI and even the development of modern computing systems. The classic example is the social security number. Originally introduced by FDR’s New Deal-era Social Security Administration for the purpose of administering benefits, it has become the de facto identifier for everything from credit rating agencies to some badly designed online banking websites; Fidelity originally used the SSN as the “username” but later changed the system to allow choosing nicknames. Driver’s licenses were introduced to control who can drive vehicles on public roads. When laws introduced a minimum drinking age and imposed penalties for serving minors, bars found them the natural choice for deciding who gets to order drinks. (A bartender in Seattle once declined to serve this blogger due to an expired driver’s license.)

Not all of these extensions are necessarily good ideas. In particular the re-purposing of the social security number from a simple identifier into a credential– something that proves identity, never intended in the original design– created the current identity theft mess. In another example, RFID tags are a primitive identity management system designed for tracking inventory; the tag identifies the object it is attached to. But when the tags are not deactivated after the goods are sold to consumers, they can be repurposed for surveillance. Each tag emits a constant identifier that can be scanned by anyone with the appropriate transmitter and receiver setup, allowing tracking of individuals in physical space.

Occasionally an unofficial extension to an identity system provides unexpected benefits. Typically there is a very large upfront investment in deploying a system, driven by a well-defined objective. But once the system is built, adding one more person who can use it, or one more website which relies on it for authentication, has a small marginal cost. Take for example the Common Access Card or CAC, soon to be replaced by the PIV. These are both PKI systems managed by the Department of Defense for the purpose of controlling access to systems with national security implications. But once the PKI deployment is operational and individuals have been issued their cards and smart-card readers, the credentials can be used for purposes completely unrelated to the defense sector. Case in point: TrustBearer’s OpenID service accepts CAC/PIV cards for authentication to any OpenID-enabled relying site. DoD certainly did not design the system for employees to check their personal email accounts or write blog comments in their spare time. But given that the smart-cards were already out there in the hands of users, it was a no-brainer for TrustBearer to accept these credentials for strong authentication. Any other website could have done the same: called “SSL client authentication,” the underlying functionality has been supported by web browsers and web servers in some fashion since the late 1990s. The user interface may be clunky because it is rarely seen outside the enterprise context, but all it takes is tweaking some settings in IIS or Apache, along the lines of the sketch below. The Department of Defense created a positive externality for all websites.
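
For a sense of how little tweaking is involved, here is roughly what it might look like in Apache with mod_ssl. This is a sketch with placeholder file paths; a real deployment would also need the DoD intermediate certificates and revocation checking:

```
# Inside the HTTPS virtual host: demand a client certificate
SSLVerifyClient require
SSLVerifyDepth  3
# Trust anchors: the DoD root certificates (placeholder path)
SSLCACertificateFile /etc/ssl/certs/dod-roots.pem
```

IIS exposes the equivalent switches through its client-certificate mapping configuration.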

Design matters of course: some technologies are far more amenable to being re-purposed this way. For example, Kerberos is inherently a closed system: adding another relying party requires coordinating with the people in charge. Public-key infrastructure is open by design: once a digital certificate is issued, people can use it to authenticate anywhere. There are still gotchas: revocation checking imposes costs on the identity provider (adding another relying party is not a free lunch when it is hammering the system with revocation checks), or it may not work at all for an entity “outside” the official scope. Some newer protocols such as OCSP stapling address that by making freshness proofs portable. More important is the question of acceptable use policy. Just because the cryptography works out does not mean that the official owner of the identity system will approve of the creative re-purposing.

That brings us to the European eID deployments. These are national ID systems, with the cards containing PKI credentials. Here is one case where a PKI-based system funded by taxpayer money is built with the express intent that anyone can use it for authentication to their service. (This is what governments do after all– they generate externalities, much to the chagrin of libertarians.) Not surprisingly, eID cards are also accepted by TrustBearer– specifically the Belgian eID. This is an even greater externality because there are bound to be many more of them in existence even today, and the numbers will only improve over time as other EU governments make progress on their deployments. On the other hand, the precedent for using eID online is scarce, and chances are most users lack the required card readers and drivers, while CAC/PIV users already use their cards regularly in a professional context.

cemp

Six-and-a-half degrees of separation

An interesting paper out of MSFT Research lends new support to the six-degrees-of-separation meme.

Quick recap: the idea that on average every individual can be linked to any other by a chain of no more than five acquaintances in between dates to the work of Stanley Milgram at Harvard in the 1960s. Milgram used an old-fashioned communication network: the postal system, aka “snail mail.” Subjects in the experiment were asked to deliver letters to individuals they knew only by name and geographic location. Surfer Bob in San Diego might be asked to deliver a letter to investment banker Alice in New York by forwarding the letter to somebody he suspects is closer to Alice. On average, letters were passed on six times along the way to their final destination– hence the six-degrees concept, which later became an award-winning play on Broadway, jumped to the big screen with Will Smith and achieved mainstream status as an online game featuring Kevin Bacon. The early part of this decade witnessed a surge of interest in studying so-called “small world graphs,” which resulted in the publication of several books popularizing the concept.

Six-degrees took a major hit in credibility when another researcher discovered in 2006 that 95% of the letters in the Milgram experiment had never reached their intended destination. One possible conclusion was that the social network was in fact not the connected, small-diameter graph everyone imagined it to be. If 95% of the nodes had no routes between them, that would paint a picture of a fragmented, disjoint society reminiscent of high school, broken up into cliques and tribes. Or it could simply have been that the subjects did not follow through, because forwarding letters takes a non-trivial amount of time.

Fast forward three decades, and instant messaging has drastically lowered the overhead of communication. So the MSR group– which includes the designer of the infamous Clippy– looked at the social network formed by users of MSN/Live Messenger. Specifically they studied the social graph implied by 30 billion messages sent among 240 million users in June 2006. If two people exchange messages, they are assumed to know each other. (There are situations where that is not true: for example, there are spammers sending IMs to random people. Also users may have more than one identity, so individuals do not map uniquely to nodes– this can distort paths because Alice may appear as a contact on Bob’s personal IM account but not on his work account.) This is a much larger data set than Milgram’s original sample of <200, and crunching it would not have been possible without massive computing power. Finding the shortest path between two people in a graph is expensive, requiring a traversal of the connections (see the sketch below). There are optimizations for doing this over all pairs of individuals, but it remains a very expensive proposition on such a large dataset.
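
For a rough sense of the computation, here is a minimal sketch in Python: one breadth-first search yields the distances from a single user to everyone reachable, and repeating it from every node (or from a large sample, as approximation schemes do) yields the distance distribution:

```python
from collections import deque

def distances_from(graph: dict, source) -> dict:
    """Breadth-first search over an unweighted social graph, where
    graph maps each user to the set of users they exchanged IMs with."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Toy example: Alice and Carol are separated by two links through Bob
graph = {"alice": {"bob"}, "bob": {"alice", "carol"}, "carol": {"bob"}}
d = distances_from(graph, "alice")
print(sum(d.values()) / (len(d) - 1))  # average distance to reachable users
```

Each search is linear in the size of the graph; repeating it 240 million times is what demands the heavy hardware.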

The result came close to vindicating the original idea: the average distance between two IM users was right around 6.5, and 78% of users were separated by fewer than seven links. In fact a slightly higher number is to be expected, because the instant-messaging graph contains only a subset of all real-world connections. First, not everyone uses the Internet; the digital divide remains very much alive in the US. This has the effect of removing entire nodes from the graph– nodes that could have provided shorter connections between people. Similarly, not all Internet users have instant messaging. Even two heavy Internet addicts may use different IM networks (one on AIM, the other on GMail, for example) because interoperability is still not the norm in this space. The combined effect of these biases is to remove edges from the graph and “inflate” the distances.

cemp

New York Times badly confused on identity management

Goodbye Passwords is that rare misstep from the otherwise consistently solid Digital Domain section in the Sunday NYT: confused, misinformed and way off base. Among the several muddled arguments, four stand out:

1. Equating OpenID to passwords.

“OpenID offers, at best, a little convenience, and ignores the security vulnerability inherent in the process of typing a password into someone else’s Web site.”

Minor factual error: the password is not being typed into a random website. It is supposed to be provided only to the website where the identity was originally created, not the website where it is being used. But the general difficulty of determining whether one is indeed starting at the authentic site instead of a fraudulent replica– especially when the user has been sent there by the “someone else’s Web site” in question– leads to the standard critique of OpenID as increasing phishing risks.

Major factual error: OpenID is a federation standard, not a new user-authentication approach. It does not mandate passwords or any other scheme for verifying identity. The OpenID 2.0 specification is loud and clear on this point:

“Methods of identifying authorized end users and obtaining approval to return an OpenID Authentication assertion are beyond the scope of this specification.”

That means the identity provider can choose to use good old-fashioned passwords, smart-cards, biometrics or experimental approaches such as reading tea-leaves to authenticate the user; OpenID is silent on this. In fact one of the more hyped extensions to the protocol, added at the urging of MSFT, which has been desperately trying to promote CardSpace, is a way of signaling to websites that the user authenticated with credentials resistant to phishing– Infocards in the original vision that carved out this niche, but also more generally strong authentication mechanisms such as PKI-capable smart-cards.
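
That extension is the Provider Authentication Policy Extension (PAPE). As a sketch, a relying party asks for a phishing-resistant method by adding parameters along these lines to the authentication request (the URIs are the ones published in the PAPE specification):

```
openid.ns.pape=http://specs.openid.net/extensions/pape/1.0
openid.pape.preferred_auth_policies=http://schemas.openid.net/pape/policies/2007/06/phishing-resistant
```

The identity provider then reports back which policies it actually satisfied, and the relying party can grant or limit access accordingly.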

2. Narrow definition of single sign-on:

OpenID promotes “Single Sign-On”: with it, logging on to one OpenID Web site with one password will grant entrance during that session to all Web sites that accept OpenID credentials.

In the most general sense, single sign-on refers to one identity being valid for accessing multiple systems. This is in contrast to the current state of affairs on the web: most websites have their own notions of user identities, requiring users to create a new account. Each account is valid at exactly one website and not recognized anywhere else. Single sign-on (“federation” using the fashionable term) is about merging these disconnected islands of identity such that the scope of an identity can extend beyond that one site.

A quick peek at the Wikipedia entry would have hinted that SSO is not tied to passwords. So it comes as a surprise that a Microsoft architect is quoted criticizing SSO. CardSpace is an instance of single sign-on: the vision calls for one identity, held on the user’s machine, to be usable for logging into any number of websites. Inside the enterprise, Active Directory is single sign-on because it allows the same credentials to be used for accessing everything from logging into a workstation with the three-finger salute to accessing email or HR systems.

3. Misconception that “information card” is a generic term-of-art in identity management. Information card, or Infocard to use the original name for the technology before it was rebranded as CardSpace, is a particular proposal that defines specific formats and protocols for identity management. Writing about “the information cards” makes about as much sense as writing about “the Facebooks” or “the Googles.” Each is a specific incarnation of a general concept: a social networking site, a search engine, an identity management protocol.

4. No hint of the history of strong authentication or its alternatives. A reader may walk away from this article with the impression that no realistic alternative to passwords existed until CardSpace magically burst onto the scene. Basic fact-checking would have unearthed some not entirely obscure facts: the concept of digital certificates dates back to the 1970s, leveraging the same brew of “hard to break cryptography” whose virtues are extolled in the article. Since the late 1990s, digital certificates have been standardized around X509, a stable and widely implemented format. It would be a small jump from there to realize that the SSL protocol universally used for securing communications online has provisions for users to verify their identity with digital certificates, and that many large organizations, including the United States Department of Defense, have been depending on this capability for years.

This is not to say that there are no good points in the article. OpenID is a major distraction and duplication of effort precisely because it is a mediocre reinvention of the wheel, ignoring all the investments made towards deploying PKI on the web compliments of SSL, and muddying the waters one more time just when there was a fighting chance that the industry might converge on a standard (SAML, far from perfect as it may be) as the underlying format for identity assertions. But it is a non sequitur to argue that OpenID is doomed because of its dependence on passwords and inherent problems with single sign-on.

cemp

BlackHat: making the news while reporting it

DefCon attendees have always known that using the wireless network at the conference means living on the dangerous side– even on the rare occasions when a few packets manage to route their way across the congested airwaves, with thousands competing for the scarce bandwidth. (This blogger has been depending on his Novatel CDMA modem, compliments of Sprint, to continue writing.) If there is a real-life incarnation of the proverbial “untrusted network,” this is it, and the Wall of Sheep has been the favored tradition for publicly embarrassing those using weak protocols that transmit credentials in the clear.

This year the tradition expanded to BlackHat, putting attendees– a much different crowd than DefCon, it goes without saying– on notice that their name could be next on the hall of shame.

Journalists had a better deal: they got their own wired, private network in the press room, free from the shenanigans of creative researchers.

It did not work out. As reported by CNet, French journalists decided to step up to the plate and impress their colleagues with their “l33t credentials.” Exact details are unclear, but it appears that they managed to take control of the router and capture traffic from other journalists. For anyone not using a VPN, that included the stories they were filing. So much for good sportsmanship– why bother attending the conference sessions or interviewing speakers when you can “rephrase” your colleagues’ dispatches instead? The French crew were so proud of their achievement that they wanted the spoils displayed on the Wall of Sheep. Conference organizers were not impressed by what they viewed as illegal wiretapping and interception. Neither were fellow members of the press when they were briefed on the incident. The proto-hackers were booted from the conference, which was not enough to appease the irate journalists: the reaction reportedly included at least one person from ZDNet going through the roof.

cemp

From 0wning DNS to 0wning SSL (2/2)

But SSL does have an Achilles heel: its trust model is anchored on the digital certificate used by the web server. The only proof that the website you are communicating with is Bank of America (as opposed to an impostor in Estonia) is the fact that it holds a digital certificate, issued by Verisign, claiming that this website indeed is http://www.bankofamerica.com.

The fragility of this model has been pointed out before. Verisign is not the only recognized certification authority; out of the box, Windows ships with close to 100 CAs, all of them equivalent for trust purposes. Any one of them incorrectly issuing the Bank of America certificate to somebody else is enough to ruin any guarantees provided by the cryptography– it does no good to secure your traffic when the person at the other end of that encrypted channel is the bad guy. (Perhaps the biggest CA goof was Verisign issuing Microsoft code-signing certificates to impostors in 2001. The implications were much worse than for SSL certificates, but revocation has addressed the fall-out for the most part.) While MITM attacks against SSL due to incompetent CA practices have always been possible, the challenge of getting into the middle of the connection so far made this a low-likelihood attack vector. Owning DNS changes that.

More importantly– and this is Kaminsky’s main point regarding SSL– the certification process itself uses DNS. According to this version of the story, when the proud new owner of the domain http://www.acme.net wants a digital certificate, the CA consults DNS records to verify ownership. They might even ask the applicant to insert some DNS records or add a particular page to the website as additional proof (a sketch of such a check appears after the list below). All of these checks are trivially subverted if DNS is corrupt, because all of them will be routed to servers controlled by the attacker. This means that while the existing Bank of America certificate is safe and sound, the enterprising criminal will:

  1. Choose a moderately incompetent CA
  2. Subvert DNS to confuse name resolution for that CA
  3. Pass the domain ownership checks made by the CA
  4. Obtain a new valid certificate in the name of Bank of America
  5. Subvert DNS resolution for an ISP
  6. MITM all of the users at that ISP by using the perfectly valid certificate from step #4
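
To make the circularity concrete, here is a sketch in Python of what such a DNS-dependent ownership check might look like (the challenge URL scheme is invented for this example): every step, from resolving the hostname to routing the HTTP request, quietly trusts DNS.

```python
import urllib.request

def naive_domain_validation(domain: str, token: str) -> bool:
    """Fetch a proof page the applicant was asked to publish on the
    site. Name resolution and routing both depend on DNS, so an
    attacker who owns DNS passes this check without owning the site."""
    url = "http://%s/ca-challenge/%s.txt" % (domain, token)
    with urllib.request.urlopen(url, timeout=10) as response:
        return token in response.read().decode("utf-8", "replace")
```

An email-based check (“click the link we sent to the domain’s admin address”) fares no better, since the MX lookup rides on the same poisoned DNS.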

That, at least, is the picture painted in the presentation. The critical details are the certification steps used– not just by Verisign, Geotrust and the other major CAs, but by every single one of the dozens of certification authorities recognized by IE and Firefox. Extended validation does not help, for two reasons. On the usability front, users pay no attention to all the fancy eye-candy browsers waste on displaying EV status, as demonstrated nicely by The Emperor’s New Security Indicators. On the implementation level, the browser grants exactly the same privileges to regular certificates; embedded content, for example, can still be subverted using a vanilla cert while keeping the main page over EV.

If this attack does indeed work– and it is impossible to determine without consulting the certification practices of each CA– it exposes a circularity in the security model. SSL/TLS is designed to survive exactly the type of mayhem created by DNS hijacking: it does not matter whether traffic is routed to the right website or the wrong one. When the protocol is implemented correctly and the certificate checks out, the user is supposed to be guaranteed that they are dealing with the legitimate website. (That is not much of a guarantee: if the certificate has errors the protocol will detect it, but until recently web browsers responded by displaying a cryptic warning that users simply ignored. Even when the certificate validates correctly, that only proves the identity is what is stated in the URL– which may not be at all the same one in the user’s mental picture, to the delight of phishing syndicates everywhere.) Weak certification practices destroy even this glimmer of hope by placing critical faith in DNS to bootstrap a protocol that was purportedly designed to survive a complete breakdown of all naming and routing infrastructure.

cemp