The case for keyed password hashing

An idea whose time has come?

Inefficient by design

Computer science is all about efficiency: doing more with less. Getting to the same answer faster in fewer CPU cycles, using less memory or economizing on some other scarce resource such as power consumption. That philosophy makes certain designs stand out for doing exactly the opposite: protocols crafted to deliberately waste CPU cycles, take up vast amounts of memory or delay availability of some answer. Bitcoin mining gets criticized— often unfairly— as the poster-child for an arms race designed to throw more and more computing power at a “useless” problem. Yet there is a much older, more mundane and rarely questioned waste of CPU cycles: password hashing.

In the beginning: /etc/passwd

Password hashing has ancient origins when judged by the compressed timeline of computing history. Original UNIX systems stored all user passwords in a single world-readable file under /etc/passwd. That was in the days of time-sharing systems, when computers were bulky and expensive. Individuals did not have the luxury of keeping one under their desk, much less in their pocket. Instead they were given accounts on a shared machine owned and managed by the university/corporation/branch of government. When Alice and Bob are on the same machine, the operating system is responsible for making sure they stay in their lane: Alice can’t access Bob’s files or vice versa. Among other things, that means making sure Alice can not find out Bob’s password.

Putting everyone’s password in a single world-readable file obviously falls short of that goal. Instead UNIX took a different approach: store a one-way cryptographic hash of the password. Alice and Bob could still observe each other’s password hashes but the hashes alone were not useful. Logging into the system required the original, cleartext version of the password. With a cryptographic hash the only way to “invert” the function is by guessing: start with a list of likely passwords, hash each one and compare against the one stored in the passwd file. The hash function was also made deliberately slow. While in most cryptographic applications one prefers hash functions to execute quickly, inefficiency is a virtue here. It slows down attackers more than it slows down the defenders. The hash function is executed only a handful of times on the “honest” path, when a legitimate user is trying to login. By contrast an attacker trying to guess passwords must repeat the same operation billions of times.
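
To make that asymmetry concrete, here is a toy sketch (in Python, purely illustrative) of the guessing attack that slow hashing is meant to frustrate: hash each candidate and compare against the stolen entry. A fast hash like SHA-256 stands in for the password hash; a real attack would use the exact algorithm, salt and cost parameters of the target system.

import hashlib

def crack(stolen_hash: bytes, candidates: list[str]) -> str | None:
    # Every candidate costs one hash computation; a deliberately slow
    # hash multiplies that cost for the attacker on every single guess.
    for guess in candidates:
        if hashlib.sha256(guess.encode()).digest() == stolen_hash:
            return guess
    return None

target = hashlib.sha256(b"sunshine123").digest()
print(crack(target, ["password", "123456", "sunshine123"]))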

Later UNIX versions changed this game by separating password hashes— sensitive stuff— from other account metadata which must remain world-readable. Hashes were moved to a separate file, appropriately called the “shadow” file, commonly located at /etc/shadow. With this improvement attackers must first exploit a local privilege-escalation vulnerability to read the shadow file before they can unleash their password-cracking rig on target hashes.

Atavistic designs for the 21st century

While the 1970s are a distant memory, vestiges of this design persist in the way modern websites manage customer passwords. There may not be a world-readable password file or even shadow file at risk of being exposed to users but in one crucial way the best-practice has not budged: use one-way hash function to process passwords for storage. There have been significant advances in the construction of the hash functions are designed. For example the 1990s era  PBKDF2 includes an adjustable difficulty— read: inefficiency— level, allowing the hash function to waste more time as CPUs become faster. Its modern successor bcrypt follows the same approach. Scrypt ups the along a new dimension: in addition to being inefficient in time by consuming gratuitous amount of CPU cycles, it also seeks to be inefficient in space by deliberately gobbling up memory to prevent attackers from leveraging fast but memory-constrained systems such as GPUs and ASICs. (Interestingly these same constraints also come up in the design of proof-of-work functions for blockchains where burning up some resource such as CPU cycles is the entire point.)
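
The adjustable difficulty knob is easy to demonstrate with the PBKDF2 implementation in Python’s standard library; the iteration counts below are arbitrary examples. Each tenfold increase makes every guess, legitimate or hostile, tenfold more expensive.

import hashlib, os, time

salt = os.urandom(16)
for iterations in (1_000, 100_000, 1_000_000):
    start = time.perf_counter()
    h = hashlib.pbkdf2_hmac("sha256", b"sunshine123", salt, iterations)
    print(f"{iterations:>9} iterations: {time.perf_counter() - start:.3f}s")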

Aside from committing the cardinal sin of computer science— inefficiency— there is a more troubling issue with this approach: it places the burden of security on users. The security of the password depends not only on the choice of hash function and its parameters, but also on the quality of the password selected. Quality is elusive to define formally, despite no shortage of “password strength meters” on websites egging users on to max out their scale. Informally it is a measure of how unlikely that password is to be included among guesses tried by a hypothetical attacker. That of course depends on the exact order in which an attacker goes about guessing. Given that the number of all possible passwords is astronomically large, no classical computer has a chance of going through each guess even when a simple hash function is used. (Quantum computers provide a speed-up in principle with Grover’s algorithm.) Practical attacks instead leverage the weak point: human factors. Users have difficulty remembering long, random strings. Absent some additional help such as password managers to generate such random choices, they are far more likely to pick simple ones with patterns: dictionary words, keyboard sequences (“asdfgh”), names of relatives, calendar dates. Attacks exploit this fact by ordering the sequence of guesses from simple to increasing complexity. The password “sunshine123” will be tried relatively early. Meanwhile “x3saUU5g0Y0t” may be out of reach: there is not enough compute power available to the attacker to get that far down the list of candidates in the allotted time. The upshot of this constraint: users with weak passwords are at higher risk if an attacker gets hold of password hashes.

Not surprisingly websites are busy making up arbitrary password complexity rules, badgering users with random restrictions: include mixed case, sprinkle in numerical digits, finish off with a touch of special characters but not that one. This may look like an attempt to improve security for customers. Viewed from another perspective, it is an abdication of responsibility: instead of providing the same level of security for everyone, the burden of protecting password hashes from brute-force attack is shifted to customers.

To be clear, there is a minimal level of password quality helpful to resist online attacks. This is when the attacker tries to login through standard avenues, typically by entering the password into a form. Online attacks are very easy to detect and protect against: most services will lock out accounts after a handful of incorrect guesses or require a CAPTCHA to throttle attempts. Because attackers can only get in a handful of tries this way, only the most predictable choices are at risk. (By contrast an attacker armed with a stolen database of password hashes is only limited by the processing power available. Not to mention this attack is silent: because there is no interaction with the website— unlike the online guessing approach— the defenders have no idea that an attack is underway or which accounts need to be locked for protection.) Yet the typical password complexity rules adopted by websites go far beyond what is required to prevent online guessing. Instead they are aimed at the worst-case scenario of an offline attack against leaked password hashes.
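
The lockout logic that neutralizes online guessing amounts to a few lines of bookkeeping. This toy sketch (thresholds and storage purely illustrative) shows why only the first handful of guesses ever get evaluated:

from collections import defaultdict

MAX_ATTEMPTS = 5
failed_attempts = defaultdict(int)  # in practice: persistent, per-account state

def try_login(user: str, password_correct: bool) -> str:
    if failed_attempts[user] >= MAX_ATTEMPTS:
        return "locked"  # guess #6 onward never reaches password validation
    if password_correct:
        failed_attempts[user] = 0
        return "success"
    failed_attempts[user] += 1
    return "denied"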

Side-stepping the arms race: keyed hashing

One way to relieve end-users from the responsibility of securing their own password storage— as long as passwords remain in use, considering FIDO2 is going nowhere fast— is to mix a secret into the hashing process. That breaks an implicit but fundamental assumption in the threat model: we have been assuming attackers have access to the same hash function as the defenders use so they can run it on their own hardware millions of times. That was certainly the case for the original UNIX password hashing function “crypt.” (Side-note: ironically crypt was based on the block cipher DES which uses a secret key to encrypt data. “Crypt” itself does not incorporate a key.) But if the hash function requires access to some secret only known to the defender, then the attacker is out of luck.

For this approach to be effective, the “unknown” must involve a cryptographic secret and not the construction of the function. For example making some tweaks to bcrypt and hoping the attacker remains unaware of that change provides no meaningful benefit. That would be a case of the thoroughly discredited security-through-obscurity approach. As soon as the attacker gets wind of the changes— perhaps by getting access to source code in the same breach that resulted in the leak of password hashes— it would be game over.

Instead the unknown must be a proper cryptographic key that is used by the hash function itself. Luckily this is exactly what a keyed hash does. In the same way that a plain hash function transforms input of arbitrary size into a fixed-size output in an irreversible manner, a keyed hash does the same while incorporating a key.

KH(message, key) → short hash

HMAC is one of the better options but there are others, including the older-generation CMAC and newer alternatives such as Poly1305 and KMAC. All of these functions share the same underlying guarantee: without knowing the key, it is computationally difficult to produce the correct result for a new message. That holds true even when the attacker can “sample” the function on other messages. For example an attacker may have multiple accounts on the same website and observe the keyed hashes for those accounts, whose passwords the attacker selected. Assuming our choice of keyed hash lives up to its advertised properties, that would still provide no advantage in being able to compute keyed hashes on other candidate passwords.
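
As a minimal sketch, here is what keyed password hashing could look like with HMAC-SHA256 from the Python standard library. The key generation shown inline is a placeholder: the entire point, discussed below, is that the key must live somewhere attackers who steal the database can not reach.

import hmac, hashlib, os

server_key = os.urandom(32)  # placeholder: in practice held in an HSM/KMS/TPM

def keyed_hash(password: str, salt: bytes) -> bytes:
    return hmac.new(server_key, salt + password.encode(), hashlib.sha256).digest()

# enrollment: store (salt, hash) in the database
salt = os.urandom(16)
stored = keyed_hash("sunshine123", salt)

# login: recompute and compare in constant time
assert hmac.compare_digest(stored, keyed_hash("sunshine123", salt))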

Modern options for key management

The main challenge for deploying keyed hashing at scale comes down to managing the lifecycle of the key. There are three areas to address:

  1. Confidentiality is paramount. This key must be guarded well. If attackers get hold of it, it stops being a keyed hash and turns into a plain hash. (One way to mitigate that is to apply the keyed hash in series after an old-school, slow unkeyed hash. More on this in a follow-up post.) That means the key can not be stored in, say, the same database where the password hashes are kept: otherwise any attack that could get at the hashes would also divulge the key, negating any benefit.
  2. Availability is equally critical. Just as important as preventing attackers from getting their hands on the key is making sure defenders do not lose their copy. Otherwise users will not be able to login: there is no way to validate whether they supplied the right password without using this key to recompute the hash. That means it can not be some ephemeral secret generated on one server and never backed up elsewhere. Failure at that single point would result in loss of the only existing copy and render all stored hashes useless for future validation.
  3. Key rotation is tricky. It is not possible to simply “rehash” all passwords with a new key whenever the service decides it is time for a change. There is no way to recover the original password out of the hash, even with possession of the current key. Instead there will be an incremental/rolling migration: as each customer logs in and submits their cleartext password, it will be validated against the current version of the key and then rehashed with the next version to replace the database entry. (A sketch of this rolling migration follows after this list.)
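
Here is a minimal sketch of that rolling migration, assuming each database record carries a key-version tag alongside the salt and hash; all names and the inline key table are illustrative.

import hmac, hashlib

KEYS = {1: b"old key material (placeholder)", 2: b"new key material (placeholder)"}
CURRENT_VERSION = 2

def verify_and_migrate(record: dict, password: str) -> bool:
    key = KEYS[record["key_version"]]
    candidate = hmac.new(key, record["salt"] + password.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(candidate, record["hash"]):
        return False
    if record["key_version"] != CURRENT_VERSION:
        # the cleartext password is in hand: rehash under the current key
        record["hash"] = hmac.new(KEYS[CURRENT_VERSION], record["salt"] + password.encode(),
                                  hashlib.sha256).digest()
        record["key_version"] = CURRENT_VERSION
    return True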

Of these, #3 is an inevitable implementation twist, not that different in spirit from trying to migrate from one unkeyed hash to another. But there is plenty of help available in modern cloud environments to solve the first two problems.

Until about a decade ago, using dedicated cryptographic hardware— specifically an HSM or hardware security module— was the standard answer to any key management problem with this stringent combination of confidentiality and availability. HSMs allow “grounding” secrets to physical objects in such a way that the secret can be used but not disclosed. This creates a convenient oracle abstraction: a blackbox that can be asked to perform the keyed-hash computations on any input message of our choosing, but can not be coerced into divulging the secret involved in those computations. Unfortunately those security guarantees come with high operational overhead: purchasing hardware, setting it up in one or more physical data centers and working with the arcane, vendor-specific procedures to synchronize key material across multiple units while retaining some backups in case the datacenter itself goes up in flames. While that choice is still available today and arguably provides the highest level of security/control, there are more flexible options with different tradeoffs along the curve:

  • CloudHSM offerings from AWS and Google have made it easier to lease HSMs without dealing with physical hardware. These are close to the bare-metal interface of the underlying hardware but often throw in some useful functionality such as replication and backup automatically.
  • One level of abstraction up, major cloud platforms have all introduced “Key Management Service” or KMS offerings. These are often backed by an HSM but hide the operational complexity and offer simpler APIs for cryptographic operations compared to the native PKCS#11 interface exposed by the hardware. (A sketch of this route follows after this list.) They also take care of backup and availability, often providing extra guard-rails that can not be removed. For example AWS imposes a mandatory delay for deleting keys. Even an attacker that temporarily gains AWS root permissions can not inflict permanent damage by destroying critical keys.
  • Finally TPMs and virtualized TPMs have become common enough that they can be used, often at a fraction of the cost of HSMs. A TPM can provide the same blackbox interface for computing HMACs with a secret key held in hardware. TPMs do not have the same level of tamper resistance against attackers with physical access but that is often not the primary threat one is concerned with. A more serious limitation is that TPMs lack replication or backup capabilities. That means keys must be generated outside the production environment and imported into each TPM, creating weak points where keys are handled in cleartext. (Although this only has to be done once for the lifecycle of the hardware. TPM state is decoupled from the server. For example all disks can be wiped and the OS reinstalled with no effect on imported keys.)
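
As a sketch of the KMS route: AWS KMS supports HMAC keys whose raw material never leaves the service, exposed through the GenerateMac API. The key alias below is a placeholder; assuming such a key exists, the password hashing oracle reduces to a remote API call.

import boto3

kms = boto3.client("kms")

def keyed_hash(password: str, salt: bytes) -> bytes:
    response = kms.generate_mac(
        KeyId="alias/password-hashing-key",  # placeholder key alias
        MacAlgorithm="HMAC_SHA_256",
        Message=salt + password.encode(),
    )
    return response["Mac"]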

Threat-model and improving chances of detection

In all cases the threat model shifts. Pure “offline” attacks are no longer feasible even if an attacker can get hold of password hashes. (Short of a physical attack where the adversary walks into the datacenter and rips an HSM out of the rack or yanks the TPM out of the server board— hardly stealthy.) However that is not the only scenario where an attacker can access the same hash function used by the defenders to create those hashes. After getting sufficient privileges in the production environment, a threat actor can also make calls to the same HSM, KMS or TPM used by the legitimate system. That means they can still mount a guessing attack by running millions of hashes and comparing against known targets. So what did the defenders gain?

First note this attack is very much online. The attacker must interact with the gadget or remote service performing the keyed hash for each guess. It can not be run in a sealed environment controlled by the attacker. This has two useful consequences:

  • Guessing speed is capped. It does not matter how many GPUs the attacker has at their disposal. The bottleneck is the number of HMAC computations the selected blackbox can perform. (Seen in this light, TPMs have an additional virtue: they are much less powerful than both HSMs and typical cloud KMSes. But this is a consequence of their limited hardware rather than any deliberate attempt to waste cycles.)
  • Attackers risk detection. In order to invoke the keyed hash they must either establish persistence in the production environment where the cryptographic hardware resides or invoke the same remote APIs on a cloud KMS service. In the first case this continued presence increases the signals available to defenders for detecting an intrusion. In some cases the cryptographic hardware itself could provide clues. For example many HSMs have an audit trail of operations. A sudden spike in the number of requests could be a signal of unauthorized access. Similarly in the second case usage metrics from the cloud provider provide a robust signal of unexpected use. In fact since KMS offerings typically charge per operation, the monthly bill alone becomes evidence of an ongoing brute-force attack.

CP

AWS CloudHSM key attestations: trust but verify

(Or, scraping attestations from half-baked AWS utilities)

Verifying key provenance

Special-purpose cryptographic hardware such as an HSM is one of the best options for managing high-value cryptographic secrets, such as private keys controlling blockchain assets. When significant time is spent implementing such a heavyweight solution, it is often useful to be able to demonstrate this to third-parties. For example the company may want to convince customers, auditors or even regulators that critical key material exists only on fancy cryptographic hardware, and not on a USB drive in the CEO’s pocket or some engineer’s commodity laptop running Windows XP. This is where key attestations come in handy. An attestation is effectively a signed assertion from the hardware that a specific cryptographic key exists on that device.

At first that may not sound particularly reassuring. While that copy of the key is protected by expensive, fancy hardware, what about other copies and backups lying around on that poorly guarded USB drive? These concerns are commonly addressed by design constraints in HSMs which guarantee keys are generated on-board the hardware and can never be extracted in the clear. The first part guarantees no copies of the key existed outside the trusted hardware boundary before it was generated, while the second part guarantees no other copies can exist after generation. This notion of being “non-extractable” means it is not possible to observe raw bits of the key, save them to a file, write them on a Post-It note, turn them into a QR code, upload them to Pastebin or any of the dozens of other creative ways ops personnel have compromised key security over the years. (To the extent backups are possible, it involves cloning the key to another unit from the same manufacturer with the same guarantees. Conveniently that creates lock-in to one particular model in the name of security— or what vendors prefer to call “customer loyalty.” 🤷🏽)

CloudHSM, take #2

Different platforms handle attestation in different ways. For example in the context of Trusted Platform Modules, the operations are standardized by the TPM2 specification. This blog post looks at AWS CloudHSM, which is based on Nitrox HSMs from Marvell, the company previously named Cavium. Specifically, this is the second version of the Amazon-hosted HSM offering. The first iteration (now deprecated) was built on Thales née Gemalto née Safenet hardware. (While the technology inside an HSM advances slowly due to FIPS certification requirements, the nameplate on the outside can change frequently with mergers & acquisitions of manufacturers.)

Attestations only make sense for asymmetric keys, since it is difficult to convey useful information about a symmetric key without actually leaking the key itself. For asymmetric cryptography, there is a natural way to uniquely identify private keys: the corresponding public key. It is sufficient, then, for the hardware to output a signed statement to the effect “the private key corresponding to public key K is resident on this device with serial number #123.” When the authenticity of that statement can be verified, the purpose of attestation is served. Ideally that verification involves a chain of trust going all the way back to the hardware manufacturer, who is always part of the TCB. Attestations are signed with a key unique to each particular unit. But how can one be confident that unit is supposed to come with that key? Only the manufacturer can vouch for that, typically by signing another statement to the effect “device with serial #123 has attestation-signing key A.” Accordingly every attestation can be verified given a root key associated with the hardware manufacturer.

If this sounds a lot like the hierarchical X509 certificate model, that is no coincidence. The manufacturer vouches for a specific unit of hardware it built, and that unit in turn vouches for the pedigree of a specific user-owned key. X509 certificates seem like a natural fit. But not all attestation models have historically followed the standard. For example the TPM2 specification defines its own (non-ASN1) binary format for attestations. It also diverges from the X509 format, relying on a complex interactive protocol to improve privacy, by having a separate, static endorsement key (itself validated by a manufacturer-issued X509 certificate, confusingly enough) and any number of attestation keys that sign the actual attestations. Luckily Marvell has hewed closely to the X509 model, with the exception of the attestations themselves where another home-brew (again, non-ASN1) binary format is introduced.

Trust & don’t bother to verify?

There is scarcely any public documentation from AWS on this proprietary format. In fact, given the vast quantity of guidance on CloudHSM usage, there is surprisingly no mention of proving key provenance. There is one section on verifying the HSM itself— neither necessary nor sufficient for our objective. That step only covers verifying the X509 certificate associated with the HSM, proving at best that there is some Marvell unit lurking somewhere in the AWS cloud. But that is a long way from proving that the particular blackbox we are interacting with, identified by a private IP address within the VPC, is one of those devices. (An obvious question is whether TLS could have solved that problem. In fact the transport protocol does use certificates to authenticate both sides of the connection but in an unexpected twist, CloudHSM requires the customer to issue that certificate to the HSM. If there were a preexisting certificate already provisioned in the HSM that chains up to a Marvell CA, it would indeed have proven the device at the other end of the connection is a real HSM.)

Neither the CloudHSM documentation nor the latest version of the CloudHSM client SDK (V5) has much to say on obtaining attestations for a specific key generated on the HSM. There are references to attestations in certain subcommands of key_mgmt_util, specifically for key generation. For example the documentation for genRSAKeyPair states:


-attest

Runs an integrity check that verifies that the firmware on which the cluster runs has not been tampered with.

This is at best an unorthodox definition of key attestation. While missing from the V5 SDK documentation, there are also references in the “previous” V3 SDK (makes you wonder what happened to V4?) to the same optional flag being available when querying key attributes with the “getAttribute” subcommand. That code path will prove useful for understanding attestations: each key is only generated once, but one can query attributes any number of times to retrieve attestations.

Focusing on the V3 SDK, which is no longer available for download, one immediately runs into problems with ancient dependencies: it is linked against OpenSSL 1.x, which prevents installation out-of-the-box on modern Linux distributions.

But even after jumping through the necessary hoops to make it work, the result is underwhelming: while the utility claims to retrieve and verify Marvell attestations, it does not expose the attestation to the user. In effect these utilities are asserting: “Take our word for it, this key lives on the HSM.” That defeats the whole point of generating attestations, namely being able to convince third-parties that keys are being managed according to certain standards. (It also raises the question of whether Amazon itself understands the threat model of a service for which it is charging customers a pretty penny.)

Step #1: Recovering the attestation

When existing AWS utilities will not do the job, the logical next step is writing code from scratch to replicate their functionality while saving the attestation, instead of throwing it away after verification. But that requires knowledge of the undocumented APIs offered by Marvell. While CloudHSM is compliant with the standard PKCS#11 API for accessing cryptographic hardware, PKCS#11 itself does not have a concept of attestations. Whatever this Amazon utility is doing to retrieve attestations involves proprietary APIs or at least proprietary extensions to APIs such as a new object attribute which neither Marvell nor Amazon have documented publicly. (Marvell has a support portal behind authentication, which may have an SDK or header files accessible to registered customers.) 

Luckily recovering the raw attestation from the AWS utility is straightforward. An unexpected assist comes from the presence of debugging symbols, making it much easier to reverse engineer this otherwise blackbox binary. Looking at function names with the word “attest”, one stands out prominently:

[ec2-user@ip-10-9-1-139 1]$ objdump -t /opt/cloudhsm/bin/key_mgmt_util  | grep -i attest
000000000042e98b l     F .text 000000000000023b              appendAttestation
000000000040516d g     F .text 0000000000000196              verifyAttestation

We can set a breakpoint on verifyAttestation with GDB:

(gdb) info functions verifyAttestation
All functions matching regular expression "verifyAttestation":
File Cfm3Util.c:
Uint8 verifyAttestation(Uint32, Uint8 *, Uint32);
(gdb) break verifyAttestation
Breakpoint 1 at 0x40518b: file Cfm3Util.c, line 351.
(gdb) cont
Continuing.

Next generate an RSA key pair and request an attestation with key_mgmt_util:

Command:  genRSAKeyPair -sess -m 2048 -e 65537 -l verifiable -attest
Cfm3GenerateKeyPair returned: 0x00 : HSM Return: SUCCESS
Cfm3GenerateKeyPair:    public key handle: 1835018    private key handle: 1835019

The breakpoint is hit at this point, after key generation has already completed and key handles for public/private halves returned. (This makes sense; an attestation is only available after key generation has completed successfully.)

Breakpoint 1, verifyAttestation (session_handle=16809986, response=0x1da86e0 "", response_len=952) at Cfm3Util.c:351
351 Cfm3Util.c: No such file or directory.
(gdb) bt
#0  verifyAttestation (session_handle=16809986, response=0x1da86e0 "", response_len=952) at Cfm3Util.c:351
#1  0x0000000000410604 in genRSAKeyPair (argc=10, argv=0x697a80 <vector>) at Cfm3Util.c:4555
#2  0x00000000004218f5 in CfmUtil_main (argc=10, argv=0x697a80 <vector>) at Cfm3Util.c:11360
#3  0x0000000000406c86 in main (argc=1, argv=0x7ffdc2bb67f8) at Cfm3Util.c:1039

Owing to the presence of debugging symbols, we also know which function argument contains the pointer to the attestation in memory (“response”) and its size (“response_len”). GDB can save that memory region to file for future review:

(gdb) dump memory /tmp/sample_attestation response response+response_len

Side note before moving on to the second problem, namely making sense of the attestation: While this example showed interactive use of GDB, in practice the whole setup would be automated. GDB allows defining automatic commands to execute after a breakpoint, and also allows launching a binary with a debugging “script.” Combining these capabilities:

  • Create a debugger script to set a breakpoint on verifyAttestation. The breakpoint will have an associated command to write the memory region to file and continue execution. In that sense the breakpoint is not quite “breaking” program flow but taking a slight detour to capture memory along the way.
  • Invoke GDB to load this script before executing the AWS utility. (A sketch of this setup follows below.)
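
A minimal sketch of such a command file, reusing the breakpoint and dump commands from the interactive session above (the output path is arbitrary):

# capture_attest.gdb
set pagination off
break verifyAttestation
commands
  silent
  dump memory /tmp/sample_attestation response response+response_len
  continue
end
run

Loading it is a one-liner: gdb -quiet -x capture_attest.gdb /opt/cloudhsm/bin/key_mgmt_util drops into the usual key_mgmt_util prompt, with attestations captured transparently as commands execute.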

Step #2: Verifying the signature

Given attestations in raw binary format, the next step is parsing and verifying the contents, mirroring what the AWS utility is doing in the “verifyAttestation” function. Here we specifically focus on attestations returned when querying key attributes because that presents a more general scenario: key generation takes place only once, while attributes of an existing key can be queried anytime.

By “attributes” we are referring to PKCS#11 attributes associated with a cryptographic object present on the HSM. Some examples:

  • CKA_CLASS: Type of object (symmetric key, asymmetric key…)
  • CKA_KEY_TYPE: Algorithm associated with a key (eg AES, RSA, EC…)
  • CKA_PRIVATE: Does using the key require authentication?
  • CKA_EXTRACTABLE: Can the raw key material be exported out of the HSM? (PKCS#11 has an interesting rule that this attribute can only be changed from true→false; it can not go in the other direction.)
  • CKA_NEVER_EXTRACTABLE: Was the CKA_EXTRACTABLE attribute ever set to true? (This is important when establishing whether an object is truly HSM-bound. Otherwise one can generate an initially extractable key, make a copy out of the HSM and later flip the attribute.)

Experiments show the exact same breakpoint for verifying attestations is triggered through this alternative code path when the “-attest” flag is present:

Command:  getAttribute -o 524304 -a 512 -out /tmp/attributes -attest
Attestation Check : [PASS]
Verifying attestation for value
Attestation Check : [PASS]
Attribute size: 941, count: 27
Written to: /tmp/attributes file
Cfm3GetAttribute returned: 0x00 : HSM Return: SUCCESS

Here is an example of the text file written with all attributes for an RSA key. Once again the attestation itself is verified and promptly discarded by the utility under normal execution. But the debugger tricks described earlier help capture a copy of the original binary blob returned. There is no public documentation from AWS or Marvell on the internal structure of these attestations. Until recently there was a public article on the Marvell website (no longer resolves) which linked to two Python scripts that are still accessible as of this writing.

These scripts are unable to parse attestations from step #1, possibly because they are associated with a different product line or perhaps different version of the HSM firmware. But they offer important clues about the format, including the signature format: it turns out to be the last 256 bytes of the attestation, carrying a 2048-bit RSA signature. In fact one of the scripts can successfully verify the signature on a CloudHSM attestation, when given the partition certificate from the HSM:

[ec2-user@ip-10-9-1-139 clownhsm]$ python3 verify_attest.py partition_cert.pem sample_attestation.bin 
*************************************************************************
Usage: ./verify_attest.py <partition.cert> <attestation.dat>
*************************************************************************
verify_attest.py:29: DeprecationWarning: verify() is deprecated. Use the equivalent APIs in cryptography.
  crypto.verify(cert_obj, signature, blob, 'sha256')
Verify3 failed, trying with V2
RSA signature with raw padding verified
Signature verification passed!
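
To make the “raw padding” finding concrete, here is a rough sketch of what that verification amounts to, assuming (as the script output suggests) the last 256 bytes are a textbook RSA signature whose recovered value ends with the SHA-256 digest of the preceding bytes. Filenames mirror the session above.

import hashlib
from cryptography import x509

def verify_attestation(cert_pem: bytes, blob: bytes) -> bool:
    public_key = x509.load_pem_x509_certificate(cert_pem).public_key()
    numbers = public_key.public_numbers()  # RSA public key assumed
    body, signature = blob[:-256], blob[-256:]
    # "Raw padding": recover signature^e mod n and compare the trailing
    # 32 bytes against the SHA-256 digest of the attestation body.
    recovered = pow(int.from_bytes(signature, "big"), numbers.e, numbers.n)
    return recovered.to_bytes(256, "big").endswith(hashlib.sha256(body).digest())

print(verify_attestation(open("partition_cert.pem", "rb").read(),
                         open("sample_attestation.bin", "rb").read()))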

Step #3: Parsing fields in the attestation

Looking at the remaining two scripts we can glean how PKCS#11 attributes are encoded in general. Marvell has adopted the familiar tag-length-value model from ASN1, yet the encoding is inexplicably not ASN1. Instead each attribute is represented as the concatenation of:

  • Tag containing the PKCS#11 attribute identifier, as a 32-bit integer in big-endian format
  • Length of the attribute value in bytes, also a 32-bit integer in the same format
  • Variable-length byte array containing the value of the attribute

One exception to this pattern is the first 32 bytes of an attestation. That appears to be a fixed-size header containing metadata, which does not conform to the TLV pattern. Disregarding that section, here is a sample Python script for parsing attributes and outputting them using friendly PKCS11 names and appropriate formatting where possible. (For example CKA_LABEL as string, CKA_SENSITIVE as boolean and CKA_MODULUS_BITS as plain integer.)
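
For reference, a minimal sketch of such a parser (not the original script; the attribute IDs are standard PKCS#11 values, while the 32-byte header and trailing 256-byte signature follow the observations above):

import struct

PKCS11_NAMES = {
    0x0000: "CKA_CLASS",
    0x0002: "CKA_PRIVATE",
    0x0003: "CKA_LABEL",
    0x0100: "CKA_KEY_TYPE",
    0x0103: "CKA_SENSITIVE",
    0x0121: "CKA_MODULUS_BITS",
    0x0162: "CKA_EXTRACTABLE",
    0x0164: "CKA_NEVER_EXTRACTABLE",
}

HEADER_SIZE = 32      # fixed-size metadata header, skipped
SIGNATURE_SIZE = 256  # trailing 2048-bit RSA signature, skipped

def parse_attributes(blob: bytes) -> dict:
    body = blob[HEADER_SIZE:-SIGNATURE_SIZE]
    attributes, offset = {}, 0
    while offset + 8 <= len(body):
        # 32-bit big-endian tag and length, then the variable-length value
        tag, length = struct.unpack_from(">II", body, offset)
        value = body[offset + 8:offset + 8 + length]
        attributes[PKCS11_NAMES.get(tag, f"CKA_0x{tag:04X}")] = value
        offset += 8 + length
    return attributes

for name, value in parse_attributes(open("sample_attestation.bin", "rb").read()).items():
    print(name, value.hex())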

CP