Sunday, March 31, 2019

Inadequate Social Media Presence

As you can tell by my infrequent posts, I’m not that interested in social media. I don’t like (or use) Facebook, Twitter, Instagram, or much of anything beyond email. I may have set up a couple accounts years ago, but they have all fallen into disuse.

I decided to seek new employment, and now I see application forms prompting for my social media addresses and letting me know that they'll collect information from those sources.

In identity speak, my online identity is very sparse in the area of social media attributes. So, not liking other forms of social media, I’m going to try to breathe some new life into my old blog. I hope this proves mutually beneficial, and that we can learn from each other.

Thursday, September 29, 2011

Don’t Forget the Back-End!

Over the years we’ve spent lots of time worrying about the security characteristics of various types of authentication tokens, with broad consensus that static passwords reek. And we’ve put lots of thought into the processes we use to vet users’ identity and to bind tokens to users. And of course we’ve put lots of effort into lifecycle management and processes to disable authenticators when they are no longer needed.

NIST Special Publication 800-63-1 is a pretty good exploration of topics like those mentioned above, and describes how they contribute to an authentication event’s level of assurance. However, I don’t see any mention of back-end authentication systems in the NIST document.

We’ve progressed far enough that the authentication back-end systems have now become attractive attack points. Why should attackers try to steal someone’s smart card if they can steal the certificate authority’s certificate signing key? Why should attackers try to replay a SAML assertion if they can steal the IdP’s assertion signing key? Why should attackers try to steal someone’s OTP token if they can steal all the tokens’ shared secrets from the back-end OTP verification system?

Even if attackers are unable to steal OTP tokens’ secrets from a company’s back-end OTP verification system, they may be able to steal the token secrets from the token supplier, or some third party contracted to inexpensively program OTP tokens. Did RSA ever confirm our conjecture that their customers’ OTP token secrets were stolen in a recent breach?

Even if attackers are unable to actually steal secrets and/or keys (thank heavens for hardware security modules), they may be able to compromise the back-end servers to maliciously exercise the secrets and/or keys, thereby generating what appear to be valid certificates, assertions, or OTP values.

Even if attackers are unable to maliciously exercise a back-end server’s secrets and/or keys, they may be able to inject malware onto a RADIUS, LDAP, or OTP server that returns a success status for every authentication, bind, or verification request.
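
To make that last point concrete, here's a tiny illustrative sketch in Python (not any real RADIUS, LDAP, or OTP product's code; the function names are made up). No matter how strong the tokens, smart cards, and HSMs in front of it are, the whole system ultimately trusts one boolean returned by the back-end verifier:

    import hmac

    def verify(submitted_credential: str, expected_credential: str) -> bool:
        """What the back-end verification routine is supposed to do."""
        return hmac.compare_digest(submitted_credential, expected_credential)

    def verify_compromised(submitted_credential: str, expected_credential: str) -> bool:
        """The same routine after malware patches the server: every request 'succeeds'."""
        return True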

Don’t forget the back-end! Are your back-end authentication servers sufficiently hardened? Are they in secure network enclaves? Are your secrets and keys protected by HSMs? Are single-factor admin passwords used to control access to your multi-factor authentication systems? Are you confident that your virtual machine hypervisor doesn't open attack channels to your hosted authentication servers? Could compromised workstations used for remote administration introduce malware to your authentication servers?

Yikes! How do we adequately protect the back-end? And how should we include back-end considerations in determining authentication level of assurance?

Monday, August 8, 2011

Anything goodware can do, malware can do.

Over the past couple years I’ve been repeating these words so often that I’m now calling it Schleiff’s Law. It may seem presumptuous or vain for me to name something after myself, but I got tired of waiting for anyone else to do me the honor.

By goodware I mean software that is neither written nor used with malicious intent. And by malware I mean software that is written and/or used with malicious intent.

Much of my time is spent working on ways to better authenticate computer users to the services they access. It’s now generally accepted that passwords provide only nominal assurance about users’ claimed identity. We generally want to attain better assurance, hopefully without severely degrading usability, and at reasonable cost. The following paragraphs discuss how Schleiff's Law applies to various authentication methods:

  • Soft Certificates

    By soft certificates I mean X.509 certificates containing a public key, where the associated private key is stored on a computer's disk (as opposed to the private key being stored in a hardware keystore such as a smart card). I don't like soft certificates because the private key can be stolen or copied by an attacker, and the compromise is not readily evident to the rightful user of the key. Even if the key is not stolen/copied, malware on the PC could perform the same crypto operations that can be performed by goodware on the PC. Even if the private key is protected by a password, malware (e.g., a key logger) could capture the user-entered password, use the password to unlock the private key, and then operate with the private key for malicious purposes.

  • Hardware-Based Certificates

    By hardware-based certificates I mean X.509 certificates containing a public key, where the associated private key is stored on a separate hardware token, such as a smart card. Such credentials are generally accepted as the most secure for purposes of user authentication. I like hardware-based certificates on smart cards quite a bit. I use one every day in my day job, and I think that after a bit of practice, they are reasonably user friendly. However, even smart cards are susceptible to Schleiff's Law. Whenever a smart card is inserted into a PC's smart card reader, malware on the PC could submit requests to the smart card to perform private key operations for malicious purposes. Even if the smart card is protected by a PIN, malware (e.g., a key logger) could capture the user-entered PIN, use the PIN to unlock the smart card, and then operate with the private key for malicious purposes. Because malware can only exploit a smart card's private key while the card is inserted in the reader, at least one vendor of smart cards and associated middleware provides software that detects an inserted smart card and prompts the user to remove it after a few minutes.

  • TPM-Based Certificates

    By TPM-based certificates I mean X.509 certificates containing a public key, where an encrypted version of the associated private key is stored on a computer's disk, and where the encryption was performed by a key in the computer's Trusted Platform Module (TPM) chip. The only place the private key can be decrypted for use is inside the TPM chip, thus protecting the clear text private key from theft (unless of course the whole PC including the TPM chip is stolen). Because the TPM chip is physically attached to a PC's motherboard, it cannot be removed from the PC, and is therefore always subject to malware that might be running on the PC. Even if the TPM chip is protected by a PIN, malware (e.g., a key logger) could capture the user-entered PIN, use the PIN to unlock the TPM chip, and then operate with the private key for malicious purposes.

  • PC-Based Biometrics

    I admittedly know little about biometrics. However, I think that biometrics relying on PC-attached readers and/or PC-resident software are also susceptible to Schleiff's Law. If there's software on the PC to enable biometric capabilities, then malware on the PC could perform the same biometric tasks for malicious purposes. Even if the biometrics are stored on a smart card, malware on a PC could interact with an inserted smart card in the same way that goodware could.

  • One-Time Passwords (OTP)

    OTP systems generally rely on some sort of token assigned to a user, and in that user's possession. The token and the OTP management/verification system share a secret key. An algorithm that can be executed both at the token and at the management/verification system operates on the secret and another piece of dynamic shared data (e.g., an event counter, or time). Because the dynamic shared data changes on each use (or every few seconds if time-based), the algorithm produces a different result (i.e., a one-time password) on each execution. A minimal sketch of this kind of algorithm appears after this list. OTP tokens can be either soft tokens or hard tokens:

    • OTP Soft Tokens

      By OTP soft token I mean a piece of software running on a PC, or a smart phone, that knows the shared secret and can execute an OTP algorithm to generate one-time passwords. Execution of the software may require the user to enter a PIN. Such a PIN could be discovered by an attacker using keystroke logging malware. Malware could also invoke the soft token software to generate one-time passwords for its own malicious use. Or, the soft token could be stolen/copied from the user's PC (or smart phone) to an attacker's system to be executed whenever the attacker wishes.

    • OTP Hard Tokens

      By OTP hard token I mean a separate hardware token into which the shared secret is loaded, and from which the shared secret can never be extracted. The hard token also keeps track of the dynamic shared data (e.g., an event counter, or time), and can execute an OTP algorithm to generate one-time password values. The shared secret, protected in the hard token, is not susceptible to copying or theft (unless the whole token is stolen).

      • Without PC Connection

        Hard tokens often take the form of a key fob with no connection to a PC, so they are NOT susceptible to malware running on the PC (or smart phone).

      • With PC Connection

        In some cases OTP hard tokens include a USB connector and can be connected to a PC. In another case, Intel's IPT (Identity Protection Technology) provides the OTP function via an Intel chip on the PC's motherboard. Because such tokens are not fully air-gapped from the PC, they warrant increased suspicion about their susceptibility to malware running on the PC.

  • Out Of Band Passwords

    By out of band passwords I mean the delivery of a shared secret to a user on some channel other than to the user's PC. Examples include sending a registration code to a user's home address, or sending a logon code via SMS or text-to-voice to a user's cell phone. Malware running on the user's PC has no access to such codes/PINs/passwords until the user enters the value into the PC, at which point it's probably too late to be of use to malware.
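
As mentioned in the OTP item above, here is a minimal sketch of that shared-secret algorithm in Python, in the spirit of HOTP (RFC 4226) and TOTP (RFC 6238) rather than any particular vendor's token; the function names and the example secret are illustrative. Both the token and the back-end verifier hold the same secret, so both can compute the same one-time value for the same counter or time step:

    import hashlib
    import hmac
    import struct
    import time

    def hotp(shared_secret: bytes, counter: int, digits: int = 6) -> str:
        """Event-based OTP: the dynamic shared data is an 8-byte counter."""
        msg = struct.pack(">Q", counter)
        digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                  # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(shared_secret: bytes, step_seconds: int = 30, digits: int = 6) -> str:
        """Time-based OTP: the dynamic shared data is the current time step."""
        return hotp(shared_secret, int(time.time()) // step_seconds, digits)

    # Both sides compute the same value; it changes with every counter increment
    # (or every 30 seconds), which is what makes it a one-time password.
    print(totp(b"per-token shared secret"))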

I believe the authenticators most resistant to Schleiff's Law are OTP hard tokens (without any connection to the PC) and out of band passwords. Of course, even though they may avoid Schleiff's Law, they are still susceptible to other attacks, most notably man-in-the-middle attacks.

Tuesday, June 14, 2011

Identity for Data - MyDataClaims

I've been tinkering with a new concept (at least it's new for me). It's a way to prove that you were in possession of particular data at a particular time. I hope people will find this useful as a way to protect copyright on creative works, to protect ideas before revealing them to others, to retain rights to ideas you had before starting work for a new employer that makes you sign away rights to ideas conceived while employed there, etc.

Anything you can represent digitally (even a sketch on a napkin can be digitally photographed) can be given an identifier, and be tagged with descriptive attributes. And a set of attributes including at least one identifier constitutes an identity (in this case identity for data). I call this identity a DataClaim.

A DataClaim is a signed SAML assertion about some data, with an identifier based on a hash of the data, and including attribute assertions about the individual in possession of the data, a timestamp, and some other info. It will be interesting to get some feedback on this concept. Please take a peek at the following URL:


It's not quite in production yet, but the functionality is pretty much in place. After my upcoming vacation I plan to generate new keys and certificates for signing the SAML assertions, and then I'll declare an official launch of MyDataClaims.
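
Here's a rough sketch of the idea in Python (using the pyca/cryptography package). It's not the MyDataClaims implementation, which wraps the claim in a signed SAML assertion rather than the JSON stand-in used here, and all the field names are illustrative. The essential ingredients are a hash-based identifier for the data, attributes about the claimant, a timestamp, and a signature over the whole statement:

    import hashlib
    import json
    from datetime import datetime, timezone

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def make_data_claim(data: bytes, claimant: str, signing_key) -> dict:
        """Build and sign a claim that 'claimant' possessed 'data' at this moment."""
        claim = {
            "identifier": "urn:dataclaim:sha256:" + hashlib.sha256(data).hexdigest(),
            "claimant": claimant,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        signature = signing_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
        return {"claim": claim, "signature": signature.hex()}

    # Demo: prove possession of (a digital photo of) that napkin sketch.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    print(make_data_claim(b"...image bytes...", "someone@example.com", key))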

Let me know what you think.

Tuesday, March 17, 2009

User Identity Reference Model - March Update

A couple months ago we made another couple tweaks to our identity model (see v20 below), and nothing has changed recently. I think that means we're pretty satisfied with its current state. We had a grueling couple months trying to figure out our direction for dealing with multiple personas. Now we're evangelizing our new direction and seeking support from around the company.

The new economy has curtailed most of my involvement in external activities. I'm now asked to focus more on internal projects. So to many of my friends, it may appear I'm hibernating until economic conditions improve.

Thursday, December 18, 2008

User Identity Reference Model - December Update

I'm a complete slacker in trying to lead an effort for the Concordia Identity Reference Model. This is due to lack of bandwidth, low levels of participation, and my own frustration at conceptualizing things differently than most other people.

At my day job we continue to use and evolve the model. We used it to facilitate discussion about Testing IDs and to illustrate the approach we settled on. In November we shifted discussions from Testing IDs to individuals with Multiple Personas.

Here's today's version of the model:

Wednesday, November 12, 2008

My Preferences for the User Identity Reference Model

I haven't yet received any feedback on this version of the diagram, but it represents my current preferences.