Vectors of Trust

Anyone working in the identity industry will tell you that identity is difficult to define. Anyone who says it’s simple is trying to sell you their definition. Even harder than a universal definition, and perhaps more important, is a consistent means of measurement. People can still log in if we disagree on the details of identity, but if your idea of a strong identity doesn’t line up with mine, we’re going to have a difficult time connecting.


Vectors of Trust (VoT) attempts to define a way to measure identity that’s balanced between expressive detail and simplicity of processing, and the standard was recently published as RFC8485 [https://tools.ietf.org/html/rfc8485]. VoT can describe how a user’s account was proofed, how their credentials were proven, how those credentials are managed, and how that whole bundle of information should be carried between systems. These claims are encoded as separate component values, such as P2 or Cc, and conveyed together as a single set such as P2.Cc.Ma.Aa. VoT represents a shift in thinking within the identity industry on how to measure and reason about identity. To understand where we are, it’s important to understand a bit about where we’ve come from.
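As a rough sketch (not code from the RFC), the following shows the shape of such a value: individual component values are short letter-plus-value codes, and a full vector is just those codes joined with periods, as in the P2.Cc.Ma.Aa example above.

```python
# Illustrative only: building a Vectors of Trust value from component values.
# The component codes here are the ones used as examples in this post.
components = ["P2", "Cc", "Ma", "Aa"]  # proofing, credential usage, management, assertion

# Component values are joined with periods into a single compact string.
vot_value = ".".join(components)
print(vot_value)  # -> P2.Cc.Ma.Aa
```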


Nearly a decade and a half ago, the U.S. federal government came up with a decent attempt at measuring identity, enshrined in the Level of Assurance (LoA) scale [https://csrc.nist.gov/publications/detail/sp/800-63/2/archive/2013-08-29]. As users proved their real-world identities more strongly, the credentials they used to log in had to be stronger as well. These requirements were distilled into a simple four-point scale from low to high. For the government’s use cases, the LoA model’s simplicity makes sense, but many other valid use cases don’t fit. For example, take an anonymous protester: you’d want to be very sure that the account is strongly authenticated, but it would be counterproductive to strongly verify their name and address and tie that information to the account. Furthermore, there are different ways to interpret a situation and assign a level to an account, meaning one group’s LoA2 isn’t compatible with another group’s LoA2. Other groups might need more granularity than the four points in the U.S. system, inventing values like LoA2.5 to cover the gaps.


Alternatively, if we say that an identity is a collection of attributes, we could reason across those attributes to figure out how much to trust the user. These attributes include everything from the user’s name and email address to physical attributes like their eye color. Each attribute can itself have attributes, such as how it was collected, who verified it, its provenance chain, and when any actions were taken on it [https://pages.nist.gov/NISTIR-8112/]. This approach is incredibly expressive and allows for deeply customized and reactive policies. While our protester would be readily accommodated by such a system, most transactions don’t need that level of detailed reasoning. It’s not helpful to make the simple cases pay the full cost of the complex cases.
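To make “attributes of attributes” concrete, here is a purely illustrative sketch of what a single attribute carrying its own metadata might look like; the field names are invented for this post, not taken from NISTIR 8112.

```python
# Hypothetical example: one identity attribute carrying its own metadata
# (how it was collected, who verified it, its provenance, and when).
email_attribute = {
    "name": "email",
    "value": "alice@example.com",
    "metadata": {
        "origin": "self-asserted",            # how it was collected
        "verifier": "https://idp.example",    # who verified it (hypothetical)
        "verified_on": "2018-11-02",          # when that verification happened
        "provenance": ["user-entered", "confirmed-via-email-link"],
    },
}
```

An RP reasoning at this level has to evaluate metadata like this for every attribute in every transaction, which is exactly the cost described above.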


VoT gives us something in the middle, allowing us to describe an identity across a small number of orthogonal dimensions in much the same way a mathematical vector can describe measurements in multiple dimensions in a single construct. Like LoA, VoT presents a value for the identity transaction as a whole, and not the individual attributes. Unlike LoA, VoT breaks the measurement of that transaction into several constituent parts instead of lumping everything into one bucket, but not at the granularity of individual attribute metadata.


Measuring Identity with VoT


VoT defines four core categories that describe how an identity is measured. Identity Proofing describes how the real-world information about a person was verified for an account. Did the user simply claim all the attributes, or did they have to show an authorized agent a set of verifiable official documentation? Credential Usage describes the authentication activities that have led to an account being “logged in”. Did we just recognize their browser cookie, or did we ask them to prove possession of a cryptographic device with a biometric unlock capability? Maybe multiple factors were used together, and we need to list them all simultaneously. Credential Management describes how the binding process between the proofed identity and the authenticators is executed. Can anyone sign up for a new account, or are the credentials only issued in a limited context? Assertion Presentation describes how all of this information is asserted across the network in a federation context. Did the transaction and all of its attributes get delivered through a web browser, or did we also validate a secondary authentication key?
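Tying those categories back to the component values mentioned earlier, each category is abbreviated to a single prefix letter in a vector. A minimal sketch of that mapping, using the category names as described in this post:

```python
# The four VoT categories and the prefix letter their component values start with,
# matching the example vector P2.Cc.Ma.Aa used earlier.
VOT_CATEGORIES = {
    "P": "Identity Proofing",        # e.g. P2
    "C": "Credential Usage",         # e.g. Cc
    "M": "Credential Management",    # e.g. Ma
    "A": "Assertion Presentation",   # e.g. Aa
}
```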


These categories are the same basic inputs that went into determining LoA historically, and the latest version of NIST’s SP 800-63 [https://pages.nist.gov/800-63-3/] now divides identity measurement into identity proofing (IAL), authenticator credentials (AAL) and federation assertions (FAL). The similarity of these categories to those found in VoT is completely intentional.


Unlike LoA, the categories can vary independently of each other, and an individual component can be omitted from a vector entirely. An account can now be strongly bound to a verifiable credential without containing any identifying attributes. VoT concatenates values representing each aspect we want to convey into a single string and conveys that string to the relying party (RP) in a cryptographically protected assertion. The RP can easily split this string apart and process the individual values to determine how much it trusts a given vector.
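A minimal sketch of what that RP-side processing could look like, assuming the vector value has already been extracted from a verified assertion; the policy shown is a hypothetical RP requirement, and the real matching rules come from the governing trust framework rather than from code like this.

```python
# Illustrative only: split a received vector value into component values,
# group them by category prefix, and check them against a local policy.
def parse_vot(vot_value):
    """Group component values (e.g. 'P2', 'Cc') by their category prefix letter."""
    vector = {}
    for component in vot_value.split("."):
        vector.setdefault(component[0], set()).add(component)
    return vector

def satisfies(vector, required):
    """True if every required component value appears in the received vector."""
    return all(value in vector.get(value[0], set()) for value in required)

received = parse_vot("P2.Cc.Ma.Aa")   # value taken from the protected assertion
policy = {"P2", "Cc"}                 # hypothetical requirements for this RP
print(satisfies(received, policy))    # -> True
```

Grouping by prefix also leaves room for a category to carry more than one value at once, as in the multiple-factor case mentioned above for credential usage.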


What if your use case needs to express something subtly different, or what if you need to convey a different dimension? Each vector in VoT is interpreted by the RP in the context of a trust framework, which defines what all the parts of the vector mean in terms of policies and business rules. VoT provides a trust framework with component values in Appendix A of the RFC, but users of VoT are encouraged to define their own trust frameworks with more specific detail. A trust framework can even invent entirely new categories beyond the four general-purpose ones defined in RFC8485.
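In a federation protocol such as OpenID Connect, the RFC describes carrying the vector alongside a trustmark URL that tells the RP which trust framework to interpret it under. A sketch of what a decoded assertion payload might contain, assuming the vot/vtm claim names from the RFC’s OpenID Connect binding (all values here are invented for illustration):

```python
# Hypothetical decoded assertion payload (all values invented): the "vot" claim
# carries the vector, and the "vtm" trustmark URL identifies the trust framework
# whose definitions the RP should use to interpret it.
assertion_claims = {
    "iss": "https://idp.example",
    "sub": "alice",
    "vot": "P2.Cc.Ma.Aa",
    "vtm": "https://trustframework.example/definitions",
}
```

Because the trustmark points at the framework, two deployments can use the same component letters with different local meanings and still tell an RP exactly which rulebook applies.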


VoT is an attempt at striking a balance between expressive detail and simple computability. This shift in thinking is important. As identity becomes a cornerstone of more and more systems and our thinking about what identity is evolves with those systems, our need to measure identity and trust in a comparable fashion is going to become more pervasive. Privacy concerns dictate how we express our knowledge about users, while security concerns require more concrete expressions of trust. Like the merchant’s scales of old, it’s all about precise balance.
