
SE Radio 526: Brian Campbell on Proof-of-Possession Defenses

In this episode, Brian Campbell, Distinguished Engineer at Ping Identity, speaks with SE Radio's Priyanka Raghavan about cryptographic defenses against stolen tokens, particularly in the context of the OAuth2 protocol and the types of attacks that can plague it. They discuss the concept of "proof of possession" in protecting against such attacks, and where it is important to have this extra security, in banking applications for example, despite the additional costs of including it. They then take a deep dive into the OAuth2 mutual TLS (MTLS) specification and its two flavors: self-signed certificates and PKI certificates. They conclude with a discussion of the DPoP (Demonstrating Proof-of-Possession) specification and its suitability for use in the user interface layer, as well as the future of OAuth2, including Google's macaroon tokens.


Show Notes

Related Links 

Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Priyanka Raghavan 00:00:16 Hello everyone. This is Priyanka Raghavan for Software Engineering Radio, and today my guest is Brian Campbell. Brian is a Distinguished Engineer at Ping Identity, where he's responsible for designing a variety of products like PingFederate and the open-source JWT library jose4j, and mainly he's here on this show because he's a co-author on various IETF specifications. I was researching Brian on the IETF site before the show, and I noticed that he's been a part of specifications right from RFC 6755 in 2012 to now, 10 years later, including the three latest RFCs on OAuth 2.0. He also serves as an advisory board member at Identiverse and has spoken at various security conferences and written extensively on authorization and identity. Today we're mainly going to be talking about cryptographic defenses against stolen tokens, and I thought, what better guest than Brian to have on the show. So welcome, Brian. I'm really looking forward to this chat.

Brian Campbell 00:01:33 Oh, thank you, Priyanka. I’m happy to be here. Thanks for having me on.

Priyanka Raghavan 00:01:36 Is there anything else you would like listeners to know about you before we start the show?

Brian Campbell 00:01:42 No, I think you covered about everything and probably more than I really am. So, thanks for the kind intro.

Priyanka Raghavan 00:01:47 So let's just begin this journey. One of the things that we've done at Software Engineering Radio is we've actually talked a lot in previous episodes about identity, but also about authorization. We did a show on OAuth2 in 2019 with Justin Richer, where we mainly looked at OAuth2 in action. It was done by one of the hosts, and they really went into the details of the different OAuth2 grant types, et cetera, and they just kind of peeked into these defenses against stolen tokens. But increasingly in the news, we're seeing so many attacks happening with stolen tokens, and I thought, okay, this would be a good show to focus a little bit on how we can defend against such types of attacks. So before we actually get there, one of the things I wanted to do was a recap for our audience: in your own words, can you tell us what the OAuth2 protocol set out to do and the problem it was trying to solve?

Brian Campbell 00:02:48 Sure. Or I can try; it's actually sort of a deceptively difficult question to answer in any kind of succinct or meaningful way. And as you pointed out, you did a whole show on it that goes into the details, but let me try. So OAuth is an open IETF standard authorization protocol, or really it's called a framework because it is pretty open ended. And the main idea is it allows an end user to grant access to their own private resources stored on some site to a third-party site or application, but it grants that access without having to give up their username or password or any of their own actual login credentials to that third party. Those resources usually are exposed via some kind of HTTP API. It can be things like your calendar data, contacts list, the ability to read or write your status updates on a social site; it could be bank account info, really whatever.

Brian Campbell 00:03:41 And the problem that OAuth was primarily trying to solve was enabling that kind of access without requiring users to share their passwords across different sites, which is less of a problem nowadays because of OAuth, but it was increasingly becoming problematic at the time this started, where you were seeing websites ask for your Gmail address and password so that they could read your contact list. That practice in itself is one thing, but in order to do that, you were basically giving that third-party site access to your entire account to do whatever. And OAuth comes along and tries to make that sort of thing possible in a more constrained way that delegates limited rights to that client or application. And so what happens is typically a client, which is the OAuth term for the third-party application, sends the user via a browser to the authorization server, which is another OAuth term.

Brian Campbell 00:04:41 And the authorization server is the component that renders user interface for that user through the web, authenticates them if they're not already authenticated, and asks the user to approve the access that the client application is asking for. Assuming that all goes well, the authorization server redirects back to the client, including what's called an authorization code, which is just a little artifact that the client turns around and exchanges directly with the authorization server to get back some tokens, typically an access token and a refresh token. These tokens then represent and are the credentials for making this limited access, and the client can then use the access token to make API calls at what's often called the protected resource or the resource server, which holds the private resources that the end user has granted access to. OAuth has become and is a lot of other things as well, but that's sort of the main canonical use case and flow, how it works, and the entities involved and their names in the OAuth parlance.
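
To make the canonical flow concrete, here is a minimal sketch, in Python with the requests library, of the authorization-code exchange just described. The endpoints, client ID, secret, and code value are hypothetical placeholders, and error handling is omitted.

```python
import requests

# Hypothetical authorization server endpoint, for illustration only.
TOKEN_ENDPOINT = "https://auth.example.com/token"

# The client redeems the authorization code it received on the redirect
# back from the authorization server in exchange for tokens.
resp = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "authorization_code",
        "code": "SplxlOBeZQQYbYS6WxSbIA",        # code from the redirect
        "redirect_uri": "https://client.example.org/cb",
    },
    auth=("s6BhdRkqt3", "client-secret"),         # client authentication
)
tokens = resp.json()
access_token = tokens["access_token"]             # presented to the API
refresh_token = tokens.get("refresh_token")       # used to get new access tokens

# The access token is then presented, as a bearer credential, to the
# protected resource:
api = requests.get(
    "https://api.example.com/calendar",
    headers={"Authorization": f"Bearer {access_token}"},
)
```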

Priyanka Raghavan 00:05:45 Great. Another thing that you talked about is a token, right? If you talk to any developer, like a newbie developer, and you ask them, what's OAuth? They'll say, that's the JWT token. So could you maybe explain, what's the difference between a JWT and a bearer token? Are they the same thing?

Brian Campbell 00:06:04 They are the same thing and they're different. In fact, they are basically different classes of things, so comparing them like that is a bit of an apples-and-oranges comparison. Although JWT is a token format that was developed in the same working group in the IETF that developed OAuth, which I think only further compounds that confusion. But JWT is a token format. It's a style of token that contains the information, whatever is meant to be conveyed in the token, usually information about a user, called claims, in JSON as the payload of a token that's encoded and then typically signed. So it becomes a cryptographically secured token format that is most often a bearer token. Most often used as a bearer token; it doesn't have to be. But a bearer token is more of a concept or a classifier and not a format itself.

Brian Campbell 00:07:01 A bearer token is just any kind of token which can be used without any further proof of anything. Bearer, meaning the holder of it; a bearer token is any kind of token that you can just show up and use, and that alone grants access or is considered valid. So they're related but different. As I said, most JWTs as they're used in practice today are in fact bearer tokens, though they don't have to be. But bearer tokens are a broader class of things; in OAuth, the actual token format itself is undefined. So there are a lot of OAuth deployments that pass around tokens that are just sort of long random strings that serve as a reference to the actual data elsewhere, and those can be presented as bearer tokens as well. Either way, what makes it bearer is that the act of presenting it is all that's needed to use it.
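
Since a JWT is just encoded, signed JSON, a short standard-library sketch can show the distinction: decoding the payload reveals the claims, while "bearer" describes only how the token is accepted. The claims shown are invented.

```python
import base64
import json

# A JWT is three base64url segments: header.payload.signature.
# This decodes only the payload to show the claims; a real consumer
# must verify the signature before trusting any of them.
def jwt_claims(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

# A decoded payload might look like:
# {"iss": "https://auth.example.com", "sub": "user123",
#  "aud": "https://api.example.com", "exp": 1718000000}
#
# Whether this JWT is a bearer token is not a property of the format:
# it is bearer if the resource server accepts it on presentation alone.
```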

Priyanka Raghavan 00:07:55 One of the talks I listened to that you gave is called the Burden of Proof. And one of the things that struck me in that, and what I'm thinking about, is when you said that anybody who presents it, the bearer, can use it, and that different types of tokens can be bearer tokens, JWT being one. Would it be similar to, say, a currency?

Brian Campbell 00:08:14 Yeah, that's one of my favorite references, and certainly I didn't come up with it, but a bearer token in a lot of ways is equivalent to cash. So if I have a $5 bill, I can present it and use it to buy services anywhere. But if you steal my $5 bill, it's just as good to you as it was to me; you can use it to buy things at a store, and there are no additional checks beyond simply holding that token to consider it valid.

Priyanka Raghavan 00:08:41 And I think that probably plays into my next question, which is to kind of define the replay attack. So I guess that's the scenario where you can just steal a token, a bearer token, and then the attacks happen.

Brian Campbell 00:08:53 Yeah. So, I have a hard time with the phrase replay attack, just because I think it's used by a lot of different people in a lot of different ways to mean different things, and I'm not sure I have my head wrapped around one meaning that I really can stick to. But in general, I think it means the use, the play, the replay, of a bearer token by some entity for whom it wasn't originally intended. And that could come about from attacks on the OAuth protocol itself, where there have been issues with the way that the redirection URIs are validated that lead to token leakage, a whole variety of different things like that, that result in ways that, despite efforts to protect them from leakage, tokens do leak and do get stolen. More recently, there was news around GitHub, and some of, I don't know the exact details, but some third-party sort of automation tools integrating with GitHub had tokens stolen from them.

Brian Campbell 00:09:53 I think they were just stolen from storage at rest, but either way, sometimes tokens leak through log files, or sort of despite our best efforts they do sometimes leak out, and a replay attack then would be the use of that token after the fact. And because they're bearer, as we've talked about, whoever has the token, the thief, can then use it as though they are the legitimate holder of it. And that's not the right word, but there's nothing preventing a thief from using a token regardless of how it was obtained.

Priyanka Raghavan 00:10:26 I think I can clearly now understand the problem that we're trying to look at. But one of the things, before I dig deeper into this: I did see in blogs, not only by you but also by other security experts and people in the IETF, that they'd say that the majority of the time, the popularity of OAuth is because a bearer token is maybe enough for most of the cases. So can you just explain that a bit?

Brian Campbell 00:10:55 Yeah. It's sort of a fine line, and it's almost a hard thing for me to say and advocate for, but we do hear about attacks in the news. Things happen; there are problems with it. But what doesn't make the news is that the vast majority of stuff you do every day online is probably somehow protected by a bearer token, whether it's sort of classical OAuth, which you probably use online every day, or just regular old HTTP web sessions that are granted to you after you authenticate with a site. Those are most certainly, in almost all cases, bearer tokens; a session cookie is usually only a bearer token, and most OAuth tokens are usually bearer. And there are many things in place already that protect against their leakage or their theft, and for the most part, it works okay.

Brian Campbell 00:11:48 It's not to say it's perfect, but the point is the vast majority of stuff we already do is based on bearer tokens. And while there are some problems, there are some leakages, the world hasn't come crashing to an end, and it sustains itself pretty well for the majority of what we need to do every day. So having something more than that is nice; it adds defense in depth. But it's also proven to be somewhat difficult, so I think the combination of it being pretty good, almost good enough, versus the complexity of doing more has kept us in a space where bearer tokens really are kind of the mainstay. And in many ways that's okay. It's usually okay. It's not preventing some of us from trying to facilitate more, but it's not an end-of-the-world kind of scenario. It's a could-be-better kind of scenario, but in most cases it's probably all right.

Priyanka Raghavan 00:12:42 The reason I was asking that was also to talk a little bit about this concept of proof-of-possession. Maybe you could talk to us about it, given your long history with the IETF. It appears that this is not something new; it's been around for quite some time. For example, I looked at the Token Binding protocol, Version 1.0 I think it is, RFC 8471, and it was also talked about in OAuth1. So maybe you could just give us a brief history of this. Obviously all of you have been discussing this for a long time, and it's not something new. Could you just walk us through that a bit?

Brian Campbell 00:13:21 Yeah. So, proof-of-possession, and unfortunately it is often referred to by different names by different people, usually meaning generally the same thing, but it sort of confuses the space, and confuses me anyway. But proof-of-possession generally means or describes the idea that you're somehow demonstrating that a party that's sending a message is in possession of some particular cryptographic key, without directly exposing that key. So it's really just some kind of exchange or protocol that shows that the original message sender possesses some cryptographic key. And that in itself doesn't do anything other than show possession of that key. But what you have are attempts in OAuth and other areas to then bind the issued tokens to that key, and we generally refer to those as PoP tokens or sender-constrained tokens or something like that. But the idea then being that there's something in the token that says: I'm more than a bearer token; in order to accept me as good enough,

Brian Campbell 00:14:41 you also have to ensure that whoever's showing up with me proves possession of this associated key. And what that does is prevent the token from being used by someone who does not possess the key, and in turn prevents the kinds of replay attacks we've talked about, assuming it's all implemented and done correctly, unless the key too is somehow stolen. But typically keys are treated more securely, oftentimes even in hardware, non-exportable; it's much, much less likely for those keys to leak. They're not sent over the wire. So the opportunity for that kind of compromise is much lower than compromise of the actual token itself. And by combining some proof-of-possession of the key with a binding of that key to the token, you're able to defend against not the theft of tokens, but the use of the tokens in some kind of malicious way after the fact.
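
Concretely, a key-bound (PoP) token differs from a bearer token by carrying a confirmation of the key it is bound to. Here is a sketch of what such a payload can look like, using the cnf (confirmation) claim defined in RFC 7800; the values are made up.

```python
# Illustrative payload of a sender-constrained (PoP) access token.
# The "cnf" (confirmation) claim is defined in RFC 7800; a plain
# bearer token simply has no such member.
pop_token_claims = {
    "iss": "https://auth.example.com",
    "sub": "user123",
    "exp": 1718000000,
    "cnf": {
        # Hash of the certificate (or key) whose possession the presenter
        # must prove; without the matching key, the token is useless.
        "x5t#S256": "bwcK0esc3ACC3DB2Y5_lESsXE8o9ltc05O89jdN-dg2",
    },
}
```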

Brian Campbell 00:15:42 And it all sounds nice, but it turns out that it's pretty difficult to do reliably, and there have been a number of different attempts to do something like that. As you mentioned, OAuth1 didn't have exactly that in it, but it had a mechanism where it combined a sort of bespoke signature over the HTTP request with the token and a client-held secret, which gave you something like proof-of-possession of that client secret. That proved very, very difficult to implement correctly, not so much because of the signature itself, but because of the need to normalize the input into the signature; trying to normalize HTTP requests turns out to be a really, really difficult problem that's hard to get right, and so there were lots of nitpicky kinds of interop problems around trying to do those signatures. There have been a number of different attempts at doing it.

Brian Campbell 00:16:41 You mentioned the Token Binding protocol, which did become an RFC, and there are a couple of other related RFCs that went with it, which was a sort of novel and, for a while, promising effort out of the IETF, including some very major players in this space. Ironically, not to actually bind tokens, but to provide a mechanism for proving possession of a client-generated key pair using both TLS and HTTP, in a way where the use of this protocol was negotiated in the TLS handshake. And then an HTTP header was sent on every request that included a signature over the exported key material from the TLS layer, which is a weird violation of layers, but a nice tight binding between the two of them as well. And so basically you were proving that the client possessed this key pair over this TLS connection and the associated requests on top of it.

Brian Campbell 00:17:44 And then in turn the idea was that applications at the next layer, OAuth for example, could bind their issued tokens to the token binding key pair provided by the lower layers. And there were many people too that were envisioning binding their session cookies to those protections as well. And the way that it worked at the different layers was sort of promising, because it was a somewhat novel approach to providing this. And it was based on some work that Google had done previously around channel binding and some other things in their browser with some experimentation. It was certainly an attempt, at least, to provide the lower layer of infrastructure for doing proof-of-possession type work. The RFCs were published out of that working group, but there were a number of things that led to basically just non-adoption of it.

Brian Campbell 00:18:36 And while they are standards, they aren't actually widely available, or that's an overstatement: they're really not available in practice today in any platform or browser or really anywhere. So it's unfortunately one of those standards efforts that just didn't take in the long run, and the world is certainly littered with standards that didn't actually get implemented. Token binding, unfortunately, I think was one of those. But it is demonstrative of the difficulty in actually making this work in a standardized way for everyone, how difficult the problem itself can be, and the efforts that have gone into trying to find some solution for it over the long run.

Priyanka Raghavan 00:19:14 This is quite insightful, actually. One of the things I wanted to ask you about was mutual TLS, which we hear a lot about in the service mesh world. Did that inspire you, I mean, I guess the group, to think about using this on top of OAuth2, which is of course widely popular? Maybe you can just dial back a bit and give us one or two lines on MTLS, and then why did you decide to tie that in for this proof-of-possession?

Brian Campbell 00:19:39 Yeah, let me try to do that. So TLS, as I'm sure most of your listeners know already, is the secure transport protocol that underlies HTTPS, and we use it all the time. And it's how websites authenticate themselves to us using the web browser. So during the TLS handshake, when the connection is set up, a bunch of cryptography goes on, including the presentation of a certificate that says who the website is, and that's how we authenticate the sites that we're talking to. And that's sort of normal TLS, but TLS also provides an option for the client to provide a certificate during the handshake and prove possession of the associated private key. So it's not just sending a certificate; it's sending a certificate and signing bits of the handshake to prove that it possesses the associated private key. It's typically then used in a manner to authenticate the client, but it is also a proof-of-possession mechanism for a public-private key pair as well.
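
As a rough illustration of what "mutual" adds on the server side, here is a minimal sketch using Python's standard ssl module: the only change from ordinary TLS is requiring, and verifying, a client certificate during the handshake. The file names and port are placeholders.

```python
import socket
import ssl

# Ordinary TLS server context: the server proves its own identity.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")

# Made mutual: demand a client certificate in the handshake and
# validate it against a bundle of trusted CAs.
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.load_verify_locations("trusted_client_cas.pem")

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()   # handshake happens here
        # DER-encoded client certificate; the handshake itself proved
        # the client holds the matching private key.
        client_cert = conn.getpeercert(binary_form=True)
```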

Brian Campbell 00:20:43 And that long history of trying to do some kind of proof-of-possession in OAuth and other related identity protocols fell in conjunction with a number of regulatory pushes in various areas, largely but not exclusively coming out of Europe, that were demanding that big banks open up their services as open, or open-ish, APIs to facilitate financial growth and incentivize innovation around using banking APIs for FinTech and so forth. It was coming out of government regulation basically saying: do open banking, make bank APIs available and open. And as you probably know, banks are rather conservative in their security posture, and one of the desires was to have a legitimate proof-of-possession mechanism for the presentation of OAuth tokens to those open banking APIs. All the open banking, not all, most of it, was based around OAuth for the issuance and consent and delivery of the tokens, but they also wanted more than bearer.

Brian Campbell 00:21:55 They wanted a proof-of-possession mechanism there, and this was all happening around the time the token binding working group was working on this stuff. There was a lot of promise there, and folks were interested in it, but it was not mature and ready to be used. And despite all the complexity of proof-of-possession, TLS and mutual TLS are actually a pretty hardened and long-standing mechanism that exists today, with deployments that can interoperate, and that does provide proof-of-possession. And so it made sense, sort of pragmatically, to try to build a profile of OAuth using mutual TLS to achieve some level of proof-of-possession, as well as a higher level of assurance in doing client authentication between the client and the authorization server, and then doing a binding of the tokens to the certificate itself, which gives you the same proof-of-possession properties and so forth.

Brian Campbell 00:22:52 For a long time I called the mutual TLS OAuth work sort of a store-brand version of token binding, because I envisioned token binding as being kind of the cool long-term new way to do it. I didn't realize it wasn't actually going to go anywhere, but I considered the mutual TLS stuff sort of like a short-term pragmatic interim solution to provide for this. And maybe it'll have longer legs because of the way things have happened. But we began work in the IETF OAuth working group to specify exactly how mutual TLS could be used in conjunction with OAuth, or layered on top of OAuth, to achieve bound tokens and client authentication using well-known, existing, deployable technologies today. And it was ratified as an RFC; ratified is not the right word, but I use it here, and it has been used and deployed in a number of those open banking type scenarios that I describe, and more broadly as well. So it provides a workable solution today.

Priyanka Raghavan 00:23:54 Interesting. So the adoption rates are pretty good, is that what you see?

Brian Campbell 00:23:58 Yes, although it remains fairly niche. Mutual TLS is a technology that works and is proven, but it is rather cumbersome to deploy and manage and has a lot of other drawbacks. It's cumbersome to say the least, and its use in conjunction with browsers is rather fraught as well; it has a pretty poor user experience, and so it's often not used with browsers at all. So I guess that's to say it has been used, there is deployment out there, but it's these niche deployments that really had a strong need for this higher level of security. It solved the problem for them, but they're also the kinds of places and institutions that can afford the investment to manage this harder, more complicated, more cumbersome deployment of MTLS.

Priyanka Raghavan 00:24:48 Sure. So what you're saying is that if you were to use OAuth2 MTLS in a browser, then the user experience is probably not as smooth as the OAuth we're used to?

Brian Campbell 00:24:57 Yeah. It's worse than not as smooth, to the point where it's almost unusable. So unless you're in, I think, a constrained enterprise environment where maybe the enterprise is provisioning certificates out to your machine and all that is sort of taken care of for you, the user experience with MTLS sort of on the open web in a random browser is just prohibitively difficult. And it presents the users with selection screens around certificates that are confusing and meaningless even to people who spend time with this stuff and kind of know what it means, and it's just really a non-starter for kind of the average user. It's just not a viable solution for anything where the OAuth client itself is running in the web browser, or for that matter for anything where the web browser itself is asked to provide a client certificate. So you can still use mutual TLS in cases where the sort of server-to-server componentry is doing all that, and the end-user interface stuff is presented via normal HTTPS. But anytime you want to move the client authentication into the web browser, it's just really a non-starter for most cases.

Priyanka Raghavan 00:26:16 I was going to ask you something else, but something struck me now: one of the things that we do with these service-to-service calls is use this thing called the client credentials flow, right, in OAuth2. So maybe this is a place where the OAuth2 MTLS could come in, when you're trying to do something really secure, like what you're saying with banking transactions?

Brian Campbell 00:26:33 Yeah, it's one option. As you know, there are a lot of different grant types and ways to obtain tokens in OAuth, but client credentials is one where there's not really a user involved; it's just one system getting a token from the other system. And that's typically used where the client system is an actual website. So yes, it would be appropriate there for that client website to use mutual TLS as its client credentials, to authenticate with the authorization server and get a token issued for it. But you can also use mutual TLS OAuth in cases like the canonical case I described before, where the user is bounced around through a browser, but the client itself is a website. So the browser presents a normal TLS connection to the end user, but the communication between the client website and the authorization server website and the resource server website is all done with mutual TLS. So anytime it's server to server, mutual TLS works okay. It's when that connection bleeds over into the web browser that it becomes problematic from an experience standpoint.
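
As a sketch of that server-to-server case: with the Python requests library, the client certificate is supplied via the cert parameter, so the TLS handshake itself authenticates the client-credentials request in the style of RFC 8705. The endpoint, client ID, and file names are hypothetical.

```python
import requests

# Client-credentials grant authenticated by mutual TLS; there is no
# client secret, since the TLS client certificate is the credential.
resp = requests.post(
    "https://auth.example.com/token",
    data={"grant_type": "client_credentials", "client_id": "s6BhdRkqt3"},
    cert=("client.crt", "client.key"),   # presented in the TLS handshake
)
access_token = resp.json()["access_token"]

# API calls also go over mutual TLS, so the resource server can compare
# the presented certificate against the one bound to the token.
api = requests.get(
    "https://api.example.com/accounts",
    headers={"Authorization": f"Bearer {access_token}"},
    cert=("client.crt", "client.key"),
)
```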

Priyanka Raghavan 00:27:39 So I wanted to ask you two things from the spec. When I looked at it, it looked like there are two flavors of client authentication. One was that you could use the regular PKI, which we all know about, and then there was the self-signed certificate. So maybe you could tell me a little bit about this self-signed certificate option. Is it just the thing that we usually do, where the client has a self-signed certificate? And is there a lot more work involved there versus using PKI?

Brian Campbell 00:28:10 The idea was to provide two different ways of doing it, to try to accommodate different deployments and maybe reduce some of the difficulty, not with the browser issues and usability, but with the deployment and management of a TLS and PKI infrastructure. So with the PKI-based approach to authentication, you have your client configured or set up in your authorization server, and you say something about the subject that you expect it to authenticate with through mutual TLS. And then during the TLS handshake, the certificate is validated up to a trust anchor, and if the certificate contains that particular subject, in whatever form, then that's considered valid, because you have both who the subject is and the fact that this whole certificate chain was issued by a trusted authority. Which works; that's kind of how we normally think about TLS and PKI. But with the self-signed option, we wanted to give an option where the certificate itself was really just sort of wrapper metadata around a key pair.

Brian Campbell 00:29:17 And rather than setting up a name that you expect out of the certificate to authenticate, what you do is configure that client with the full certificate, and then during authentication the mutual TLS occurs. In order to authenticate that client, you then have proof that they possess the associated key, and you just make sure that it's the same certificate that you've configured to be expected from them. And by doing this, you sort of provide an alternative path of trust. It's more like just an out-of-band key exchange than reliance on a third-party trust anchor and PKI being set up, and it can be easier to deploy and manage because you don't have to deal with the PKI. You're just dealing with the exchange of certificates, more on like a pairwise basis. It's sort of like saying: this is the client's particular secret, but in this case, this is the client's particular key pair wrapped in this self-signed certificate.
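
Here is a sketch of how an authorization server might distinguish the two modes after the handshake has already proven key possession. The helpers chain_validates and subject_of are assumed placeholders, not real library calls; only the comparison logic is the point.

```python
# Hypothetical sketch of the two RFC 8705 client-authentication modes.

def authenticate_pki(presented_chain, expected_subject, trust_anchors):
    # PKI mode: walk the chain up to a trusted CA, then match the
    # subject configured for this client.
    if not chain_validates(presented_chain, trust_anchors):   # assumed helper
        return False
    return subject_of(presented_chain[0]) == expected_subject  # assumed helper

def authenticate_self_signed(presented_cert_der: bytes,
                             registered_cert_der: bytes) -> bool:
    # Self-signed mode: no chain walking at all. Trust comes from having
    # registered this exact certificate for this client out of band,
    # so authentication reduces to: is this the registered certificate?
    return presented_cert_der == registered_cert_der
```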

Priyanka Raghavan 00:30:14 So, in a deployment architecture, maybe where these services are inside a trusted virtual network or something, I could probably use this kind of scenario, where I don't need to go outside and everything's within my network, and so I could use a self-signed certificate then in the MTLS world.

Brian Campbell 00:30:33 Yeah. But even in an open deployment, the self-signed certificate is sufficient, because the trust is established through the registration of that certificate for that particular client. So it doesn't have to be a closed environment to facilitate it; it's just relying on a little bit different trust model. And then things have to be set up such that your servers will accept any trust anchor; they basically are told to turn off validating the trust anchor. So what it does is it sort of takes away the authentication piece from the TLS layer, because there's no chain walking or trust anchor validation there, and switches it over to really just being a proof-of-possession mechanism for that key during the handshake. And then OAuth layers on top of that and says, okay, great, you've proven possession of the key; is that in fact the key that I'm supposed to get for this client? If so, authentication good; if not, authentication bad. But it moves or changes what it's getting from the TLS layer to just being about proof-of-possession of the key.

Brian Campbell 00:31:38 And then the key itself becomes the authentication mechanism that's compared at the higher layer in OAuth itself. And then, maybe jumping ahead of your next question, I don't know, but regardless of which of those is used, the actual binding of the issued access token takes a hash of the certificate that was presented, regardless of whether it was PKI or self-signed based, and associates that hash of the certificate with the access token. If it's a JWT, it includes that as a claim within the token itself; if it's a reference-style token, it's just stored server side and can be retrieved via database lookup or, commonly, through introspection, which is a standardized way that OAuth exposes for resource servers to find out information about validity and meta-information associated with the token. It really ends up just looking a lot like the JSON payload of a JWT, but it's a different way to obtain it, not in the token itself. But either way, the certificate is sort of attached to the token by binding a hash of that certificate to the token itself.
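
Concretely, the binding value is a hash of the DER-encoded certificate, which RFC 8705 carries as the x5t#S256 member under the token's cnf claim. A minimal sketch of the issuance side, using only the standard library:

```python
import base64
import hashlib

def cert_thumbprint(cert_der: bytes) -> str:
    # SHA-256 over the DER-encoded certificate, base64url-encoded with
    # padding stripped, as RFC 8705 specifies for "x5t#S256".
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# At issuance the authorization server stamps this into the token
# (or stores it alongside a reference-style token):
#   {"cnf": {"x5t#S256": cert_thumbprint(client_cert_der)}}
```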

Priyanka Raghavan 00:32:49 Actually, that was going to be my next question, to ask you how the JWT token structure gets modified. So that's the way, you say: you include a hash of the certificate in the JWT structure. And can you also clarify the introspection call? You're saying that in case you didn't want to do that, then you'd have the introspection call?

Brian Campbell 00:33:12 Yeah, this is more sort of general, base OAuth. There are really two main ways that token validation happens and information from the token is extracted for the resources to use. One is to include it directly in the JWT, and the resource server validates that and extracts the information from it directly. The other method, which is standardized in an RFC, is to do what's called introspection, which is, I guess, sort of a misleading name, but really all it is, is a callback: the resource server receives this token and makes a call to the authorization server that says, hey, is this token valid, and can you tell me what's in it? And the response is a chunk of JSON that, for all intents and purposes, is almost equivalent to what would be the payload of a JWT. It's just a bunch of JSON claims that give information about the token: who the user might be, the client that's using it, any other data that that resource might need based on configuration. But either way, with the certificate binding, there's a hash of the certificate included in the token, and it's either obtained directly from the token or through introspection. It looks the same in the JSON either way; it's underneath a claim that's called the CNF, confirmation, claim.
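
Here is a sketch of that callback, following the introspection shape from RFC 7662; the endpoint, credentials, and response values are illustrative.

```python
import requests

# RFC 7662 introspection: the resource server asks the authorization
# server about a token it has just received.
resp = requests.post(
    "https://auth.example.com/introspect",
    data={"token": "2YotnFZFEjr1zCsicMWpAA"},
    auth=("resource-server", "rs-secret"),   # the RS authenticates itself
)
info = resp.json()
# The response is JSON much like a JWT payload, e.g.:
# {"active": true, "sub": "user123", "client_id": "s6BhdRkqt3",
#  "exp": 1718000000, "cnf": {"x5t#S256": "bwcK0esc3ACC..."}}
if not info.get("active"):
    raise PermissionError("token is expired, revoked, or unknown")
```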

Priyanka Raghavan 00:34:35 CNF?

Brian Campbell 00:34:36 CNF, short for confirmation. And then, this is getting into some of the minutiae of all this, but there's a CNF with something under it, that's the x5-something, I can't even remember, but it's an indicator that this is the hash of the X.509 certificate. And so ultimately the resource either gets that directly from the JWT or through introspection, and then it's expected to compare that certificate hash to the certificate that was in turn presented to it during the mutual TLS connection from the client making the API calls. And that's what does the associated check for proof-of-possession: the mutual TLS proof-of-possession of the key, and then the check of the hash proves that this token was issued to the holder of that key itself. And there you get the proof-of-possession check on the token. The other side of that being that if you didn't have the TLS key, you couldn't make that connection. And so if you try to present that token without that key, or with a different key, the certificate hash check would fail and you could reject that token, thus preventing so-called replay by asking for proof-of-possession, using a lot of the same words over and over again.
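
Putting the resource server's side of that check into a short, self-contained sketch: hash the certificate presented on this mutual-TLS connection and compare it, in constant time, with the value bound into the token. (The claim name he is reaching for is x5t#S256, per RFC 8705.)

```python
import base64
import hashlib
import hmac

def binding_ok(token_claims: dict, presented_cert_der: bytes) -> bool:
    # Thumbprint of the certificate actually presented on this
    # mutual-TLS connection.
    digest = hashlib.sha256(presented_cert_der).digest()
    actual = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    # Compare against the thumbprint bound into the token at issuance;
    # compare_digest keeps the comparison constant-time.
    expected = token_claims.get("cnf", {}).get("x5t#S256", "")
    return hmac.compare_digest(expected, actual)

# If this returns False, the token is being presented without the key
# it was issued to, and the request should be rejected.
```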

Priyanka Raghavan 00:35:55 To me, the story now seems very beautifully complete, like a circle. And just to kind of reiterate, now I can see why it's becoming expensive, because with every one of these calls, you would have to do this check as well. Is that something you'd like to talk about, the expensive part of the security? I think you've already addressed part of it, because that's the reason it's only used in certain domains. But when I'm designing an API spec, should I be looking at places where there's more chance of data leakage, or something that I really need to protect, and that's where I would use the OAuth2 MTLS?

Brian Campbell 00:36:32 So, the value of OAuth2 MTLS is really protecting against the use of leaked or stolen tokens. So yes, whatever your API is, it's subjective, but if you consider it high value, if it's something that's really important to protect against malicious usage, then something like OAuth MTLS prevents access to it even if those individual tokens are somehow leaked or stolen or whatever. And because of things like I said earlier, banking is one area that's considered fairly high value, so that was an area where it made sense to apply it. But there are certainly others, and it's a reasonable solution to prevent that kind of malicious reuse of tokens, no matter how they may have leaked. From a cost standpoint, I think the main cost comes in sort of getting it up and running and the maintenance of the mutual TLS infrastructure itself.

Brian Campbell 00:37:33 It's just proven to be not trivial over time, and maybe someone will come along and solve that, but I'm not aware of many people that have. In terms of per-transaction or runtime cost, it's not particularly more expensive, because the costly operations occur during the handshake. That's where the proof-of-possession of the keys is occurring, and the more expensive cryptographic operations, which are the public key operations, occur at the handshake. After that, it's more or less just normal TLS. And while you do need to do the hash check against the certificate on each call, that is itself relatively inexpensive: you just hash something and compare hashes. It needs to be constant time and all that, but it doesn't add much cost overhead on a marginal, case-by-case or transaction-by-transaction basis. The cost is really more in the overall design and deployment and maintenance of the system.

Priyanka Raghavan 00:38:32 So the responsibility of the validation is sort of at the time of the handshake, and then, yeah.

Brian Campbell 00:38:38 Yeah, it's split, but the expensive part of the validation occurs at the handshake, and the secondary, cheap check occurs on the token validation, where you're just comparing a hash to make sure the certificate on the underlying connection presented by the client matches the one the token was issued to. But that, again, is relatively inexpensive.

Priyanka Raghavan 00:39:01 I think that's a good segue into the next part. I wanted to ask you a little bit about demonstrating proof-of-possession at the application layer, the DPoP, which I didn't really do much research on, but I just wanted to ask you about it. What is that?

Brian Campbell 00:39:14 Yeah, so it's yet another attempt at defining a proof-of-possession mechanism, but it is one that's on the track to becoming an RFC within the IETF. And it was really born out of some of the limitations and difficulties around using MTLS for this stuff, as well as watching the demise of the token binding work, where a lot of people had placed their hopes in being able to use that for applications in OAuth. With those things sort of being unavailable or too niche for deployment in a lot of cases, including within the browser, where as we talked about before MTLS doesn't work very well, some of us got together and began working on a proof-of-possession type approach that could be done, as the name implies, all at the application layer. So rather than relying on lower layers like TLS, it's using signed artifacts passed around at the HTTP layer.

Brian Campbell 00:40:16 And I don't know how much detail I want to get into here, but basically with DPoP there's a mechanism where the client signs a JWT that ultimately tries to prove possession of a key pair, similar to many of the things we've talked about here, but it does it by signing a JWT that is nominally related to that specific HTTP request. So there's a JWT that includes the public key; it includes the URI where the HTTP request is being sent; some timestamp information; and some other things to sort of show that it's fresh. But the end result is that the receiving server can validate that and have some reasonable level of assurance that the client sending that HTTP request also possesses the private key for the public key referred to in the request itself. And then that JWT, which is just sent as an individual distinct header, surprisingly called DPoP because we're great with names, provides the proof-of-possession mechanism, which in turn OAuth uses to bind tokens to the associated key, using very similar kinds of constructs as the mutual TLS stuff.

Brian Campbell 00:41:28 But instead, here it uses a hash of the public key rather than a hash of the certificate. And then on API-type requests, the same header is sent in conjunction with the access token. So you get some proof-of-possession of the key in that header, and you get a token that's bound to the key. So there's the same kind of check between the hash of the key in the token and the key that was presented, which ultimately is a mechanism that prevents that token from being used unless it's also accompanied by this DPoP header, which in turn shows that the calling client possesses the key. It prevents misuse, or use of tokens by unauthorized parties, in very much the same way as the mutual TLS stuff does, but it does it all, which is where the name derives from, at the application layer, or at least at the HTTP and OAuth application layers, by using these signed artifacts rather than relying on the lower layer of TLS. And it also then avoids things like the problematic user interface experience in a browser with mutual TLS. It's much more suited for that kind of deployment because it doesn't run into those kinds of issues.
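
For a feel of the artifact itself, here is the shape of a DPoP proof as laid out in the DPoP specification (an IETF draft at the time of this episode, since published as RFC 9449). The key material and hash values are illustrative, and a real client would sign this with a JOSE library.

```python
# Header of the DPoP proof JWT: its own type, the signing algorithm,
# and the client's PUBLIC key embedded as a JWK.
dpop_header = {
    "typ": "dpop+jwt",
    "alg": "ES256",
    "jwk": {
        "kty": "EC", "crv": "P-256",
        "x": "l8tFrhx-34tV3hRICRDY9zCkDlpBhF42UQUfWVAWBFs",
        "y": "9VE4jf_Ok_o64zbTTlcuNJajHmt6v9TDVrU0CdvGRDA",
    },
}

# Payload: ties this one proof to one HTTP request, freshly.
dpop_payload = {
    "jti": "e1j3V_bKic8-LAEB",                  # unique ID, for replay checks
    "htm": "GET",                                # HTTP method of this request
    "htu": "https://api.example.com/accounts",   # target URI of this request
    "iat": 1718000000,                           # freshness timestamp
    # On resource requests, the proof also binds to the access token:
    "ath": "fUHyO2r2Z3DZ53EsNrWBb0xWXoaNy59IiKCAqksmQEo",  # hash of the token
}

# The request then carries both artifacts:
#   Authorization: DPoP <access_token>
#   DPoP: <signed proof JWT>
# and the access token's cnf claim holds "jkt", a thumbprint of the
# same public key, completing the binding.
```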

Priyanka Raghavan 00:42:42 That's very interesting, and I can also see the use as well. The other question I wanted to ask you was about token revocation. Does anything change there because of using these protocols? Although, I think these tokens are not long-lived anyway, right?

Brian Campbell 00:42:59 They're typically not long-lived. All the issues of token revocation versus length of token lifetime, and how revocation might be understood, are really unchanged; they remain potential challenges in your deployment. Many people, in fact, use the introspection I was talking about before as a mechanism to also check revocation, because when you have a JWT, it's all self-contained, so there's no way to know that it has been revoked without doing something else. Introspection gives you a way to check back in with the authorization server to find out if it's been revoked. It's a whole topic with tradeoffs on its own, but the PoP tokens don't change the equation in any way; there's nothing additional required to revoke them or to find out that they've been revoked. I suppose it only changes things a little bit in that the need to revoke them may be less, because they're also bound to these keys. So a compromise of a token isn't as serious if they're PoP or key-bound, because they can't be exploited, because of that binding. So in many cases the need for revocation, I guess, would be somewhat reduced. I don't know; I don't want to give license to not revoking at all or to extremely long token lifetimes, but it does present additional guards against the reasons you might typically need to do that.

Priyanka Raghavan 00:44:32 Yeah, I think that makes sense. I was just a little bit stumped by that, but yes, I think that does make sense. I guess now that we've gone through a lot of this, I wanted to use the last bit of the show to talk a little bit about the future of OAuth2. I do see a lot on something called the Grant Negotiation and Authorization Protocol, GNAP? Is that how they pronounce it? What is that, is that something you could tell us about? Is that the future of OAuth2?

Brian Campbell 00:45:02 I can tell you that I think they’ve agreed on a pronunciation that has sort of a G on the front of it. So, it’s more of a Ga-NAP.

Priyanka Raghavan 00:45:09 Ga-NAP.

Brian Campbell 00:45:10 And you had mentioned Justin earlier, having talked about OAuth. GNAP is a work effort within the IETF that is, I think, in many ways an attempt to re-envision and redesign and rebuild OAuth from the ground up, and it's something that Justin's been heavily involved in and pushing for. It is explicitly not OAuth, and the OAuth community, for whatever that is, is continuing to work on OAuth as OAuth and has stated that GNAP is not OAuth3, although it does attempt to address many of the same kinds of problems. So there's certainly a relation there, but it is, I guess, an independent effort toward some of the same ends. That maybe clarifies it a little bit, but yeah, it does try to do a lot of the same stuff. Almost think of it as a ground-up rewrite of OAuth, which, depending on your perspective, may or may not be necessary or the right use of time and resources, but that's what it is. So it's not OAuth, it's not an evolution of OAuth; it's sort of a new take on OAuth from the ground up.

Priyanka Raghavan 00:46:26 So the other thing I wanted to ask you about: I was reading about this thing called macaroons, Google's macaroon tokens. Is that something you're familiar with? What is that? Is there a future in that?

Brian Campbell 00:46:39 I'm vaguely familiar with it, so probably not in a place to give you any real authoritative answer, but it's sort of a different take on tokens, as I understand it. It allows what I think they call caveats to be applied to a token by the user, which sort of constrain what it can do. That solves some problems similar to key-constrained or PoP tokens, but it's also very different in that you could, like, add a caveat before you send a token, which would keep the receiver of that token from turning around and using it at its full power, which is one area where PoP tokens also prevent that kind of usage. But the token itself is still un-caveated or unrestricted beyond how it originally was in the possession of that client, so it's not as effective at mitigating the kinds of theft and replay attacks from the client directly.

Brian Campbell 00:47:38 I know there are some people that have explored the use of macaroons in conjunction with OAuth. I don't foresee a really widespread acceptance and usage of that, but I could certainly be wrong. And they do have their place; they get used in other contexts. But they're subtly different enough, in the kinds of problems that they solve and how they do it, that I don't know that it's an easy jump to sort of drop them in and use them to solve these kinds of problems in the OAuth context. And for that reason, I don't know that there's a large future there, though there likely is elsewhere. It's an interesting technology that provides some valuable constructs, but their applicability here is not quite what's desired.

Priyanka Raghavan 00:48:24 Another thing that I wanted to ask you about the future: OAuth2 is different from OAuth1 in that it talked about the needs of native clients; it acknowledged that. But what is going to happen in the future? Are we going to start moving away from all these redirects? Is the protocol going to change at the application layer, so that we just stop seeing redirects, because you're not going to be thinking only about browsers as we go more native?

Brian Campbell 00:48:49 That's a great question, and I don't have the answer for sure. I will say that with a lot of native applications these days, jumping between the native applications actually occurs through browser redirects anyway, still HTTP and HTTP redirects, where instead of running through the browser, the operating system is picking those up, based on what are called claimed HTTPS URIs or other, I don't know the exact names, and rather than invoking that HTTP request, it invokes the handling of it and sends it to the local application on that behalf. So the constructs continue to use the same mechanisms. I don't think it's going anywhere anytime soon, but we are seeing pushes from browsers to tighten up privacy, which may impact the kind of data that is shared, or can be shared, across redirects. We're seeing some momentum behind different kinds of ways to present credentials that may localize it more, in ways that don't require redirects. So that's a lot of words to say: I don't really know.

Priyanka Raghavan 00:49:57 Okay, fair enough. This has been great. I just want to sort of end with maybe some advice for our listeners, or more than advice. Maybe I could just ask: how do you see this whole journey, OAuth2, evolving in the future? Is there a definite direction that you see, are people thinking about stuff that might change, or do you think it's just going to be improvements over things which are already there?

Brian Campbell 00:50:24 I tend to be sort of an incremental-improvement kind of person, so I would lean in that direction in general. I will say OAuth2, for all its success and usage, is a bit of a mess. It can be complicated, hard to understand; there are some problematic things in it. And there's a metric ton of different standards that actually comprise OAuth2 and its various extensions. So I think that's going to continue; I think there will continue to be incremental improvement work. But there is some work underway: in particular, there's an effort around defining OAuth 2.1, which is aimed at sort of consolidating some of the many specs that comprise OAuth 2.0, adding or clarifying some best practices, and removing deprecated or problematic features, particularly from a security standpoint. So that's one area of active work that's pretty incremental, but I think very pragmatic, at trying to clean up, simplify, and make more accessible the stuff that we have now. But in general, OAuth2 is widely used, and it continues to be pretty successful despite problems. I think that's typical of just about any successful standard, and at least in the near term, I think the efforts we'll see will be continued refinements and improvements around 2.1, and maybe extensions such as DPoP to accommodate more niche or higher-value or different use cases, but nothing really revolutionary, more incremental-type improvements going forward.

Priyanka Raghavan 00:51:58 That’s perfect. This is great, Brian. Before I let you go, is there a place where people can reach you? Would that be Twitter or LinkedIn?

Brian Campbell 00:52:08 I'm not great about any of that, but I think you finally tracked me down on Twitter, right? So yeah, that would probably be the best place to track me down. I have an interesting handle; with a name like Brian Campbell it's hard to get a unique handle in places, but it's two underscores, __B_C, on Twitter.

Priyanka Raghavan 00:52:28 I will definitely add that to the show notes. And thank you so much for coming on the show. And might I add that I feel like I've learned a lot, and I'm thinking about APIs or services that I want to protect with OAuth2 MTLS, and I hope it's the same for our listeners. So thank you so much.

Brian Campbell 00:52:46 Oh, you're more than welcome. Thanks for having me on. And I do hope it's been somewhat informative and not too boring or too much minutiae. It's hard; we get into the weeds with some of this stuff. I appreciate you saying that.

Priyanka Raghavan 00:52:58 Yeah, this is great. Thank you. And this is Priyanka Raghavan for Software Engineering Radio. Thanks for listening. [End of Audio]


SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
