In this episode, host Nikhil Krishna speaks with John deVadoss of NEO Global Development in Seattle about his previous work in the .NET Patterns and Practices and Azure teams at Microsoft. They dive into the software design philosophies that drove these large software development efforts, including the loose-coupling approach that was adopted when building .NET. John introduces an interesting mental model, called “Fiefdoms and Emissaries,” which was applied in Azure development, where the concept of a fiefdom was used in determining the bounded context for services. The discussion explores how this philosophy should be applied to service interfaces, which deVadoss recommends should be versioned rather than changed, and then considers the concept of an Agent, which is a type of Emissary, as contrasted with proxies. Finally, they discuss service orchestration and the challenges of dealing with errors, compensating actions, and rollbacks.
- Autonomous Computing (short version)
- Episode 520: John Ousterhout on A Philosophy of Software Design
- Episode 495: Vaughn Vernon on Strategic Monoliths and Microservices
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Nikhil Krishna 00:00:17 Hello and welcome to Software Engineering Radio. This is your host Nikhil Krishna. I have with me today John deVadoss. John leads development for the NEO Global Development Practice in Seattle. He previously worked at Microsoft, where he built the .NET Patterns and Practices and the .NET architecture strategy groups and incubated Microsoft Azure. John launched two machine learning startups, and more recently, he’s a board member at the Global Blockchain Business Council and the founder and head of development at NGD Enterprise, focusing on building blockchain development tools. John did his PhD work in machine learning, specializing in recurrent neural networks. So welcome to the show, John. It’s a great pleasure to talk to you. Is there something in your bio that I missed that you’d like to add?
John deVadoss 00:01:15 Thank you, Nikhil. Thank you. First of all, I want to say thank you for giving me this opportunity. Thank you for giving me time, and thank you for the very kind introduction. No, I think there was enough in there, so I will not add any more. Thank you.
Nikhil Krishna 00:01:30 Cool. So, the topic for today is an interesting one. It’s about design philosophies, and as we discussed in your introduction, you’ve worked in pretty large enterprises, in pretty large departments. Obviously, there are design philosophies that you have encountered and embraced, and one of the ones we are going to talk about today is the set of design philosophies that drove .NET and Azure development. So, one underlying pattern that you mentioned was the concept of loose coupling, which you said was one of the driving patterns for .NET and Azure platform engineering. Perhaps for the audience, could you tell us what loose coupling is in your mind, what its advantages are, and why you adopted it at Microsoft?
John deVadoss 00:02:19 Certainly Nikhil. Yes, thank you. I’ll start with a little bit of history, historical timelines, going back perhaps to the mid to late nineties, the early days of service-based or SOA architectures and kind of the early, very early days possibly of even what you might call cloud computing. At that time systems primarily were built in a monolithic fashion, monolithic meaning sort of single system, single app, up and down all the way. What we were trying to do with .NET back in ’99, early 2000, even long before it was called .NET, was to break away, to first chip away and break down these monolithic systems. The idea that systems had to be so tightly interconnected, we believed, and of course now we know in retrospect — you know, at that time was very early in the game — that systems fundamentally should be compositions of services. And the idea behind loose coupling was that these services should be autonomous in their own right.
John deVadoss 00:03:30 And so, if services are autonomous in their own right, then what is an application? An application essentially becomes a composition of one or more of these services. And so, loose coupling as the underlying principle, almost the primary principle, if you will, evolved in the early days of .NET. And of course, it continued to flourish through the Azure incubation, through the Azure development as well. So, I would say much of the Microsoft Dev platform and tools really kind of shifted from a mindset perspective away from monoliths to autonomous services. And of course, loose coupling, you could say was the primary design and architectural philosophy.
Nikhil Krishna 00:04:19 Okay, that’s a great overview. I appreciate the historical context as well. But in your journey towards this particular architecture, could you talk a little bit about the challenges or the pushback that you might have faced? And maybe also comment on the constraints of the system, because every system obviously has advantages and constraints, right? So, maybe you could talk a little bit about that.
John deVadoss 00:04:47 Absolutely. Yes, and I think you make a very good point. The challenges, the constraints certainly, but also the enablers. The challenges, certainly: so much in terms of the mental models, so much mindshare, so much engineering, so much developer share had been focused on single-system monoliths that even the idea, the notion that somehow we could build applications differently, that they could be compositions of services, was an alien concept. And the first, I suppose the obvious, question was: why do we even care? And remember, this was about 20-plus years ago. It was still very early in the days of the internet, and the notion that entities would collaborate, would cooperate, across the internet was still a novelty, right? And that was really the “aha!” as more and more people saw the possibility of connecting across and over the internet. Until then, the intranet was king.
John deVadoss 00:05:51 And so you could argue: why do I care about services, about loose coupling? But once you accepted that connectivity was going to be over the web, that collaboration was going to happen over the web, and that interconnectedness and interoperability were going to be a driving imperative, I would say that’s really what drove the shift. In terms of challenges, the interesting piece, Nikhil, was that there were some people who took it overboard, sort of went extremely overboard. They said everything is a service, everything, right? And so, very interesting as I think about it: you had almost what you might call a microservices philosophy. Every component, people assumed, would become a service. And of course, what happened then was that composition became very expensive, right? So there has to be some fine balance in how you define the boundaries of a service.
John deVadoss 00:06:50 And so, if the assumption is that everything, every component, even every object, becomes a service, then the tax you pay in terms of crossing boundaries, in terms of crossing barriers of trust, right? Becomes quite expensive. And then the compositions of these services sort of collapse under their own weight. And so we actually saw, in fact, I remember, I will not name names, but I remember many applications, within Microsoft but also outside in the industry at large, where people would call me and say, John, you know, this isn’t working. And we’d go in, and I would say, look, you have 74 services, and for every customer use case, if you’re going to invoke and compose across 74 services, this is not going to work for you, right? That was the extreme of going overboard. But along the way came this notion that applications were not going to be your father’s applications: single-system, where everything lived within maybe even one physical infrastructure, right?
John deVadoss 00:07:58 It was loose coupling in terms of services. But the interesting piece also, Nikhil, was the realization, the growing understanding and awareness, that loose coupling was also happening at the hardware level, right? Infrastructure architecture versus application architecture. So loose coupling began at the application architecture level but eventually moved into the infrastructure space. And so the notion of abstractions, of virtualization, which eventually led to the idea of cloud computing, also emerged as loosely coupled abstractions of infrastructure as opposed to applications, right? So in some sense, you could argue that this notion of coupling boundaries, of barriers, of clarity of communication and interaction, really drove the application architecture, which is .NET, but also the infrastructure architecture, which was eventually Azure.
Nikhil Krishna 00:08:57 Right. So you illustrated very nicely the challenges of going overboard on one side, right? So then the immediate question that comes to you is: okay, how does one know when you’ve gone overboard, right? Yes. What is too much, right? So how do you develop that sense in this particular …?
John deVadoss 00:09:17 No, that’s a really good question, and I’ll tell you. This is what led us to develop this mental model, this abstraction that we called “fiefdoms and emissaries,” Nikhil.
Nikhil Krishna 00:09:29 Right?
John deVadoss 00:09:30 Fiefdoms and emissaries: very intentionally, we used archaic words, archaic terminology, right? What is a fiefdom? It’s kind of like a kingdom, a medieval kingdom. What is an emissary? An emissary is a messenger who helps to transmit messages across fiefdoms, right? And the notion of thinking about fiefdoms was that services at the right level of abstraction, at the right level of granularity, ideally would be fiefdoms, meaning there had to be a need to establish a boundary, a need to establish a border, and a need to think in terms of messaging as the primary mechanism of communication. And if the service abstraction did not qualify in terms of this notion of a fiefdom, perhaps this was really a component, or maybe even an object, but certainly not a service. And it helped to establish this thinking around the design patterns of, like you said, what is the right level of granularity, right? And of course, there was no science, and I would say even today there is probably no science, but there was this notion of established boundaries and the need for boundaries, like a moat — you know, fiefdoms had moats around them, right? Does your service really need the abstraction of a moat? Then maybe it is a service. If not, then maybe go rethink it: is it really an object, or even a class, for example, right?
Nikhil Krishna 00:11:06 Correct. Yeah. That’s a great point. One of the ways that I have thought about this was around the idea of bounded context, which is a concept that comes out of domain-driven design, right? Yes. Yes. The very famous book from Eric Evans. So obviously that is easy from a business perspective. When you’re building a business application, for example an application in the logistics space, the domain is kind of clear. You can say, okay, fine, these are the concepts that are part of that domain. So you can try to create a context around, I don’t know, an airplane, and a context around a ship, or whatever, right? But I would imagine that that’s kind of different when it comes to platforms like Azure and .NET, right? Because it’s not a business application per se; it’s something that supports an application. So how did you think about that, for example, for Azure? How did you come up with the domains for it?
John deVadoss 00:12:06 So, brilliant question. Absolutely brilliant question. And you’re right in terms of domain-driven design and the famous book and the implications, right? The corollaries of how to think about domain objects and such. So for us, at least in terms of our thinking, I would say I was influenced significantly by the economist Ronald Coase, Nikhil. He was a Nobel Prize-winning economist, and he worked on this notion of the theory of transaction costs, right? And the question he asked was: why do corporations exist? Why do business entities exist? Why not just have a free-for-all, a true market where everybody can transact with anybody else as they choose? And his thesis was that businesses, or corporate entities, if you want to call them that, congregate into one larger entity in order to minimize the transaction cost, the cost of transacting.
John deVadoss 00:13:06 And where it became less expensive to take a piece outside, then clearly that piece would go off as offshoring or outsourcing or something else, right? And so we had a very similar economic model, asking: at what point do the transaction costs of spanning these boundaries really cross the threshold? And at what point does it still fall within the threshold where we can say, in terms of abstractions and boundaries, this is acceptable? And that’s why, obviously, at a very high abstraction level, you get compute versus storage, or storage versus bandwidth, and so on, right? Of course, that’s a very high-level abstraction, but you can go
Nikhil Krishna 00:13:50 Absolutely. Yeah.
John deVadoss 00:13:51 One more level down. And I would say that for the very first time, this ability to look at technology through an economic lens, I argue, really happened because of services, because of loose coupling.
Nikhil Krishna 00:14:06 Very interesting approach. Yeah. I can kind of see that, right? In terms of the economic cost of boundaries. But obviously there has to be some measure, or some way to standardize that, because when you’re talking about storage, that’s one dimension, compute is a different dimension, and bandwidth is another dimension, right? I’m sure there was probably some back and forth, and some calibration required, to get to a common economic measure, right?
John deVadoss 00:14:41 Yes, absolutely. You’re absolutely right. There was a lot of debate, a lot of discussion, a lot of, how do we say it, contention, and very hard contention as well, because clearly, for the infrastructure that we were building, the expectation was that millions, tens of millions, of developers would be using it. And so the flaws, if there were any fundamental flaws, especially with economic implications, could be quite expensive for the application developers on top. So you’re right in terms of the lens of economic cost. Also, I would add, again, this mental model of fiefdoms and emissaries really helped us. And if you’ll humor me, I’ll give you a story there as well, right?
John deVadoss 00:15:30 So what happened is that, if you buy into this notion of fiefdoms, or kingdoms, and emissaries, right? The first question that we asked ourselves is: the stuff that’s inside the fiefdom, the data, if you will, how do we think about the data? And this led to very interesting discussions and debates on what is master data versus what is reference data, right? Versus what is single-user data? That was the first big debate: thinking about service abstractions, what kind of data does the service possess or own, right? The second thing was: if services own data, then in terms of communicating across systems, across services, how do we communicate? And this led to the discussion of messaging, right? And what are the kinds of messages? Obviously, idempotency comes in: what happens if you lose a message? What happens if you get the same message three times? Right? And so, obviously, if you have messages, the third question is: what is the identity of the messenger?
John deVadoss 00:16:29 And so the claims: what are the claims, the tokens, that the messenger possesses to be able to transmit the message, right? And the next thing, obviously, if you’re thinking about messaging: of course, you could lose messages, there could be errors, the messenger could be corrupted, maybe even compromised, and so on and so forth. So how do you ensure that you have a certain level of error-correcting messaging? Which then led to business process orchestration and sort of conversational messaging, right? So really it goes back to, I would say, this not very well-known mental model of fiefdoms and emissaries, but almost everything really came out of that model, like I said: the data, the identity, the messaging, the error correction, the fault tolerance, and of course the business process orchestration and more, right? And when we found ourselves unable to precisely delineate these components for a service, the obvious implication was that perhaps this is not a service, right? Perhaps, in terms of how we had articulated it, maybe we should go back to the drawing board, right? I’m giving you a very summarized perspective, but hopefully
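The idempotency concern deVadoss raises — what happens if the same message arrives twice, or not at all — is commonly handled by deduplicating on a message ID at the consumer. The following is a minimal sketch in Python (the class and names are hypothetical, purely for illustration; this is not the actual Microsoft infrastructure):

```python
class IdempotentConsumer:
    """Processes each message at most once, keyed by its message ID."""

    def __init__(self):
        self._seen = set()   # IDs of messages already processed
        self.processed = []  # recorded side effects, for illustration

    def handle(self, msg_id, payload):
        # A duplicate delivery is acknowledged but not re-processed,
        # so receiving the same message three times is harmless.
        if msg_id in self._seen:
            return "duplicate"
        self._seen.add(msg_id)
        self.processed.append(payload)
        return "processed"

consumer = IdempotentConsumer()
print(consumer.handle("m-1", {"order": 42}))  # first delivery: processed
print(consumer.handle("m-1", {"order": 42}))  # redelivery: ignored as duplicate
print(consumer.handle("m-2", {"order": 43}))  # a different message: processed
```

In a real system the seen-ID set would live in durable storage so deduplication survives restarts; the in-memory set here just illustrates the idea.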
Nikhil Krishna 00:17:53 No, absolutely. Yeah. That’s very useful. And it’s interesting to me that the exercise of thinking about this as a fiefdom or a kingdom, looking at the costs of the transactions going into and out of the kingdom, and trying to figure out whether this is one — that exercise in itself was so illuminating in terms of defining whether this is actually truly a fiefdom, or a service, or not, right? Or …
John deVadoss 00:18:22 …or a component, or even an object or a class
Nikhil Krishna 00:18:24 A class. Yes. Or an object or a class. So in your experience, does this actually work across levels, right? We were talking in terms of very high abstractions of compute and bandwidth and storage and what have you, but can you then take this concept of fiefdoms and emissaries down a level, within the service, to the organization of the service itself? Or is there a breaking point beyond which it’s probably not a good abstraction to use?
John deVadoss 00:18:55 That’s a really, really good question, and a very pragmatic question as well. And I’ll tell you about our debates and discussions and the contention. The philosophy, if I could use the word philosophy, that we took, Nikhil, was to lead with service interfaces, meaning that much of the discussion, the debate, the architecture, if you will, was focused on the service interfaces. Why? Because the thinking was that if we went with the implementation first, we might go too far into it to be able to backtrack and then fix things. Whereas if we could agree on the service interfaces, then, to your point, we retained the flexibility to be able to recompose or decompose service interfaces. So we could say, for purposes of expediency, even though it might violate the architectural principles, or for reasons of economic cost, we could collapse things and say we’re going to have a meta service interface that brings together these second-level service interfaces.
John deVadoss 00:20:03 So that was the compromise we made in order to lead with interfaces, right? And the challenge that I saw, and sometimes still see in practice, is that leading with the service implementation does not give you that flexibility to roll back, right? Or to roll forward, to be able to correct. Whereas with the interfaces you can make trade-offs, and yes, it might not be orthodox, it might not be pedantically correct in terms of the philosophy or the principles. However, like you said, in practical terms it gave us the ability to make the trade-offs. And that’s why, when you look at .NET or even Azure, the APIs, you might say, what were those people thinking?
Nikhil Krishna 00:20:42 Yeah,
Nikhil Krishna 00:20:48 No. Yeah. I would say sometimes not even Azure, right? Sometimes even in the .NET APIs, there are some APIs that are more intuitive than others, to put it …
John deVadoss 00:20:58 Politely. You’re going to say, what was this guy thinking? What was the logic, right? And you might have been swearing as well, but …
Nikhil Krishna 00:21:08 It’s a great point in the sense that business realities are business realities. But when you say you defined the interfaces and looked at the interfaces first, does that mean that once those interfaces were defined, they could not change? Or is it more like, okay, we want to work outside-in, but what happens if you go in and then realize that some assumption you made when you defined the interface is not working out, and you really need to change the interface?
John deVadoss 00:21:41 Yeah. So again, a superb question, Nikhil. Absolutely. I think this is a very, very critical question. So two things I would say. Firstly, try to maintain the principle of: once you publish a service interface, stick with it. Do not change it, but version it if you have to. So version it as a 1.1 or 1.x, or even a 2.0, if you wish. That’s one: being able to maintain the integrity of the contract that you have published, right? The second thing, which I think I hinted at a little earlier, was being able to compose and have higher-level service interfaces, and sometimes lower-level ones. So still maintain the integrity, as far as you can, of what you have published, but for reasons of expediency, and often actually for reasons of developer experience, in terms of syntactic sugar, right?
John deVadoss 00:22:37 To make life a little bit easier, more productive, be able to have compositions, meta service interfaces. But also, the last thing I would say is being able to create proxies or agents, right? Those were, in some sense, a second-level emissary for the service, which enabled the consuming developer to be productive and made life easier for him or her in the process. So we would say: versioning; being able to compose up or down in terms of the meta interfaces; and then, if those didn’t work, being able to publish proxies or agents which, while still working against those published service interfaces, could simplify things and sometimes make life easier.
Nikhil Krishna 00:23:23 Could you elaborate a little, or maybe define, what you mean by a proxy or an agent? Because when you talk to different people about what a proxy is, you get a different answer every time, right? So in your context, what do you mean when you say a proxy or an agent?
John deVadoss 00:23:41 So, certainly, absolutely. You’re right. A proxy, at least for this discussion, and certainly in terms of our philosophy, is this entity that you communicate with in place of the actual destination, the actual destination service. And so again, in terms of the mental model, it was an emissary. It’s an agent, right? An agent that you could talk to, and the agent could then actually communicate with the service, the actual destination. Now, there are multiple ways of thinking about it, right? A proxy could just be a wrapper, right? And sometimes in the tooling, the tools will basically look at an API, in Postman for example, and say, okay, I’ll give you a proxy. And what the proxy really does is give you, let’s say, a Rust client library, or a C# or a Python one, and so on.
John deVadoss 00:24:35 That’s basically the lowest-level proxy, which is just an API wrapper. But the proxies or agents, the agents in particular for us, actually went to a higher level of abstraction where, for instance, in terms of being able to provide reference data to the consumer, the agent had the ability to store a local copy, which obviously was timestamped and verified with the actual service, as an example. Another one was in terms of messaging: being able to do retry, being able to do error handling to a certain extent. In this case you could say, and I don’t want to use the word intelligence, but certainly the proxy, the agent, has more intelligence than just a plain vanilla wrapper proxy, right? And those are interesting, because those are truly the emissaries, because they have some level of autonomy.
John deVadoss 00:25:31 It’s not just the service having autonomy; the agent itself also has some level of autonomy. And then truly, truly, you have this emissary. So I suppose you are right: I used the word loosely, and I shouldn’t use it loosely, because from proxies all the way through to agents, there is a difference in terms of the level of intelligence, but also the level of autonomy. And ideally, agents truly are autonomous. So, no, you’re right. Thank you for asking the question.
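The difference deVadoss draws between a plain wrapper proxy and an agent — local timestamped reference data plus retry on transient failures — can be sketched like this. All names here are hypothetical, and this is a Python illustration of the idea rather than any actual Microsoft tooling:

```python
import time

class ReferenceDataAgent:
    """A client-side emissary: caches reference data with a timestamp and
    retries transient failures, rather than being a plain API wrapper."""

    def __init__(self, fetch, ttl_seconds=60.0, max_retries=3):
        self._fetch = fetch        # callable that talks to the real service
        self._ttl = ttl_seconds
        self._retries = max_retries
        self._cache = {}           # key -> (timestamp, value)

    def get(self, key):
        now = time.monotonic()
        hit = self._cache.get(key)
        if hit and now - hit[0] < self._ttl:
            return hit[1]          # serve the timestamped local copy
        last_err = None
        for _ in range(self._retries):
            try:
                value = self._fetch(key)
                self._cache[key] = (now, value)
                return value
            except ConnectionError as err:
                last_err = err     # transient failure: retry autonomously
        raise last_err

# A fake backend that fails once, to show the agent's retry behavior.
calls = {"n": 0}
def flaky_fetch(key):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transient")
    return {"country": key, "currency": "EUR"}

agent = ReferenceDataAgent(flaky_fetch)
print(agent.get("DE"))  # retried once behind the scenes, then cached
print(agent.get("DE"))  # served from the local copy, no backend call
```

The consuming developer only ever calls `agent.get`; the retry and the cached copy are the "level of autonomy" that distinguishes this from a generated wrapper.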
Nikhil Krishna 00:26:01 Yeah, yeah. This is quite fascinating. So obviously one of the constraints, or one of the challenges, at least in my mind, when it comes to service-oriented architectures, or this emissaries-and-fiefdoms context where you have multiple bounded contexts, is that oftentimes you have business requirements that require you to sequence or orchestrate multiple services together. A classic one would be a transaction, right? You need the order to be created as well as the inventory to be lowered, and you want a notification to be sent. I mean, there are three or four things that you want to happen together, and if one of them fails, you want it all to roll back. How is that concept actually dealt with in fiefdoms and emissaries?
John deVadoss 00:26:57 Oh, so again, thank you. This is a really good question. So first off, I want to say something which I’m sure you’ll know, which we all agree on: in the world of loosely coupled systems, or distributed systems, life on the happy path is beautiful.
Nikhil Krishna 00:27:14 Absolutely. Yeah.
John deVadoss 00:27:16 We know this, right? Anytime something breaks, there’s an error somewhere, debugging is a nightmare.
Nikhil Krishna 00:27:23 Absolutely. I sometimes think, you know, some of the DevOps roles and the SRE roles are paid just because of that particular problem.
John deVadoss 00:27:34 Yes. It’s a nightmare, right? Because you can’t make head or tail of this thing. Where do you start? Where do you stop? Right? So the reason, again, we liked, or used, the emissary mental model, Nikhil, was to be able to think in terms of the lowest level of abstraction of communication, which is a message. And even there, it could be a one-way message, could be fire-and-forget, could actually be a request-response, but still breaking it down to the most fundamental unit, which is just one message, right? And so this helped us to think through: what is a conversation? Like you said, any kind of meaningful business activity is going to be a composition of multiple services, some kind of an orchestration, or even a conversation, if you will. So being able to map those lowest-level, first-class messages, and then to define, to delineate, the orchestrations, the conversations, helped us to think through error handling.
John deVadoss 00:28:34 And really, that was the primary focus for us: how can we make it easier? It is never going to be truly easy, of course, we know that, but how can we make it easier for the developers, for the application architects, to be able to correlate? Obviously, correlation is a critical piece here, right? How do you know this error goes back to these three? So being able to deal with messaging at that level of abstraction, I think, really was pivotal. Otherwise, if we had come top-down, thinking through the orchestrations, then it’s all or nothing, right? You have to think of rollback in terms of the whole mega conversation. Whereas thinking in terms of emissaries, of messages, you can, and oftentimes you do, roll back in terms of messages, which is a lot less expensive in terms of time, money, and other factors, as opposed to thinking of rollback as all or nothing, which in the world of business becomes prohibitively expensive. Right?
Nikhil Krishna 00:29:36 Right, right. This is an interesting idea. So essentially what you’re saying, as far as I understand it, is that we focus on making sure that the messaging between the services itself is sufficiently well-defined to be able to handle all the error cases. So that when you actually have, maybe, a client that sends a sequence of actions to be taken, or you call a meta service that calls multiple other services, there is enough information in the responses to be able to take corrective action in case of an error.
John deVadoss 00:30:14 Absolutely. You’re absolutely right: think of a conversation as a composition of messages rather than as a first-class thing by itself. And like you said, it helped us then in terms of both the developer abstraction and the tooling, but also in terms of the conversation abstraction, to deal with it, like you said, to localize it at the messaging level, right? And our thinking was that if we could get the messaging abstractions right, then the developers would benefit. And eventually, if you remember back in the day, you had things like BizTalk orchestration and SQL Server Service Broker and so on, which were all dealing with abstractions at a higher level. And because the underlying infrastructure was more or less well defined, it made our life easier when we went on to build BizTalk or Service Broker, and eventually things like Azure as well, in terms of orchestration. Yes. So, very good summary, thank you.
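The correlation idea discussed here — knowing which errors belong to which conversation — usually comes down to every message carrying both its own ID and a shared conversation (correlation) ID. A minimal sketch in Python, with all field names hypothetical:

```python
import uuid

def new_message(conversation_id, step, body):
    # Every message carries its own ID plus the conversation (correlation)
    # ID, so an error can be traced back to the exact step that produced it.
    return {
        "message_id": str(uuid.uuid4()),
        "conversation_id": conversation_id,
        "step": step,
        "body": body,
    }

def trace(log, conversation_id):
    # Reassemble a single conversation out of an interleaved message log.
    return [m["step"] for m in log if m["conversation_id"] == conversation_id]

conv = str(uuid.uuid4())
other = str(uuid.uuid4())
log = [
    new_message(conv, "create-order", {"order": 1}),
    new_message(other, "create-order", {"order": 2}),  # unrelated traffic
    new_message(conv, "reserve-stock", {"order": 1}),
    new_message(conv, "notify", {"order": 1}),
]
print(trace(log, conv))   # just this conversation's steps, in order
```

With that correlation in place, "this error goes back to these three" becomes a filter over the log rather than guesswork.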
Nikhil Krishna 00:31:08 Right. Okay. So again, just to push on that a little more: suppose you take a concrete example where five actions need to be done in the sequence 1, 2, 3, 4, 5, and as a client I send them: one goes through, two goes through, three fails, four goes through, and five fails. So you have two failures in between, right? As a client, what is the general thinking in terms of the compensating actions to be taken? Was the general idea that you go back and individually roll back each one of them in whichever way, maybe by doing the opposite of the original action? Or was there some kind of common thread that we could use to say, yeah, this particular set of actions worked, but these others did not? I can see it from the .NET or Azure perspective that you might think, hey, that’s one level too high; that’s the problem of the customer, of the business developer, and it varies by business, so we provide the tooling but they’re on their own. But since you were also heading Patterns and Practices, I was curious what your thinking on that was.
John deVadoss 00:32:34 Yes, thank you. You’re right. If you liked Patterns and Practices, I can take some credit; if you didn’t like it, you can give me the blame as well. I spent many years building things like Enterprise Library, the CAB, and so on. But you mentioned a very interesting concept there. You spoke about compensation, and I think you’re right. Another abstraction that is so critical to the underpinnings of services and loose coupling, and even cloud-based architecture, is compensation: being able to compensate. And I would add correlation to that. For us, being able to correlate, whether at the infrastructure level, at the application level, or even, as you said, at the business component level, and ensuring the tooling and the developer experience for correlation, was critical.
John deVadoss 00:33:27 In terms of compensation, and again, you might say this was a good or a bad idea, but the philosophical underpinning was based on emissaries, Nikhil, on messages: be able to say, what are the boundaries of this emissary, this message, and think through, is this idempotent? For example, what happens if this guy shows up twice, right? How do you deal with it? Or the guy doesn’t show up at all, then what do you do? Being able to compensate at that level. Maybe it was a little bit of us taking the easy way out because, you know, at that level, like you said, you can define, you can prescribe, and you can enforce. But of course, as you go up these abstractions toward applications, it gets more and more challenging and certainly much more convoluted. But we felt that if we could establish the boundaries, the barriers, at that level at least, then there was a solid footing, a solid infrastructure. In retrospect, you know, we can debate whether it was the right or the wrong approach, but that was what we were thinking.
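[Editor’s note: the two message-level concerns John names here, idempotency (“what happens if this guy shows up twice”) and compensation, can be sketched in a few lines of code. This is a hypothetical illustration, not any actual Microsoft infrastructure; the `Step` and `SagaExecutor` names are invented for this sketch. It deduplicates messages by ID and, when a step in a sequence fails, runs the inverse actions of the already-completed steps in reverse order.]

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One action in a sequence, paired with its inverse ("opposite") action."""
    name: str
    action: Callable[[], None]
    compensate: Callable[[], None]

class SagaExecutor:
    """Runs steps in order; on failure, compensates completed steps in reverse."""

    def __init__(self) -> None:
        # Message IDs already processed successfully: the idempotency check.
        self.seen: set[str] = set()

    def handle(self, message_id: str, steps: list[Step]) -> bool:
        # Idempotency: if the same message "shows up twice", do nothing again.
        if message_id in self.seen:
            return True
        completed: list[Step] = []
        try:
            for step in steps:
                step.action()
                completed.append(step)
        except Exception:
            # Compensation: undo the work that did complete, in reverse order.
            for step in reversed(completed):
                step.compensate()
            return False
        self.seen.add(message_id)
        return True
```

[A production system would additionally persist the set of seen IDs and a compensation log durably, and the compensating actions themselves would need to be idempotent, since a crash mid-rollback means they may be retried.]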
Nikhil Krishna 00:34:25 No, absolutely. Yeah. I mean, like they say, hindsight is 20/20, and here I am sitting in 2022 critiquing things that were built in 2000, right? And at the end of the day, it worked really well. The proof of the pudding is in the eating, and Microsoft is a trillion-dollar company for obvious reasons, right? So one other aspect: traditionally, one of the usual arguments that some people bring up when you talk about loose coupling and service-oriented stuff versus monoliths is that monoliths, they say, are simply faster, right? Because everything is in the same memory space, you’re calling things immediately and they return very fast. So, given that you were using this for something like Azure, which is infrastructure, core storage and compute and all of that, did you ever worry about the performance aspect?
John deVadoss 00:35:31 Again, a really, really good question. You’re right, you can certainly make the case, and it’s probably true in a good number of scenarios and use cases, that single-system monoliths, in terms of response time and possibly latency, could certainly be much more responsive than a loosely coupled system that is a composition of services. Absolutely. But there were two bets that drove our thinking. The first was that in terms of bandwidth and latency, connectivity was going to improve in leaps and bounds; it was going to be less and less of a challenge over time, which again we can discuss whether that was right or wrong. The second was weighing the cost, the expense, of a single transaction against the cost of maintaining, upgrading, and versioning these systems. With a single-system monolith, as we know, to version, to upgrade, to maintain, the whole thing may have to come down and stop for some time, and you have the dependencies, just being able to trace through the dependencies across the set of components, objects, classes. Versus having very well-defined boundaries and being able to say, look, I’m going to version this system, which was critical for us in thinking through the early cloud infrastructure, because we just could not stop it, right?
John deVadoss 00:37:00 And of course you could argue, and this actually is a good segue to blockchain-based architectures, and this might be controversial, but I’ll say it anyway: much of what the blockchain world calls decentralized, really, Nikhil, is federated, right? Much of this world is not really decentralized. To be polite, to be nice to them, you can say it’s federated. And what is federated? It’s basically a loosely coupled collection of systems that are cooperating towards some common goal. So again, it’s a fine line, but I see the same evolution in terms of, to your point, the cost, the expense of consuming versus the cost, the expense of upgrading. As an architect, when people ask, what is a good architecture? Obviously there are many ways you can slice and dice it, but for me, how you measure an architecture, Nikhil, is how evolvable it is, right? And of course the economics of evolvability are critical. You can obviously change anything, but being able to evolve in an economically feasible, viable manner is how I define good architecture. So for us, this notion of being able to evolve this system of services almost independently, those benefits significantly outweighed any cost with respect to, like you said, the potential decrease in response times or latency. Because the bet we made was that technology was going to improve connectivity and bandwidth, and obviously it’s still very much improving, right?
Nikhil Krishna 00:38:37 Yeah, it’s a good point. So since you’ve brought up blockchains, there’s one question that I had, and maybe you can clear it up. When you talk about fiefdoms and emissaries and all, it gives you this mental picture of medieval Europe and kingdoms and all of that. And in blockchains, there’s also this concept called Byzantine consensus, the Byzantine Generals Problem. Are they in any way related, or is this just a flight of fancy in my mind?
John deVadoss 00:39:11 No, no. So, Nikhil, look, you are the first person who has made this connection. I’ll tell you, you’re the only person who’s asked this question.
Nikhil Krishna 00:39:24 So then obviously it’s a flight defense in my mind. No, no,
John deVadoss 00:39:27 No, no, no.
John deVadoss 00:39:29 So this is genius, because you’re absolutely right. The connection you’re making is absolutely the connection. Byzantine fault tolerance, like you said, is about a collection of generals, and the question is how they cooperate when they don’t trust each other. And the notion of fiefdoms was basically that there was a lack of trust, or zero trust if you will, or some degrees of trust. So you’re absolutely right that BFT, which by the way is quite prevalent these days in blockchain-based systems, goes back to very much the same or similar mental model as fiefdoms and emissaries. Of course, in the fiefdom model the question was more around how we ensure we have the abstractions for constructive collaboration, whereas in the Byzantine scenario it’s more a question of there being no trust at all, and how the generals even get to some level of consensus with respect to anything; that has been the focus of BFT. But you are absolutely right, and I’m glad. You’re the first and the only person who’s made that connection, as far as I know. Thank you.
Nikhil Krishna 00:40:35 Yeah. Cool. I think we’ve now had a discussion about most of the topics that I had wanted to talk about in this episode. Do you feel that there’s anything I missed that you would like to talk about before we close?
John deVadoss 00:40:51 First of all, I want to say thank you. Look, it’s been a true pleasure to be able to discuss the philosophical underpinnings, if you will, of Azure and of .NET, and to go through the abstractions, because oftentimes these are forgotten or glossed over. The only thing I might add, and this is more of a conviction, is that many of the ideas we discussed, in terms of loose coupling and composition, the notion of fiefdoms or Byzantine generals, and emissaries or agents, also apply at the economic level. We discussed the infrastructure and application architecture levels. I believe that more and more, enterprises need to think through an economic architecture. They lack that today. I think in three to five years we will see enterprises have an economic architecture, and I think these concepts very much drive those as well. So that’s something we could perhaps talk about some other time, but this notion of enterprise economic architectures is very close to my heart. And that’s the last piece, I would say, of the puzzle in terms of my own thinking.
Nikhil Krishna 00:41:58 Thank you for that, John. I think that was a great end to the episode. Leave them wanting more, as they say, on economic architectures. I had a great time talking to you, John.
John deVadoss 00:42:07 Thank you, Nikhil. A true privilege and a pleasure, actually. Thank you for the wonderful questions, and I look forward to having more discussions. Once again, I’m very grateful for the time. I know you’re a very busy man, so I’m very grateful for this window of time as well. Thank you.
Nikhil Krishna 00:42:21 Sure, no worries.
[End of Audio]