Amey Desai, the Chief Technology Officer at Nexla, speaks with host Sriram Panyam about the Model Context Protocol (MCP) and its role in enabling agentic AI systems. The conversation begins with the fundamental challenge that led to MCP’s creation: the proliferation of “spaghetti code” and custom integrations as developers tried to connect LLMs to various data sources and APIs. Before MCP, engineers were writing extensive scaffolding code using frameworks such as LangChain and Haystack, spending more time on integration challenges than solving actual business problems. Desai illustrates this with concrete examples, such as building GitHub analytics to track engineering team performance. Previously, this required custom code for multiple API calls, error handling, and orchestration. With MCP, these operations can be defined as simple tool calls, allowing the LLM to handle sequencing and error management in a structured, reasonable manner.
The discussion reveals how LLM capabilities have evolved to enable MCP’s success. Desai argues that MCP wouldn’t have succeeded with earlier models like ChatGPT 3.5, but improved reasoning capabilities in modern LLMs make them effective orchestrators. He presents the controversial view that hallucination should be treated as a feature rather than a bug, enabling LLMs to explore solution spaces more creatively when solving complex multi-step problems.
The episode explores emerging patterns in MCP development, including auction bidding patterns for multi-agent coordination and orchestration strategies. Desai shares detailed examples from Nexla’s work, including a PDF processing system that intelligently routes documents to appropriate tools based on content type, and a data labeling system that coordinates multiple specialized agents. The conversation also touches on Google’s competing A2A (Agent-to-Agent) protocol, which Desai positions as solving horizontal agent coordination versus MCP’s vertical tool integration approach. He expresses skepticism about A2A’s reliability in production environments, comparing it to peer-to-peer systems where failure rates compound across distributed components.
Desai concludes with practical advice for enterprises and engineers, emphasizing the importance of embracing AI experimentation while focusing on governance and security rather than getting paralyzed by concerns about hallucination. He recommends starting with simple, high-value use cases like automated deployment pipelines and gradually building expertise with MCP-based solutions.
Brought to you by IEEE Computer Society and IEEE Software magazine.
Show Notes
- The Missing Links in MCP: Orchestration and Runtime Execution at Enterprise Scale
- GitHub: jlowin/fastmcp – The Fast, Pythonic Way to Build MCP Servers and Clients
- Build an MCP Server: Model Context Protocol
Transcript
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Sri Panyam 00:00:18 Hello, this is Sri Panyam for Software Engineering Radio. Today with me I have Amey Desai, CTO of Nexla and we’ll be talking about the Model Context Protocol or MCP. Welcome to the show, Amey.
Amey Desai 00:00:32 Hey Sriram, glad to be here.
Sri Panyam 00:00:34 Before we dive in, you want to tell us about yourself and Nexla?
Amey Desai 00:00:38 So I am Amey, I am CTO of Nexla, and Nexla is a data integration platform that allows anyone to move, connect, and transform data using no-code as well as low-code processing. Over the last 18 months, we have also built agentic AI systems, using MCP as well as non-MCP approaches, to help people with the data problem and specifically the data engineering problem. And before Nexla I worked at both small and big companies, but I've largely been working on, let's say, data systems as a problem for the better part of a decade. Did machine learning before it became deep learning, and deep learning before it became, let's say, LLMs.
Sri Panyam 00:01:20 Thank you Amey. Exciting. I'm glad you mentioned MCP because that's what we're talking about today. Let's dive in. In the last 18 months to two years, LLMs have taken over, and MCP is the latest thing. What actually led to today's MCP, and why MCP? What is MCP?
Amey Desai 00:01:37 Good question. To start off, I think what happened is, post ChatGPT having its aha moment, everybody wanted to use it for pretty much everything, and that was the promise of ChatGPT, because you could have a simple UI where you could ask it questions and it would do impressive things, whether they were right or wrong. And one of the key things then, as people started incorporating Large Language Models into software, just building software, was that there was a lot of scaffolding: the spaghetti code that started getting written when integrating with a variety of different data systems, tools, APIs, call it. And Anthropic came out with the Model Context Protocol, protocol being the key word, for how LLMs should connect to external data sources and tools. So it's a standardization that I think MCP brought to the table, and that helps a lot in, I would say, software engineering and agentic systems today. And that's not just relevant to MCP; a few other protocols have come out. But MCP, I would say, is the cleanest protocol and the simplest protocol that we have today.
Sri Panyam 00:02:43 So before MCP, how were these integrations happening? I believe there were tools. What was the state of the world there?
Amey Desai 00:02:50 I think what was happening before was people were using a variety of open-source packages as well as REST APIs, and quite a few frameworks: LangChain, Haystack, LlamaIndex, whatnot. And they were using those packages to write the LLM part of the code, which is, I would say, the sexy part. And then there were, I would say, the harder parts, which is: how do I communicate with a variety of different data systems? So you would go and explicitly write code for talking to APIs, managing credentials, and all of those details. And how do you also embed those APIs with the LLM calls that the frameworks would allow? This was making the problems that people were trying to solve with LLMs a little bit harder, as now you are spending a lot more effort on just the integration part rather than the actual solving part of the business logic. This ended up creating a lot of, I would say, bad code: hard to test, hard to maintain, and hard to reason through, which is where I think MCP at least allowed you a structure for how to do it. And I think that's the difference right now between MCP versus what was there before MCP: somebody can follow a structure to solve the same problem. It still has its own pitfalls, but at least you start from a much better setting rather than a clean slate of, hey, I have to just write code.
Sri Panyam 00:04:06 So to get a bit more of a visual framing of this: before MCP, developers would write code to, let's say for example, access their own internal things, tools, and so on?
Amey Desai 00:04:16 Right. Everyone was writing, you'd say, custom connectors or custom glue logic, by and large.
Sri Panyam 00:04:21 Can you give an example of one of these things in a real setting? We've all seen the get weather example on OpenAI's pages, right?
Amey Desai 00:04:26 No, I think get weather is tool calling at its simplest…
Sri Panyam 00:04:28 It's too simple. So what's a real-world messy example?
Amey Desai 00:04:32 So, I think a messy example, largely, I would say as an engineer, would be the GitHub API. You have the GitHub API, with which you can do, let's say, simple analytics around how well your engineering team is performing: whether that's in terms of number of contributions, whether that's in terms of number of PRs that they have pushed out, how many PRs got reverted, what was the number of lines of code they wrote. These are five, six, seven different API calls for just one particular system. And you need to write a lot of scaffolding, custom code, for how to make something like LangChain understand this. With MCP, it's more organized and available as a construct, where you define it as a tool call and then can have the rest of the information relevant to the problem statement, rather than integrating with pretty much everything under the sun that the GitHub API might offer.
Amey Desai 00:05:26 So that, I think, was a step change in terms of what developers are doing today. I think the other thing that is now getting to the forefront is that, using a lot of these frameworks, you yourself needed to write an orchestrator for how those calls should be made, either in a chain, or a sequence, or in a conditional manner: only when A finishes, where A is, let's say, "get all the engineers' information," which is a list-users API call, then for each user make an API call to get their contributions, and for each same user make another API call to see how many PRs they reverted. Let's just stick with those three as examples, right? I have now created a workflow for which I have written very custom code for the use case or the problem statement that I have, which is: I want to find out how my engineering team is doing, just GitHub analytics. Now with MCP, I can list those three APIs as simple tool calls and then let MCP take over how to do the sequencing of these operations. Which is, I think, the biggest step change we have today.
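To make that concrete, here is a minimal sketch of those three operations registered as MCP tools, using the FastMCP library linked in the show notes. The tool names, endpoints, and search queries are illustrative assumptions, not Nexla's code; note that the server only declares the tools and contains no orchestration logic at all.

```python
# A minimal sketch: three GitHub-analytics operations registered as
# MCP tools with FastMCP (jlowin/fastmcp, linked in the show notes).
# Tool names, endpoints, and search queries are illustrative assumptions.
import requests
from fastmcp import FastMCP

mcp = FastMCP("github-analytics")
API = "https://api.github.com"

@mcp.tool()
def list_users(org: str) -> list[str]:
    """List the members of a GitHub organization."""
    resp = requests.get(f"{API}/orgs/{org}/members")
    resp.raise_for_status()
    return [member["login"] for member in resp.json()]

@mcp.tool()
def get_contributions(org: str, repo: str, user: str) -> int:
    """Count pull requests a user has opened in a repository."""
    resp = requests.get(f"{API}/search/issues",
                        params={"q": f"repo:{org}/{repo} type:pr author:{user}"})
    resp.raise_for_status()
    return resp.json()["total_count"]

@mcp.tool()
def get_reverted_prs(org: str, repo: str, user: str) -> int:
    """Rough proxy: count the user's PRs with 'revert' in the title."""
    q = f"repo:{org}/{repo} type:pr author:{user} revert in:title"
    resp = requests.get(f"{API}/search/issues", params={"q": q})
    resp.raise_for_status()
    return resp.json()["total_count"]

if __name__ == "__main__":
    mcp.run()
```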
Sri Panyam 00:06:37 Is the sequencing being done by MCP or by the Large Language Model that’s optimized for MCP?
Amey Desai 00:06:41 It's the Large Language Model doing it, but MCP is enabling that to be done in a coherent, structured manner. And you can reason through those things. When I say reason through those things, I don't mean LLM reasoning; as a software engineer, you can reason through which call happened where, whether there was access or not, and whenever that call happened, what the response was for it and the accuracy around it. The alternative is you as an engineer pretty much writing, for every single API call, scaffolding code around: what was the response that came in, how do I parse that response, what happens when that response errors out? If you're familiar with Python, you have to write try/except blocks for each of these API calls. With MCP, you are relying on the LLM to do that for you, which is a much easier way to build systems very, very quickly. So maybe if I can give an analogy: people previously were writing scripts to do this, but with MCP you can think of your problem solving more as composition. And each MCP server you can think of as a microservice or an…
Sri Panyam 00:07:46 Or an API, an endpoint.
Amey Desai 00:07:48 Correct. Which is much easier to scale purely from a latency and software engineering standpoint too.
Sri Panyam 00:07:54 So just to summarize and get a better visual picture of it. Before MCP, in your example of knowing who your developers are and what their stats are, you'd write three tools: one for getting the developers, one for getting their PR counts, and then a third tool for summarizing it. But then you would also write scaffolding to target, let's say, OpenAI to parse out that tool call information in one way, Gemini in a different way, Claude in a different way. But now with MCP, you're saying that, look, all I had to do is provide my three different tools, and by virtue of all of them supporting MCP, you're kind of solving the multiplexing problem.
Amey Desai 00:08:27 Correct, that's a good way to put it. One other example, if I could give it, and this might make it a little bit easier: let's say you just want to create a Python script that converts JSON files on your file system to CSV, and then you want to commit the result automatically to a GitHub repository. You can clearly use a framework to do this, with LLM calls and prompts to do the JSON-to-CSV conversion part. What you'll end up realizing is that the prompt to do this is very small: I just want to read whatever is there as JSON data and convert it to CSV. So if you, let's say, only have a JSON file with five keys and you want to convert that to a CSV with five columns, the prompt for that is barely four or five lines. Okay? But then you're writing a lot of code to read JSON files, ensure the JSON files are read correctly, do globbing, ensure that if it is a nested structure you can get through it, do iteration on it, and do the same thing for generating new CSV files and writing them out correctly, maybe in a directory structure similar to what you had with the JSON files.
Amey Desai 00:09:28 And then you are also writing code to commit it to the GitHub repository, where now you need to handle OAuth and a lot of other things around GitHub, as well as the file system piece. You can register all of these as tools with MCP and then let the LLM take over. And with the LLM's intelligence, a lot of that, call it scaffolding code, call it spaghetti code, but just the integration code, is inherently being handled by the LLM, which makes the amount of work you're doing much less, and which makes the amount of work you have to reason through, just because you're now looking at a lot fewer lines of code, much easier to manage. And there is an assumption here. The assumption is that the LLM is strong enough to do all of this. If the LLM is not strong, you pretty much are going to fail. And MCP showed, with better models, that this is a phase we are getting closer and closer to.
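A sketch of this example, again with FastMCP: each capability is a small tool, and globbing order, iteration, and error recovery are left to the LLM. The GitHub-commit tool is omitted, and the names are illustrative assumptions.

```python
# Sketch of the JSON-to-CSV example: small tools, no hand-written
# orchestration. The LLM decides which files to convert and in what order.
import csv
import json
from pathlib import Path
from fastmcp import FastMCP

mcp = FastMCP("json-to-csv")

@mcp.tool()
def list_json_files(directory: str) -> list[str]:
    """Return the paths of all JSON files under a directory."""
    return [str(p) for p in Path(directory).rglob("*.json")]

@mcp.tool()
def convert_json_to_csv(json_path: str, csv_path: str) -> str:
    """Convert a flat JSON array of objects into a CSV file."""
    records = json.loads(Path(json_path).read_text())
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
        writer.writeheader()
        writer.writerows(records)
    return csv_path
```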
Sri Panyam 00:10:17 That's a really good point, right, about LLMs having the capability to orchestrate MCP. Taking a lightweight model may not be as effective for this, right? So how did the LLMs themselves have to evolve to actually be more aligned with MCP?
Amey Desai 00:10:32 I think one of the key unlocks that happened for MCP to get where it is, is the reasoning capabilities of LLMs, and LLMs themselves having at least baseline reasoning capabilities today. If MCP had come out when ChatGPT 3.5 came in, like late 2022, it wouldn't have succeeded. The framework itself wouldn't have led to good outcomes. So the fundamental thing that changed is that models have gotten better, and what we have realized over a good amount of 2024 and 2025 is that models are going to continuously get better. There is this whole construct of "we have trained on all the data of the world," and synthetic data is being used now to make models better. The contrarian argument to that is DeepSeek, which came up with a new architecture for thinking, with GRPO, that allowed the model itself to get better. So the language model as the orchestrator improving has enabled MCP to get where it is. The other part that I think MCP did is a lot of very good software engineering work in terms of the protocol that is established, which is JSON-RPC 2.0, and that allows it to have very strong interoperability across vendors, agents, runtimes, file systems, and all of those things.
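For reference, this is the JSON-RPC 2.0 wire format Desai mentions: an MCP tool invocation, sketched as the dict a client would serialize. The tool name and arguments are illustrative.

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request with method "tools/call".
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_contributions",   # a tool the server advertised
        "arguments": {"org": "acme", "repo": "data-platform", "user": "alice"},
    },
}
print(json.dumps(tool_call, indent=2))
```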
Amey Desai 00:11:50 So that is the collection, I would say the system architecture, that the Model Context Protocol laid out, and that has made it go ahead. And I think this might be a hot take, but I think of hallucination as a feature, not a bug. And I think that has started manifesting more and more in the last 12 months, where people don't treat hallucination completely as a bug.
Sri Panyam 00:12:14 Interesting. How would one go about, I mean this is off topic, but how would one go about treating or exploiting its nature as a feature?
Amey Desai 00:12:22 I think, creatively. If you want it to give you more ideas and make problem solving work in, I don't want to say an infinite space, but a bounded space beyond three or four actions, it's the capability of an LLM to hallucinate that allows it to explore that search space much better. An LLM hallucinates because it is trying to solve problems with whatever information it has; it is exploring the search space to get to a particular resolution. If you restrict that search space completely, with, let's say, temperature equal to zero, in which case it also does end up hallucinating and sometimes still reasoning poorly, you are not going to be able to do the language-model-as-orchestrator thing at all. You can only then do that for a few set of tools or a few set of functions, which is exactly what OpenAI did with function calling.
Amey Desai 00:13:12 You had your get weather API, or get current time, or add two numbers kind of thing, right? And you would need to set temperature almost, I would say, to zero to get those tools to work correctly. Even if you set it a little bit higher, it would kind of fail. With LLMs getting better, that problem to a degree got solved. And now, allowing hallucination, if you treat it as a feature, you can start getting into the orchestrator part. Because now it can start to think and reason: hey, here are some seven, eight tools that this engineer has put out for me, or this engineered system has put out for me. How do I go about trying different combinations of them? And if some combinations do end up going wrong, that's okay. This is going to be an acceptable path of being wrong, and we'll be able to move on past that.
Amey Desai 00:13:58 Certain domains won't accept this, which I think is fine. And I think people are now clearly realizing that. If you are trying to do LLM stuff with, let's say, rocket science, where NASA or SpaceX is trying to send rockets up into the sky with human lives at risk, that's probably not a good idea no matter how good an LLM gets. But if you're trying to do just simplification of your workflows in your day-to-day operations, or in day-to-day businesses where you don't have that much criticality involved, hallucinating is kind of okay. It's not the end of the world if 90% or 95% of my job is happening without me even needing to do it.
Sri Panyam 00:14:37 That's an interesting take, because humans also hallucinate, right?
Amey Desai 00:14:41 Correct.
Sri Panyam 00:14:41 And it's a big part of how you solve problems, right? I mean, in your own dev engineering flow, you're not just going down a single path to the exact answer, the depth-first way, and you aren't exactly trying every combination at every level, the breadth-first way. You're kind of doing an A* approach: trying a few things here, coming back, trying something else, learning from it, and so on. And that's kind of what's happening here too, it sounds like.
Amey Desai 00:15:02 And I think, at least this is my take, hallucination is hyped up for like the dumbest examples that an LLM fails at: it can't count how many Rs there are in "strawberry," or the "is 9.11 greater than 9.9" one that recently made the rounds. I want to say, if you focus on the more impressive things it can do, it can solve problems for you rather than the other way around.
Sri Panyam 00:15:26 I mean, I'd rather the LLM generate a string or character counter program than count the characters itself.
Amey Desai 00:15:34 Right.
Sri Panyam 00:15:35 So thank you. Going back to MCP and the whys. So, traditional APIs: we talked about JSON-RPC, but today there exists a very rich universe of API specs. It's not just JSON-RPC; there's gRPC, there's REST, and so on, right? So if you look at the spec aspect of MCP, what value is MCP adding over traditional APIs, from the specification point of view?
Amey Desai 00:15:58 I think the number one thing I feel it has done is that it is vendor neutral. And you can say that a REST API is also.
Sri Panyam 00:16:06 Using OpenAPI specs, they're vendor neutral, right?
Amey Desai 00:16:08 Right, that is on the vendor-neutral side. But I think before MCP, LLMs were not vendor neutral. So with MCP it established parity with what everybody else had in the API land, which I think was an important table stake. Whether that was implicit, or happened as a side effect, is, I would say, orthogonal, but it ended up giving people consistency: they can get the same set of responses, or the same set of reasoning, at least. That's number one. The vendor-neutral aspect ensures interoperability, which for a REST API also does exist, but you do need to write a decent amount of, again, scaffolding code to work with that. The other part, which I think is different from REST APIs, and again I would say this is a side effect, is that it is open source. It made people move all of their things into the open-source ecosystem as well.
Amey Desai 00:17:01 It is easy to audit in terms of reviewing code and what you're signing up for. So as an engineer, the work now is: what exactly am I going to get from those endpoints? And by only choosing the tools that you want, you understand the explicit capability that is being granted. If you go look at the GitHub REST API, you have hundreds of APIs there that you may or may not care about. But now, when I'm trying to solve a specific problem, which is, I need to convert this file and then just commit it to my GitHub repo, I understand very explicitly that the capability granted to me is just that part of the problem statement. While with REST APIs you can think of them more as another specification for a general-purpose problem statement, for anybody to solve any problem at hand, MCP now allows me to pick and choose what I need, and do that very, very quickly. That, I would say, is the difference in my mind, at least, between MCPs and REST APIs as they have existed.
Sri Panyam 00:18:05 I'm going to drill a bit more into that, right? So there is the aspect of exposing only what you need, which MCP requires you to do, right? That's not a capability that is non-existent with APIs.
Amey Desai 00:18:18 It's not. What I was trying to get at is, if I am a company today, I don't need to build an API specification with hundreds of endpoints. I can build an API specification with one single path, and with configuration allow different call paths to happen. MCP would allow me to also create my APIs now in a better way, or in a smaller slash more concise way, if that gets at where I'm trying to go. So here's maybe a categorical example I can give from our own work today, where we have seen improvements come. We as a data engineering company deal with directed acyclic graphs, DAGs, right? A DAG has a source node, it has a sink node, and it has a variety of transformation nodes. Each of these today has an API for our customers: you have the pipeline as an API, you have a source node API, a transformation node API, and a sink API.
Sri Panyam 00:19:15 Are these REST APIs or just general APIs?
Amey Desai 00:19:17 No, they're REST APIs. And this is available on our docs site.
Sri Panyam 00:19:21 So these are all resources that you expose as REST APIs.
Amey Desai 00:19:23 Correct. But now I am in a place where I can just create one API, and within the configuration of that one API, I can determine how things are going to go, which is just the pipeline API, let's say. And I can say that the input to that pipeline API is going to be an array of nodes, where the first and the last node are the source and the sink, and rely on an LLM to understand this basic construct very well and figure out what needs to happen as an execution item. Which is: hey, maybe just give me information about this node, list a particular pipeline, or list just information about the sink that is present in my pipeline. Prior to MCP, we as a company built out five such APIs, with GET, POST, and PUT as three different actions that we could do. So the amount of code that we needed to write, if you look at the MVC controller pattern and things of that nature, was at least controllers as well as views, plus the underlying database code, for all of those five APIs, for three different verbs.
Amey Desai 00:20:33 And that is a decent amount of code, like easily thousands of lines of code. Now I can do that with one API, and if I make my specification reasonably well structured, with 90%, and this is actually happening live right now in our customer base itself, that one API is able to do all the work, rather than me needing to do five or six APIs. And that one API is not calling all the other APIs. That's the distinction I'm trying to make: that one API has the right specification, which of course we needed to rethink now that MCP enabled it. But if somebody wants to just get information, the LLM orchestrator figures out exactly what to call there.
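A hypothetical sketch of the single pipeline API Desai describes: one endpoint whose configuration carries an array of nodes, with the first and last acting as source and sink. The field names here are illustrative assumptions, not Nexla's actual specification.

```python
# Hypothetical request body for a single, configurable pipeline API.
# An LLM orchestrator picks the operation; the node array encodes the DAG.
pipeline_request = {
    "operation": "describe",  # or "create", "update", "list"
    "pipeline": {
        "nodes": [
            {"type": "source", "connector": "s3", "config": {"bucket": "raw"}},
            {"type": "transform", "config": {"expr": "flatten"}},
            {"type": "sink", "connector": "snowflake", "config": {"table": "events"}},
        ]
    },
}
```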
Sri Panyam 00:21:10 So what is it calling at that point?
Amey Desai 00:21:12 Underneath, it's just calling one function there, which is still GET, PUT, and POST. But all of those are just under one function; that one object gets the retry, rather than me distributing that and separating the object out.
Sri Panyam 00:21:23 Right. And at this point, is this a wrapper then? I mean almost like an n+1 offering.
Amey Desai 00:21:30 It's an n+1 wrapper, correct, but with a lot fewer artifacts to manage in terms of a code base, if you are looking at it that way. So the simplification is at the code base level.
Sri Panyam 00:21:43 So for companies who want to do this: you obviously have invested a lot in building out that service spec and the service implementations with the various verbs and so on. But if you were to start today, what would you do differently?
Amey Desai 00:21:54 What would I do differently? I would just have one API call, and we have this already now: within that one API call, we have different functions that go and extract or retrieve or make updates for the end user, rather than separating them out as different nodes just because those are easier for people to reason through. So let me try to think of maybe another example, because I don't think this example is necessarily putting the picture in place. What I'm maybe trying to get at is: with APIs, you have a very fixed nature, and they're very static, the way you define them. With MCP, you get dynamism as a first-class feature right out of the gate. And that makes building software very different from what we were doing before.
Sri Panyam 00:22:45 Isn't that kind of contradicting our previous thing? Because you say dynamism, but the dynamism was to mix and match the tools that are already offered today.
Amey Desai 00:22:53 That’s what people have started with. But what I’m getting at is I think MCP is reinventing how we are going to build software because of this nature.
Sri Panyam 00:23:01 Right. So back to your example of what you would do differently: what would that one MCP API, or that all-in-one MCP API or tool, look like?
Amey Desai 00:23:09 I don’t think it’s an all-in-one MCP API, but it’s kind of more along the lines of instead of me needing hundreds of API endpoints, I probably need somewhere along the lines of 10 to 15.
Sri Panyam 00:23:22 Public endpoints.
Amey Desai 00:23:23 Correct. That's the distinction. And as a software engineer, if you're using MCP even to build software systems, I won't say it's much easier, but the amount of work you need to do is a lot less. Which I think is going to make a step change in terms of just what you can do with the rest of the time that you have now.
Sri Panyam 00:23:43 Of course. I mean, I think there's no doubt about the huge, huge productivity boost from just using Claude Code and Cursor and MCP tools, right? For anybody who's saying otherwise, I'm sure there's a lot of reflection to be had there. Yeah. Thank you. Thanks for clarifying where APIs end and where MCP begins, and I think you're right, we are at a very early stage of how to actually build software in this new way. Now, obviously MCP's out there, and Google came out with their competing A2A standard. What's happening there?
Amey Desai 00:24:11 So it was kind of interesting, because I think Sundar had tweeted "to MCP or not to MCP," similar to "to be or not to be" from Shakespeare, before, I think, A2A came out. So you can think of MCP, for lack of a better word right now, as a vertical solution: I have this specific, verticalized problem, and I'm going to solve it using MCP with tools, resources, and prompts. And that exists as one single LLM call; you can of course combine them, but it's one single call. A2A, which Google came out with, which is agent-to-agent, allows different agents, even across different organizations, to interact with each other and solve problems. Which is, again, a fairly different architecture from what we were even capable of. So you can now look at companies like GitHub, Bitbucket, and GitLab, and you can make all of those talk to each other.
Amey Desai 00:25:08 The reason why this is interesting, and this is actually interesting for us as an organization, is that we do support on-prem deployments, meaning our SaaS stack can be deployed in our customers' clouds itself. And that's true not just for us but for a lot of other companies now; it's almost table stakes today, right? So the same example that I gave you around engineering velocity, etc., that I'm trying to, let's say, compute: I have engineers who are also contributing to my customers' GitLab and Bitbucket, so that the private install is in sync, etc. So if I want to get a complete picture, I'll also need to be able to do that. With MCP, I need to effectively write three MCP servers that can do this today. With agent-to-agent…
Sri Panyam 00:25:50 One for each company handling its own internal tooling, is that right?
Amey Desai 00:25:53 Exactly. And there is a multiplicative effect there, right? One for each, or rather, depending on how many private installs they have, one for each private install. Agent-to-agent kind of solves that piece of the problem. So from vertical, it makes it horizontal. And that's the big distinction that I have mentally, as my mental model, in thinking about it. It also has a few other interesting properties around agent cards. I think of them more as agent resumes, where maybe you can figure out what the capability of this agent is, capability in terms of reasoning capability, as well as how it will stream updates, how it's going to negotiate, all of that machinery. And Google did make a pretty good effort in solving the security problem also, when two agents talk to each other, if you figure out from the protocol how to set all of those pieces up. The hot take I have is that the reliability of agent-to-agent is much, much less than MCP's. Because it's just standard distributed-systems probabilistic math. I think a lot of people think about, let's say, the SLA numbers, right, 99.9999 and so on and so forth. If you have one system that gives you 99.999 and you make a distributed system out of it, your SLA number actually goes down, just because of standard probability there, with expectation. And if you have this across n things, you can actually go down to numbers like 60 and 70%.
Sri Panyam 00:27:18 Unless your replication and…
Amey Desai 00:27:19 All of that. Exactly. All of that machinery, right? So agent-to-agent is literally, I would say, trying to do that right now. And I think it's a very, very hard thing to get right, because of just that reason alone: how confident can I be that the GitLab agent that somebody else has built is very, very good, if I don't even get to see what is done there?
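The compounding Desai refers to is easy to check: if each of n chained agents succeeds independently with probability p, the end-to-end success rate is p to the power n.

```python
# End-to-end success of a chain of n independent components, each with
# per-step success probability p, is p**n.
for p in (0.999, 0.95):
    for n in (3, 7, 10):
        print(f"p={p}, n={n}: end-to-end = {p**n:.3f}")
# At p=0.95 and n=7 the chain already lands near 0.70, and at n=10
# near 0.60: the "60 and 70%" range mentioned above.
```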
Sri Panyam 00:27:43 You're talking about availability, right? And SLOs from an availability perspective. I think before even that, what about correctness? Now that you have different agents performing different things, how can you be sure?
Amey Desai 00:27:53 No, that's what I meant from a reliability standpoint. I meant correctness: how do I know that that agent-to-agent thing is actually even going to work correctly? I almost have to do a trial run. And it's almost like, with MCP I can unit test; with agent-to-agent, I have to do integration testing, right? And it is just a much harder problem to do.
Sri Panyam 00:28:12 So agent-to-agent, I mean, are you using A2A to orchestrate across multiple MCPs?
Amey Desai 00:28:18 We actually are not. We tried it, and the reliability was too weak, so we did not spend too much effort on it.
Sri Panyam 00:28:26 Right. And what was causing the low reliability? Were there any general patterns?
Amey Desai 00:28:31 Well, there are, I think, two main things. MCP is open source, so it's a little bit easier to reason about. Agent-to-agent is not necessarily completely open source: KPMG as an agent, or Accenture as an agent, I just don't have access to it. So the only thing I have is whether it works or not. And if it does not work 70% of the time, it's very hard for me to build a software system or business around it. So that's the only data point we have: whether it works or does not work.
Sri Panyam 00:28:56 From an abstraction perspective, I mean, just for a layman like me, how would A2A differ from yet another agent orchestrating between two agents?
Amey Desai 00:29:06 It doesn't, in that sense. There's a lot of machinery there, which is around the agent cards aspect and the structured task objects that A2A has. So Google introduced this notion of agent cards, where effectively you are writing a textual description of what a particular agent does, and it relies on you, whoever is building a particular agent system here, to define that agent card in a well-structured manner. And you are trying to write that in a much more generic way, because you don't know what exact problem might be thrown at you, only roughly what your capabilities are. But you still have to be generic. You can't be very, very specialized, because if you end up making it very specialized, then you can't solve the general-purpose problems there.
Sri Panyam 00:29:52 Almost like an MCP or a tool catalog.
Amey Desai 00:29:54 Correct. And in that way it at least gives you an impression. If you were to do this agent card, think of it as you trying to hire someone and getting a resume: you get a signal as to whether this person is worth interviewing or not, right? But whether that person will actually work out or not, you kind of need to go through the interview process, hire them, see them work for maybe three or six months, and then you know. So that's the timeline you are actually looking at for an agent-to-agent protocol to give you a reasonable answer about whether it is going to work for a business or system or not. And that is very hard. You could have a templated version of agent-to-agent for very specific problems, where it can shine and work very well. But then the flip question is: why not just use MCP, if you're breaking the horizontal abstraction and making it a templatized vertical anyway? That's where the simplicity of MCP, in my mind, wins. As an engineer building something, it's much easier for me to build it.
Sri Panyam 00:30:53 I mean, it almost felt like an n+2 at this point.
Amey Desai 00:30:56 I would actually say it's, I think, a little bit m times n. If I could give you maybe an analogy, it's basically how peer-to-peer communication with BitTorrent and Kazaa was in the late nineties and early 2000s. I grew up in India, so pirated movies were a big thing, and it was all happening on such platforms; agent-to-agent is kind of going in that direction. And in India, at least, the internet back then was really poor, so if you downloaded 70% of the movie and your internet crashed, you lost the movie. The same thing is happening with agent-to-agent: it works well, but then when it doesn't, it just gets completely unrolled.
Sri Panyam 00:31:33 And also, the other interesting thing you mentioned about agent cards and agent resumes: I suppose it's only a matter of time before we start seeing the Soham Parekhs of agents.
Amey Desai 00:31:42 Correct. I think a lot of people have said that we'll have humans as managers of agents in the future, which I do think will happen. Maybe not to the level everyone's projecting right now, but if I am managing an MCP workflow versus an agent-to-agent workflow, I know which one I want to manage today.
Sri Panyam 00:32:02 Yep. I mean I’m sure they’re going to have various personalities that you have to train up for.
Amey Desai 00:32:07 Right.
Sri Panyam 00:32:09 You know, in software there are obviously standard design patterns and architectural patterns. Are you seeing that in the MCP world? Where are we in terms of patterns, and what levels of maturity are they at today?
Amey Desai 00:32:20 I think the standard software patterns, like the peer-to-peer one we just went around, definitely also exist in the MCP world if you want to create them. One pattern that we ourselves got a lot of mileage out of is the auction bidding pattern, which I think is a little bit present in the ads world, in terms of Google. The auction bidding, at a level, is: you have a problem that you want to solve. Let me actually give a specific example. Let's say we have a data labeling problem that we want to do on our data sets, or whatever data we have right now. Based on the type of data, I may want to use a different kind of labeling agent. If the type of data is an image, I can potentially use the image labeling tool or prompts that I have created.
Amey Desai 00:33:10 For text it might be a different agent, for audio it might be a different agent, for video it might be a different agent. And you might also have a combination of those, a composition of them. So you present this as a problem from your orchestrator, which is like the LLM within the MCP world, actually an orchestrator that sits on top of your MCP world: hey, here is a problem; give me the information about how much it will cost you, what the latency would be, and what its accuracy would be.
Sri Panyam 00:33:38 Could this be another MCP tool?
Amey Desai 00:33:39 Correct. It's an MCP tool, but with a very specific orchestration objective as its only function. Yes, correct. Right. And then as a tool, I can start making a decision: okay, this is what the task is, this is what I got back. Now I can award the contract to the best specific MCP tool to go and execute on it.
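A minimal sketch of the auction-bidding pattern as described: each candidate tool submits a bid of cost, latency, and expected accuracy, and the orchestrator awards the contract to the best scorer, keeping a ranked list so a runner-up can take over on failure. The scoring weights are an illustrative policy choice, not Nexla's implementation.

```python
# Auction-bidding sketch: tools bid, the orchestrator awards the contract.
from dataclasses import dataclass

@dataclass
class Bid:
    tool: str
    cost_usd: float
    latency_s: float
    accuracy: float  # self-reported estimate in [0, 1]

def score(bid: Bid) -> float:
    # Favor accuracy, penalize cost and latency; the weights are a policy choice.
    return bid.accuracy - 0.1 * bid.cost_usd - 0.01 * bid.latency_s

def award(bids: list[Bid]) -> list[Bid]:
    """Rank bidders best-first so the runner-up can take over on failure."""
    return sorted(bids, key=score, reverse=True)

bids = [
    Bid("image_labeler", cost_usd=2.0, latency_s=30, accuracy=0.95),
    Bid("text_labeler", cost_usd=0.2, latency_s=5, accuracy=0.90),
]
ranked = award(bids)
print("award contract to:", ranked[0].tool)  # fall back to ranked[1] on failure
```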
Amey Desai 00:33:58 And then it'll report back to me with results, and I can then also make a decision about whether the result meets the contract that was given. If not, I can go and ask another tool that was there: hey, can you do it? You were the second bidder; the first one failed, this is why it failed, and that's why I'm choosing you right now. This actually helped us solve two specific problems at work in the last few months. One was the data labeling part, but the other one was building a PDF orchestrator. You probably know that at this point, unstructured data to vector databases is like the bread and butter of data engineering in LLM territory, right? But here's the problem. When people are working on hundreds of PDFs, I think it's an easier problem.
Amey Desai 00:34:42 Then you suddenly get into enterprises with a hundred thousand PDF files. If you run all of those hundred thousand PDF files through vision LLMs or through expensive LLMs like 4o, etc., your LLM bills are really high: you're spending anywhere between 25 to 50K just on processing 10,000 PDF files of, give or take, 50 pages each, with images, graphs, and text in them. But what you can do here is, for every page, effectively make a decision: hey, this looks like it has pure text, and some of the text might be formatted, but an open-source, basic Python library, call it pdfplumber, or in the Java world there have been many such libraries over the years, I'm forgetting the names right now, I can just have that as a tool call. I don't even need to use an LLM to understand the PDF, either at an OCR level or at a vision level.
Amey Desai 00:35:35 Just that basic thing works. That also gives me cost efficiency, and it is also very latency efficient, because I'm literally doing this in-process. So this is one pattern that I think we were able to do with MCP really well, and it hopefully gets across that trying to build this as a system would take multiple quarters. At least when I was at Google, it could take multiple quarters with very well-qualified engineers and teams to even get it right. But here, it's me and another engineer who have just pulled this off in a matter of weeks. Which to us is the remarkable part. And of course it's not like it works really well all the time. But the fact is, for me, comparing what I had seen being built versus what I can build now, the time difference is very, very glaring and comes out very obviously.
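A sketch of the per-page routing just described, assuming pdfplumber for the cheap path; `call_vision_llm` is a hypothetical stand-in for the expensive fallback, not a real API.

```python
# Route each PDF page: extractable text goes through a cheap in-process
# library; image-heavy or empty pages fall back to a vision model.
import pdfplumber

def call_vision_llm(page) -> str:
    """Hypothetical expensive path: send a page image to a vision model."""
    raise NotImplementedError("wire up your vision-LLM call here")

def extract_pages(path: str) -> list[str]:
    results = []
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""
            if len(text.strip()) > 50 and not page.images:
                results.append(text)                 # cheap, in-process path
            else:
                results.append(call_vision_llm(page))  # expensive fallback
    return results
```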
Sri Panyam 00:36:27 I want to go back to the orchestration layer you talked about just a few minutes ago, right? How does the orchestrator evaluate the result from the different agents, to know if it should make a different choice, whether it's acceptable or not?
Amey Desai 00:36:42 It's a control problem that you need to handle. The control nature of that problem comes down to two things. One is prompts, where you need to treat prompts as a specification, as software. And then you also need to have a policy enforcement layer in addition to your prompts. And that's how this orchestrator works, pretty much.
Sri Panyam 00:36:58 But in your example, what was the metric for measuring how good an agent's result was?
Amey Desai 00:37:03 Meaning post-execution, or pre-execution for the reward assignment? Or after the execution is done?
Sri Panyam 00:37:09 Well, before the execution, right? Like, you would get the result from one agent and you need to know how good it is.
Amey Desai 00:37:14 It's interesting, right? That is actually coming directly from the end user itself, because the end user can make that decision: what do they want to prioritize, what is the trade-off they want to make? Do they want to see quick results, do they want to see the best results, or are they looking for a hybrid and letting us decide? And the hybrid is, I would say, the hardest thing to do. We actually don't offer that as an option; we make the user choose, because a lot of people, when they're trying to experiment, want fast results, and then when they want to productionize, they want the better results.
Sri Panyam 00:37:45 But when you are labeling, you know, terabytes and terabytes of data, right?
Amey Desai 00:37:49 That is a little bit, well, we at least haven't had terabytes of data. I think a lot of people in today's world are trying to generate fine-tuning data sets, which are largely around a hundred thousand samples. It could still be a terabyte, especially if you have video or audio data, but the number of samples is much less. I think terabyte-scale numbers of rows, like billions of rows, was a problem eight, nine years back, when people were trying to build those systems. Now that those systems are built, the volume of the labeling has actually been very different.
Sri Panyam 00:38:18 Interesting. So the user is still involved in kind of providing hints around…
Amey Desai 00:38:22 Yeah. This is definitely a human-in-the-loop system. It's not an autonomous system. I don't think, MCP or non-MCP, we are at a stage where you can have a very good agent system that is truly autonomous. You need a human in the loop today. I think you're going to require a human in the loop for at least another decade, if not more.
Sri Panyam 00:38:40 So what was the end-to-end calendar time for one of these steps to kind of happen in that system we described before?
Amey Desai 00:38:47 The entire thing that I described, from choosing through execution, for the PDF orchestrator, for example, for 10,000 files, took about one and a half days.
Sri Panyam 00:38:57 Okay, and how many human interventions were in there?
Amey Desai 00:39:00 I think for one specific enterprise customer in the asset management space there were about four human interventions for this entire thing.
Sri Panyam 00:39:07 That’s pretty good. That’s pretty good.
Amey Desai 00:39:08 And the four human interventions were largely around the orchestrator. The LLMs themselves recognize different kinds of patterns that each file has; they're also learning from some of these patterns of files, and those four patterns were just too diverse. It's not that for every slightly diverse pattern it makes an interaction with a human; but when something is too diverse, there is this notion, you could say a confusion matrix, of "is this something I cannot solve by myself," and that is when you engage the human. The interesting part here is building this orchestrator. So we built this orchestrator on top of MCP, and using MCP you have to use LLMs, you have to use policies, but you also have to build just status-quo machine learning models, which are going to help with such decisions. And that, I think, is the culmination of how all of this works well together.
Sri Panyam 00:39:57 My next question was how did you come up with the confusion matrix? Because even with a hundred thousand files, it’s a lot of samples. So were you sampling it, were you…?
Amey Desai 00:40:04 No. Initially we would have a sample run, for sure, because this is like a folder structure, right? The folder structure has basically 10,000 files, and you immediately get all the file names, so you can very quickly random sample. And actually, this is where LLMs are very impressive. Everybody writes good file names; nobody writes file names as 1.pdf, 2.pdf. Especially in enterprises, things are named well. Using those names alone, a human can infer roughly what the potential diversity of the data might be. And so can an LLM. So the random sampling now is actually a much more intelligent random sample. There is this construct in CS called importance sampling, and importance sampling was largely probability-based when the algorithm was created. What we are saying is, going forward, importance sampling is where importance is determined by an LLM, for problems where it can do that. And in data mining, the importance of a file or the diversity of data can be framed very easily, because it's a text problem.
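A sketch of the LLM-guided importance sampling Desai describes: the LLM scores how informative each file name looks, and sampling is weighted by those scores. `llm_score_names` is a hypothetical stand-in for an actual LLM call.

```python
import random

def llm_score_names(names: list[str]) -> list[float]:
    """Hypothetical: prompt an LLM to rate each file name's likely
    importance/diversity on [0, 1]. A uniform stub is shown here."""
    return [1.0] * len(names)

def importance_sample(names: list[str], k: int) -> list[str]:
    """Sample k file names, weighted by LLM-assigned importance.
    Sampling is with replacement, which is fine for a quick survey."""
    weights = llm_score_names(names)
    return random.choices(names, weights=weights, k=k)

sample = importance_sample(
    ["q3_fund_fact_sheet.pdf", "esg_policy_2024.pdf", "cover_letter.pdf"], k=2)
```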
Sri Panyam 00:41:07 You know, it's funny, because typically, pre-LLMs, pre-MCP, the engineering instinct was to give entities IDs, like 1.png, 2.png, etc., right? And I think having actual names in the file names is a very important signal that LLMs have learned to exploit pretty effectively.
Amey Desai 00:41:27 And I think there is a very good flywheel effect to this. If you're now using LLMs to output such things too, they are going to output sensible names.
Sri Panyam 00:41:36 Yes. It's almost like we're returning to clean naming, I think. I think naming as a hard problem is getting solved right now, right?
Amey Desai 00:41:42 Correct.
Sri Panyam 00:41:43 Moving on, I want to talk about enterprises. I mean, MCP is fantastic; I think every day you see a lot of new advances coming out from startups and their agile moves, right? Enterprises, though: what are the adoption challenges, the hurdles, assuming there are challenges, first of all? How are enterprises doing MCP today, and where could they be?
Amey Desai 00:42:05 I think enterprises as a whole are struggling to do MCP today. That's number one. Number two, I think the struggle is by and large around the operations around the MCP servers, rather than MCP itself. Because the de facto model when MCP came out was: hey, here is a GitHub repository, check it out, run it on your local laptop with Claude, Cursor, and whichever MCP hosts slash MCP clients support it. And that's kind of the end result that you get. So it was still very much based on a local machine setup. Now there is a bunch of new work that has happened, which allows people to do this at the server level, and it has simplified that. But one of the weaknesses of the MCP protocol is, by and large, the security aspect around it. So, prompt injection: MCP to some level puts prompt injection on steroids, where now you can do command injections within your MCP to do tool calling in an adversarial manner. Like the file system example I gave you, right: hey, read these JSON files, convert them to CSV. So now I have opened my file system via MCP to my users, and somebody can go and do whatever they want. File systems have commands; most LLMs know how those commands should run. I can just go and do rm -rf * and that's it: now you're completely messed up.
Sri Panyam 00:43:26 Or build a tool that does that, to kind of disguise a clear rm call.
Amey Desai 00:43:30 Exactly. So I think that is a big issue that people are trying to address. And that is not an AI engineering issue; it's a core, I would say, software engineering issue. So the problem is, I think, that enterprises assume some of these limitations of AI are also going to be solved by AI, and they try to solve them via AI, and it doesn't work. And what we are trying to get them to see, with our experience, and address, is that some of these problems are actually not for LLMs to solve. They have to be solved by engineering, by companies like ours and a lot of other companies in the market right now. And us solving them would help you get MCP-generated return on investment in a safe, secured, governed manner. I think the adoption is happening; it's the scalability across the organization that is not happening today, because of such concerns.
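One example of the non-AI, plain-software-engineering mitigation Desai means: validate a file-system tool's arguments against a sandbox root before anything executes, so an injected destructive command never reaches the shell. The paths and rules here are illustrative assumptions.

```python
from pathlib import Path

# Confine every file-system tool call to a sandbox directory, so a
# prompt-injected "rm -rf *" cannot touch anything outside it.
ALLOWED_ROOT = Path("/data/exports").resolve()

def check_path(requested: str) -> Path:
    """Resolve a tool-supplied path and reject anything outside the sandbox."""
    p = (ALLOWED_ROOT / requested).resolve()
    if not p.is_relative_to(ALLOWED_ROOT):  # blocks ../ traversal
        raise PermissionError(f"path escapes sandbox: {requested}")
    return p
```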
Amey Desai 00:44:25 And also, MCP is still only easy to get started with for engineers. It's not very easy to get started with for a non-technical person. Imagine a person who's an SDR. If you tell them, hey, why don't you go to GitHub for the Salesforce MCP server, and then you can just start talking to it? I've seen people succeed at it, but it still takes them a couple of days to know: okay, what is mcp.json? What do I give it there? Where do I put it in Claude? Just solving the software part. So an onboarding path for MCP is what is missing for the non-technical audience today. And on keeping up, I think there's a little bit of FOMO: if I don't get the latest version, am I missing out on a feature that others have, or my competitors have?
Amey Desai 00:45:09 And those become harder decisions, because if you don't have it, it's like you're trying to do software version upgrades, right? And now you are trying to do software version upgrades with FOMO built into them, which was never the case before. People used to be very thoughtful about doing upgrades, because they don't want to break what is already working. While right now it's almost the other way around, where you want to get onto the latest, because the latest is going to make things better by default, but it is also going to break your stuff, and as a non-technical person you don't know how it would break at all, while an engineer can control that a little bit better.
Sri Panyam 00:45:42 Well, I mean, talking about security: prompt injection, for example. Yes, it can be a software problem, but it is largely an AI problem, or rather, it's actually exacerbated by reliance on prompts.
Amey Desai 00:45:56 Correct. But I think prompt injection will be a non-AI problem over time, or will get solved with non-AI technology. How so? I don't know yet. I don't think anyone has cracked it yet. But trying to solve the prompt injection problem via the variety of techniques that we have tried, even evaluation, for that matter, with techniques like LLM-as-a-judge, they don't solve the problem. I think they give you a perception or an impression of solving the problem, but it is going to require rethinking how we have built some of these systems. It's like how homomorphic encryption or some of these other techniques came out for the problems of the last 10, 20 years. That's what AI has brought to the table now, and I don't know of any yet, but I'm pretty sure in academic circles there are a lot of people working on solving this without AI, and I think that's where the unlock is going to come.
Sri Panyam 00:46:49 Are there any insights pointing in this direction, or is it still based on hope?
Amey Desai 00:46:55 I think right now there's a little bit more hope than insight, at least from my perspective; other people might have more, but I don't know that yet. It's also because we have not been focused on that as a problem statement right now.
Sri Panyam 00:47:05 We are still in the breaking phase, so.
Amey Desai 00:47:07 Right.
Sri Panyam 00:47:08 Well, what about the other traditional security problems, like access management, roles, and authentication? Where and why do MCPs lack today, and what can be done?
Amey Desai 00:47:17 I think the protocol itself is lacking that information, and there's a lot of work going on right now to add those things to the protocol. There are a couple of proposals that have been put out, similar to RFCs, but nothing has manifested yet. I expect that to happen over the next year, or a couple of years, actually. But at least it's going in the right direction: people are thinking about these problems today. But no one has solved it yet; everyone's in the process of framing the problem very well. Because just mentioning access control and everything, as a generic statement, is, I think, fairly obviously something we need to solve. But what that even means in an MCP world, I think, is getting figured out as people use MCP more and more.
Sri Panyam 00:47:57 Are there any open, almost formal, specifications of these challenges?
Amey Desai 00:48:01 I believe so. I can't recall them off the top of my head right now, but there are a couple of proposals that I have seen in the community around this.
Sri Panyam 00:48:10 What do you see being the challenges for authentication in the MCP world that are new, compared to…
Amey Desai 00:48:17 I think it's going to be more around the friction aspect of it. If you introduce some of the solutions that people have pushed forward right now, do you sacrifice the adoption curve?
Sri Panyam 00:48:28 Right.
Amey Desai 00:48:29 And that's, I think, the hard part right now, and which is why I think there's a lot of resistance from the AI engineering side: no, I want people to use my tools, right, even if it comes at a little bit of risk. Which is a little bit shortsighted. That's where I think the challenge is largely going to be.
Sri Panyam 00:48:45 I guess you don't want the AI user reaching for 2FA, two-factor authentication, every time there's a new completion, right?
Amey Desai 00:48:51 Correct. Even if you take the example of someone like Zoom, right: they just went on the Mac and side-stepped all of the security issues, so to say, with a Mac app. They were able to get a lot of users, while Google Hangouts and Microsoft Teams had all of this proper authentication built in, but adoption became a thing. Now adoption is a thing for them, but it was not when Zoom just went up. Having said that, I think there are certain problems that we definitely need to solve. Maybe let me talk about the specific ones, right? One is just OAuth tokens: token theft is going to become a thing, especially if you're using MCP servers, because all you're doing is APIs; either you're exchanging API keys or you're exchanging OAuth tokens. So if you set up the right tool, somebody can potentially abuse that tool, almost making a man-in-the-middle attack, to do token theft. And work is being put in there. The most basic things here are: we should encrypt tokens at rest, we should use short-lived credentials, we should rotate any kind of token regularly, and there's the isolating-servers part. So these are the things people are just adding to the specification right now, and then everybody will be able to implement them much more easily.
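The practices Desai lists, sketched with the `cryptography` package: tokens encrypted at rest and given a short time-to-live so stolen material expires quickly. Key management and rotation are out of scope for the sketch.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a KMS and rotated
f = Fernet(key)

stored = f.encrypt(b"oauth-access-token")  # encrypted at rest
token = f.decrypt(stored, ttl=900)         # rejected if older than 15 minutes
```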
Sri Panyam 00:50:02 So in other words, catching up to standard security practices.
Amey Desai 00:50:05 Correct, that part is. But the prompt injection problem is kind of what I was trying to get at.
Sri Panyam 00:50:09 Right. Right.
Amey Desai 00:50:10 I don’t know how we’ll solve, but definitely there is a, or it’s an open green field space right now.
Sri Panyam 00:50:18 Right. Back in 2008 it was "there's an app for that." Now there's an AI for that.
Amey Desai 00:50:22 I personally hope there is not an AI for prompt injection, because SQL injection did not need AI to solve, right? I mean, it's still not completely solved, but at least it's better than where it was when it came out.
Sri Panyam 00:50:33 It needed SQL then.
Amey Desai 00:50:34 Right.
Sri Panyam 00:50:35 Within Nexla itself, what are some of the security challenges you are prioritizing and focusing on?
Amey Desai 00:50:39 I think, outside of the MCP world, we have run into two key problems. One is around chunk-level ACLs, especially in the whole vector database, RAG, deep-research style of problems: how do you have chunk-level ACLs for document databases, which are pretty much what row-level ACLs are for SQL databases? That's number one. And the number two problem is the security delegation problem, and here's what I mean by that. People who use Nexla, or any APIs, connectors, or data systems today, typically have a credential that opens up the pipes to be able to access data. That gives you, in my mind, purely authentication; it opens up the pipes. But then there is an authorization layer on top of it: I might open the credential to my SharePoint system, but within SharePoint, every file has its own access control over whether a user can access it or not. That is true for Salesforce objects, it's true across ServiceNow, it's true across every system you have today, because every system has an access control system under the hood. How do you do this correctly is question one, and how do you do this efficiently is question two. These are the two problem statements, maybe non-MCP but very much relevant, around getting the right data with the right access to MCP that we are trying to solve.
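Chunk-level ACLs map naturally to metadata filtering at retrieval time. A minimal sketch, assuming a hypothetical allowed_groups field on each stored chunk (this is illustrative, not Nexla's actual schema):

```python
# Sketch of chunk-level ACLs for RAG retrieval: each chunk carries the
# principals allowed to read it, and results are filtered per caller
# before anything reaches the LLM. The schema here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    score: float  # similarity score from the vector search
    allowed_groups: set[str] = field(default_factory=set)

def authorized_chunks(results: list[Chunk], user_groups: set[str], k: int = 5) -> list[Chunk]:
    """Drop any retrieved chunk the caller is not entitled to see,
    the document-store analogue of row-level security in SQL."""
    visible = [c for c in results if c.allowed_groups & user_groups]
    return sorted(visible, key=lambda c: c.score, reverse=True)[:k]
```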
Sri Panyam 00:52:08 Right. And how is that going? Can you share the progress?
Amey Desai 00:52:11 We have since built an access control system similar to Zanzibar, the system Google published a while back and uses internally. What we're doing right now is treating this as an integration problem too: instead of integrating only with the data system, we now also integrate with the authentication and authorization systems themselves. That's one piece of the puzzle. We need user consistency across the systems, meaning [email protected] and [email protected] need to be the same person. So we need to ensure that the same user is available here, that they sign up with two-factor, with the right email addresses, and that all of that is done correctly. And then we pull in, effectively, [email protected] and the access control information associated with that as metadata, which gets attached whenever they make a data pipeline.
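For context, Zanzibar's core idea is to store access as relation tuples of (object, relation, user) and answer check queries against that set. A toy sketch of the idea, not Nexla's implementation; the objects and users are made up:

```python
# Toy sketch of Zanzibar-style relation tuples. Real Zanzibar adds
# userset rewrites, groups, and consistency tokens on top of this.
Rel = tuple[str, str, str]  # (object, relation, user)

RELATIONS: set[Rel] = {
    ("sharepoint:doc/123", "viewer", "user:alice"),
    ("salesforce:account/9", "editor", "user:alice"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Is `user` related to `obj` by `relation`?"""
    return (obj, relation, user) in RELATIONS

# A data pipeline would call check() per object before exposing it:
assert check("sharepoint:doc/123", "viewer", "user:alice")
assert not check("sharepoint:doc/123", "viewer", "user:bob")
```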
Amey Desai 00:53:05 And this, I would say, has actually been one of our strengths: we are not just seeing data. A lot of people think of metadata as just a schema or some samples, but data pipelines and data engineering in general carry a lot of operational metadata. People think of it purely in terms of the crons and schedules of the world, but it's also this access control information: who has access to what, until when, and when does that change? Things of that nature are what we are solving right now, and we have actually already solved it for a bunch of our customers who are using RAG in this manner with us. It's not that we guarantee the most accurate or the best answers; what we are saying is that you will get secured, governed answers that only you have access to, and not anybody else.
Sri Panyam 00:53:51 So, as we wrap up, what advice do you have for enterprises?
Amey Desai 00:53:55 I think the biggest advice I have right now is that ROI, or return on investment, will by and large come. It's potentially not evident right now because of the dollar numbers going up, but it's very much evident in the productivity boost everybody is getting. Being an AI skeptic is a 2023 problem; you can't be an AI skeptic in 2025 or 2026, quite frankly. There might be a bubble burst in all of this, because a lot of people are solving a lot of problems and a lot of money is being thrown at it. But, and I don't want to say this is exactly what the internet did, the analogy is that the actual adoption of the internet made physical stores become digital stores, and that changed how our economy grew. The same thing is going to happen to SaaS as AI takes over.
Amey Desai 00:54:45 So that's one thing that's going to happen. Two: don't over-index on hallucination and prompt injection as the problems. Instead, focus on access control, governance, and security as the problems you need to solve, so that you can put this in the hands of your users or your employees in a safe and secured manner and still unlock a lot of value for them. And then maybe a third, a sheepish spiel: try our company out. At Nexla we have solved a bunch of these problems, including orchestration, security, and governance, for large enterprise customers across hundreds of their internal employees.
Sri Panyam 00:55:25 Sounds good. What about our listeners? Obviously our listeners are very excited about learning this. How would you recommend they get started or advance their learning?
Amey Desai 00:55:34 I think if you are an engineer who's not working in AI, think about spending at least 20% of your effort working with AI. And that does not mean using Cursor; I mean actually trying to solve a problem using AI and seeing where that goes. Here is a very prescriptive example. Everybody in software has to do releases. The fundamental flow is: push code to GitHub, start a build job, run integration tests, maybe deploy to a canary or staging pipeline, see it work well, and then deploy to production. There are about six or seven steps here. You can build an MCP tool for this in probably a few days to a week, if you have all the access tokens and so on. Build this out and see what productivity shift it makes for you.
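A sketch of such a release tool, using the FastMCP library from the show notes. The build and deploy shell commands here are placeholders to be swapped for a real CI/CD setup; only the FastMCP usage itself is taken from the library's documented API:

```python
# Sketch of a release-pipeline MCP server using FastMCP
# (github.com/jlowin/fastmcp). Shell commands are placeholders.
import subprocess
from fastmcp import FastMCP

mcp = FastMCP("release-pipeline")

def run(cmd: list[str]) -> str:
    """Run one pipeline step, raising on failure so the LLM sees the error."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

@mcp.tool
def push_and_build(branch: str) -> str:
    """Push a branch and kick off the CI build."""
    run(["git", "push", "origin", branch])
    return run(["gh", "workflow", "run", "build.yml", "--ref", branch])

@mcp.tool
def run_integration_tests() -> str:
    """Run the integration test suite."""
    return run(["make", "integration-tests"])

@mcp.tool
def deploy(environment: str) -> str:
    """Deploy to 'canary', 'staging', or 'production'."""
    return run(["make", "deploy", f"ENV={environment}"])

if __name__ == "__main__":
    mcp.run()  # stdio transport, so an MCP client can sequence the steps
```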
Amey Desai 00:56:22 Where instead of thinking through all of those steps, you just give a command, come back, and it is done. These are the places where you want to use AI to simplify your life as an engineer. As a non-engineer, a lot of people can probably use AI for day-to-day things like meeting summaries and meeting transcripts. But one thing I would recommend thinking through is this: say you have your manager, or your CEO. Take your MCP tool, or ChatGPT, or Claude, give it a representation of what that person's vision is, and see how you can help them manifest that vision. That can make a huge difference in how you work and get things done operationally on a day-to-day basis.
Sri Panyam 00:57:09 Thank you. So it sounds like embracing it and experimenting by scratching your own itch are key parts of it.
Amey Desai 00:57:13 Yeah, pretty much. That was a great summary.
Sri Panyam 00:57:16 No, no, you did the hard work. So before we close, any other advice?
Amey Desai 00:57:21 One last piece of advice: give things that look simple with AI a shot, much more than things that are not so simple and will take a lot of effort to get started. Simple tools solve simple problems easily, and that gets you gratification instantly.
Sri Panyam 00:57:38 Thank you. It’s been very, very insightful, engaging, and fun. Thank you for being on the show.
Amey Desai 00:57:44 Yep. Thank you.
Sri Panyam 00:57:45 And to all listeners, this is Sri Panyam from Software Engineering Radio. Thank you for listening.
[End of Audio]


