Llewellyn Falco

SE Radio 595: Llewellyn Falco on Approval Testing

Llewellyn Falco, creator of ApprovalTests, talks with SE Radio host Sam Taggart about testing code in general and the various types of testing that developers perform. Llewellyn elaborates on how approval tests can help test code at a higher level than traditional unit tests. They also discuss using approval tests to help get legacy code under test.


This episode is sponsored by Data Annotation Tech.

Show Notes

SE Radio Episodes

Other Resources


Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Sam Taggart 00:00:56 This is Sam Taggart for Software Engineering Radio. I’m here today with Llewellyn Falco. Llewellyn is an agile technical coach and internationally renowned speaker. He’s the creator of ApprovalTests, co-author of the Mob Programming Guidebook, and co-founder of [inaudible]. Llewellyn is here to talk to us about approval testing today. We have talked about testing on many previous episodes, such as 516 with Brian Okken on pytest and 431 with Ken Youens-Clark on using unit testing for teaching. And if you dig way back in the archives, there’s Kent Beck talking about the history of unit testing in episode 167. So welcome, Llewellyn. I’d like to start by discussing the general testing landscape: how developers are testing their code today and how approval testing fits into that whole landscape.

Llewellyn Falco 00:01:44 Well, so it’s a large landscape, right? And I tend to straddle two very extreme sides of it. So as you mentioned, I’m the creator of ApprovalTests, and so we do a lot of work in the open source world, usually on ApprovalTests, although I work on some other projects as well. And in that world, testing works really well, right? We’re doing test-first development. I have a standard Python mob that meets every Sunday, and we usually do about two hours of work. And in those two hours we usually release a feature, and I mean finish and release, right? So every two hours we are pushing a new version of the software out to PyPI. Likewise, I have a guy I meet up with, Lars, for the Java approvals, and we pair, and we can usually do a feature in two hours as well, right?

Llewellyn Falco 00:02:32 And then again that gets released to Maven immediately. So there, the tests are great, the code is easy to work with, the whole DevOps pipeline is in place, and these things support each other, right? I wouldn’t feel safe to release so quickly if I didn’t have my tests there really protecting me and telling me, hey, it’s okay to do this. That’s not the only thing, right? There’s a whole other section of DevOps that does that. But also, you know, in that world we have Dependabot, and if somebody updates a dependency, we detect it immediately and automatically, and then we get the pull request, and because we have good tests, our tests will run, and if our tests pass, it will automatically merge. So like when Log4j came out, we didn’t even notice, right? They released the patch; our system detected it, upgraded, and released without us knowing.

Llewellyn Falco 00:03:26 But the other side of that is the clients that I work with, right? So I’m a technical coach, and what that means is companies bring me in and we sit with their programmers and we program together and we learn to program better. But the thing is, the companies that bring me in are never the companies that are doing really, really well, right? They’re always companies that are struggling. It’s really unfortunate. If you look in the world of sports, the athletes that get the most coaching are the people who are at the top of their field, right? Roger Federer is amazing and he has like 12 coaches, right? It’s just a whole ecosystem taking the people who are the best and making them even better. But very often that doesn’t happen in software. If you’re doing okay or you’re doing good, we’re like, okay, we’ll leave you alone. And it’s when they’re struggling that we say, okay, now we’ll send in help.

Sam Taggart 00:04:17 Yeah, I had a very interesting conversation with a friend of mine. I was complaining about a specific framework and how all the projects I got that were written in that framework were horribly written and he made the comment, well if they were well written they wouldn’t have called you. So I thought that was kind of funny.

Llewellyn Falco 00:04:31 Exactly right. And so there, I’m seeing the opposite side. And on that side, almost universally everybody has tests. They might not have tests in a specific project, but they definitely have tests. If you were to zoom back out, say the company has like a hundred projects, probably 50 or 70 of them have tests of some sort. A client I was at earlier this year was using SonarQube, and SonarQube does a lot of code metrics and will gate the check-ins, and it would not allow you to check in new code if it didn’t have the coverage. But a lot of their code was not designed in a way that the tests were really helpful for the engineers. And so we wrote some code and we split it up and we tested it and we knew that it worked, and we used a thing called executable command tests, which are really powerful tests, but they don’t really increase your code coverage, because the idea is that they are acceptance-level tests, right? The nice thing about acceptance-level tests is they’re system-wide, and they do this great thing: they give you a lot of assurance that the thing works. But they’re very hard to set up and conduct and keep consistent.

Sam Taggart 00:05:51 I was just gonna interject and ask about code coverage because I wanna make sure that our audience understands exactly what we’re talking about. So when you say code coverage, what do you mean?

Llewellyn Falco 00:05:59 Right. So code coverage is a misleading term. It is the percentage of lines that are executed when you run a test. And unfortunately the word implies that these lines are covered or protected in some sort of way. That’s not the case. You could write a test that, you know, runs something, and then an exception gets thrown, and it catches the exception and swallows it and doesn’t assert or verify anything. And that would be great test coverage, right? But it doesn’t in any way protect you. But it’s an easy thing to measure, and measurements and metrics are really important to managers, and so a lot of this stuff gets done. And so we would write these tests, and literally we would call them tests to increase code coverage, right? And they were horrible tests, right? They didn’t protect us in any way, but what they allowed us to do is commit our code, because we knew everything underneath it was covered and was safe, and this was the part that we needed to not cover, right?
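The distinction Llewellyn draws here can be made concrete with a short sketch (the function and test names below are invented for illustration): a test can execute every line of a function, earning 100% coverage, while asserting nothing and therefore protecting nothing.

```python
# A hedged sketch, not from any real codebase: `parse_port` and both tests
# are invented to illustrate coverage-without-protection.

def parse_port(value):
    try:
        return int(value)
    except ValueError:
        return None  # a regression here would slip past the first test

def test_to_increase_code_coverage():
    # Executes every line of parse_port (so coverage reports 100%),
    # but swallows the results and verifies nothing.
    parse_port("8080")
    parse_port("not-a-number")

def test_that_actually_protects():
    # Same lines executed, but now a regression would fail the test.
    assert parse_port("8080") == 8080
    assert parse_port("not-a-number") is None

test_to_increase_code_coverage()
test_that_actually_protects()
```

Both tests produce identical coverage numbers; only the second one locks behavior.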

Llewellyn Falco 00:07:03 But we couldn’t not cover it, because SonarQube would complain, and we were at least honest about it, right? We were like, okay, SonarQube is forcing us to do this; let’s call these tests to increase code coverage. But then when I would move and work with other teams, I would see tests that were not called that, that seemed to have important names like tests to verify the system is calling the subsystems correctly. But they wouldn’t, right? All they were doing was increasing the test coverage too. And in those systems where the names were less honest, I think the developers thought that’s what testing was, right? A test is something you do to fulfill a safety checkbox when you are writing code. There’s more and more of that. Okay, I enjoy this podcast, and I know that everyone here who’s listening is mainly a developer, right?

Llewellyn Falco 00:07:54 Which is unfortunate, because the problem I’m about to talk about usually is not something that developers create. It’s something that developers are part of, but it’s sort of outside of their part. But a lot of the developers who continue to actually improve themselves through their career grow up and become managers at some point. And so if I’m talking to someone out there who in the future becomes a manager, I hope this part resonates and you remember it. There’s this huge problem I’m seeing of shared and split pain, right? Most of the time the people who design the system are trying to separate things to make them sort of optimal. But the moment you split pain, you cause problems. And so, to leave code for a moment, I have a friend Rodney, who unfortunately got diagnosed with cancer last October.

Llewelyn Falco 00:08:41 Like it’s horrible, right? It’s stage four cancer. But a couple months ago I went over to see him and I had really hurt my knee. I enjoy swing dancing and I’d maybe done a little too much of it and, and my knee was just really hurting. Now Rodney would swap me his cancer for my hurt knee like any day of the week. But it was really hard for me to empathize on his cancer while my knee was hurting, right? Because it’s my knee, it’s his cancer, but it’s my knee and I’m always gonna preference my pain. And I see this in companies a lot where it’s like, okay, what is the thing that I need to do so I can say I’m doing my job right? And maybe as a manager it’s like I need to make sure that I can say, look at the test coverage that we have and the developers are saying I need to check that box and so I can commit the code.

Llewellyn Falco 00:09:30 And then what gets lost in that separation is that the tests are supposed to be helping. Like in my open source, I am my boss, right? I’m not writing tests for somebody else; I’m writing tests ’cause they make my life faster and easier. I’m solving my pain. And the moment that gets separated, and the pain is created one place and solved somewhere else, that gets us into a lot of trouble. I see this in DevOps, right? There used to be dev and ops, and the dev would write bugs, but ops would have to deal with them on the weekend, and people were like, oh, we should take that split pain and put it together, and we’ll create this thing called DevOps. And now the developers who write the bugs are also the people who are responsible for deploying, and now that pain is shared, we start solving it.

Llewellyn Falco 00:10:15 And then companies are like, oh, this DevOps thing seems really good, we should make a DevOps team. And now you’ve separated the pain again, right? So a lot of times I see teams that are writing tests because they’re supposed to, but they’re not doing it to solve their own pain. And if you’re writing tests not to solve your own pain, those tests are not really helping you. It’s not that you’re lazy or have bad intent; it’s just the dynamics of a system. You should be solving pain that you feel. And there’s a lot of advocates for test-first or test-driven development, and all those people who are really advocating it, it’s because they’re using tests to solve their pain. And I see that, but I don’t see it that often, because I’m a consultant, so most of the time I see places that are writing tests to solve someone else’s pain, and those tests tend to not be very good.

Sam Taggart 00:11:07 So to bring us back around to approval testing, how does approval testing help you write tests that solve your pain?

Llewellyn Falco 00:11:13 Well, okay, so for my pain. So I’m gonna take this from a good place, right? So in testing in general, you’ll see three different things. The most common thing, I think, is arrange, act, assert, right? That came from the original unit testing world. And the idea here was you arrange some stuff, you act on some stuff, and then you write this assert, and the asserts were always sort of: check that one is the value, or check that the name is Sam, or, you know, some very primitive data. Spot checking. And then BDD came along. This was from Dan North, and he was like, the words are very important, and I don’t like this arrange, act, assert, so we’re gonna use given, when, then. And it maps directly to arrange, act, assert, and it still has the same idea: we’re gonna spot check these little places.

Llewellyn Falco 00:12:02 With ApprovalTests, it moves to just: you do something and then you verify, right? So I’m gonna do something, that’s the action of the test, and then I need to verify the result. And honestly, if it is just a simple number or a very small string, I still use asserts; they’re great. But very often it’s not. Very often it’s like, I wanna set something up and then I want to validate this customer, or I wanna validate this transaction, or I wanna validate this received call, right? Like this JSON object. And so ApprovalTests allows you to validate more complicated things, and it does it by printing the result. So if it’s JSON, you’ll usually just sort of pretty-print it. Sometimes it will filter. The printing is important, right? Maybe you have stuff like timestamps in there, right? And those are not gonna be consistent.

Llewellyn Falco 00:12:54 And so you won’t print those, or you’ll filter them out. Or GUIDs are another thing like that, right? So you print the thing you care about, and then you save it to a file, and you look at the file and you say, oh, this is what I want. And then you save it, and those files actually get saved. So you’ll hear this go by approval testing. I’ve heard it called golden master testing, but actually the thing I’ve heard the most now is snapshot testing, right? And that’s because Jest is a very popular framework and it uses a similar type of mechanism.
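The mechanism Llewellyn describes can be sketched in a few lines of Python. This is an illustration of the idea, not the actual ApprovalTests API: compare the printed result against a saved ".approved.txt" file, and on a mismatch write a ".received.txt" file for the developer to inspect.

```python
import os

# A minimal, hypothetical sketch of the approval/snapshot mechanism.
# Real frameworks (ApprovalTests, Jest) add naming conventions, reporters,
# and scrubbers on top of this core idea.

def verify(name, received_text, directory="."):
    approved_path = os.path.join(directory, f"{name}.approved.txt")
    received_path = os.path.join(directory, f"{name}.received.txt")
    approved = ""
    if os.path.exists(approved_path):
        with open(approved_path) as f:
            approved = f.read()
    if received_text == approved:
        return True                      # snapshot matches: the test passes silently
    with open(received_path, "w") as f:  # mismatch: save the new output for review
        f.write(received_text)
    return False
```

Approving a change is then just copying the received file over the approved file, which is exactly the workflow described later in the conversation.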

Sam Taggart 00:13:26 So basically you perform some action and you take a snapshot of the data that that action returns, whether it’s an object or some hierarchy or a collection or whatever it happens to be. And then what do you do with that data once you get it? You store it to a file?

Llewellyn Falco 00:13:42 Store it to a file.

Sam Taggart 00:13:44 And then the next time you run the test, what happens?

Llewellyn Falco 00:13:48 So one of two things: either it matches or it doesn’t match. If it matches, nothing happens; your test passes, there are no interactions, everything is fine. But if it doesn’t pass, now we need to ask some questions, like why doesn’t it pass? And so depending on how you’ve printed that object, it would be nice to get some assistance on seeing that, right? So by default Approvals has a thing called Reporters. And Reporters come from a tool I used to use a lot when I was on Windows called SlickRun. So SlickRun was this little runner that would just sit at the bottom and it would launch programs, right? And it was really, really powerful. And to me, growing up in a Windows ecosystem, SlickRun was the thing that let me see the power of combining different pieces of software.

Llewellyn Falco 00:14:43 I think people who grew up in Linux got that on the command line, right? Because it’s really, really common to do something and pipe it into another command, and another command, right? That’s just how Linux works. But in Windows, it doesn’t work that way. But SlickRun was the thing that made me realize that. And so in Approvals it’s the same thing. So let’s say that I’m writing a piece of HTML, right? I could hand it off to a Reporter that opens up a diff tool, maybe Beyond Compare, and it shows me here’s the old HTML and here’s the new HTML, and then it will zoom in, because that’s what Beyond Compare does, and show, hey, these three lines changed, right? Those values that used to be here are now here. And then I can look at that and say, hey, that’s cool, I like that. But maybe that’s not what I’m interested in.

Llewellyn Falco 00:15:28 Maybe I’m interested in how this page renders, right? So maybe instead I’ll report by launching it out to Chrome, and it will actually render the webpage, and I can say, oh yeah, that looks good. Or you can do a more complicated Reporter, right? You could have it run out to a headless browser and take snapshots, so it ends up with PNGs of how the website renders in three different form factors: on a browser, on a phone, or on a tablet. And now I might wanna pull it up in an image diff comparison, because when I’m looking at two different images, it can be very hard. I don’t know if you ever played those games when you were a kid where you had two pictures and it’s like, spot the six differences. It’s really hard to do as a human, but for a computer it’s really easy.

Llewellyn Falco 00:16:11 You just use an image diff tool. And so my point being that depending on how I want to understand what has changed, I might use a different tool, and ApprovalTests is set up to open that tool and help me understand it. Once I understand it, I still have a choice, because maybe I’m fixing a bug, in which case the behavior should change, right? And so it failed because the behavior changed; tests lock behavior. So the old behavior is no longer in play, but the new behavior is what I actually want. So now that I understand the change, maybe I’m like, okay, that’s great, let me fix the approval file, which is just moving it over, just copying the file over. Or maybe I’ve introduced a bug, maybe I’ve unintentionally changed something, and now I need to go back to my software and fix that. Right?
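The "spot the differences" point can be sketched in a few lines (this is an invented stand-in, not a real image-diff tool, which would work on PNG files): represent each rendered snapshot as a grid of pixels and let the computer find every position that changed.

```python
# A hedged sketch of the image-diff idea: the grids and values below are
# invented for illustration. Real tools diff decoded PNG pixel data.

def pixel_differences(old, new):
    """Return the (row, col) positions where two same-sized grids differ."""
    return [
        (r, c)
        for r, row in enumerate(old)
        for c, pixel in enumerate(row)
        if new[r][c] != pixel
    ]

old = [["w", "w"], ["w", "k"]]
new = [["w", "w"], ["k", "k"]]
print(pixel_differences(old, new))  # the single changed pixel is at row 1, col 0
```

What is tedious and error-prone for a human eye is a trivial comparison for the machine, which is why the reporter can hand the two snapshots to an image-diff tool.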

Sam Taggart 00:17:03 Now I have a question. You are checking in those approval files into your source code control, correct?

Llewelyn Falco 00:17:09 Yeah. Absolutely.

Sam Taggart 00:17:10 So as you change things, if you accidentally changed more than you wanted, you could go back at a later point and look at those and somehow see that.

Llewellyn Falco 00:17:16 Yeah. So you can do two things. So let’s say you change some stuff and you accidentally make a piece of code hidden that should have been shown, right? Or you make the color wrong. You didn’t notice and you approved the file anyways. So you can actually go through your source control and say, that’s where it happened. You can actually see the change. It’s almost like using git bisect, right? You can just do it through history. But the other thing that you can do is, let’s say that you’re doing some UI work, right? We’ve also seen this happen a lot in JSON. But in both of these things, the pictures of the UI and the way that the JSON is formatted are simple enough that product owners can understand those images, right?

Llewellyn Falco 00:18:07 Or those files. And so what we’ll see is product owners going into version control and literally opening up the image and drawing a red X over the thing they want to fix, and then they’ll just submit it, and all of a sudden your tests break, right? And now you have a broken test that is a feature request. And then you’ll run it and be like, oh wait, it’s hand-drawn, right? So that’s never gonna be the thing that you approve, but it is more than enough information for you to say, okay, I know what I need to fix. And then when you fix it, you just move the fixed copy over, and that is a very nice way of communicating intent.

Sam Taggart 00:18:49 So question then. My understanding of approval testing was that it was basically taking whatever data you had, flattening it to a string, and writing it to a text file. But does it also work with binary files?

Llewelyn Falco 00:18:59 That is the predominant way that I do it.

Sam Taggart 00:19:01 But you could also deal with a binary file. Well, okay, so you could run it through your printer, and your printer renders the HTML, generates your images and stores them somewhere, and then that works too.

Llewellyn Falco 00:19:12 Yeah. And in fact you can go a step further. So, I don’t know if you’re familiar with a code retreat. Or actually I should ask: are you familiar with code retreats?

Sam Taggart 00:19:20 I know what they are. I’ve never been to one.

Llewellyn Falco 00:19:22 Okay, so for the audience, if you haven’t: this is a thing Corey Haines started around 2009, and it’s a group of people getting together, in person usually, although that has changed a bit during the pandemic, and they spend the day doing a single exercise over and over with different languages, different people, and different constraints. I think of it like a yoga meditation retreat, but for code, right? It’s a really nice thing. But often they’ll do this thing called Game of Life. Conway’s Game of Life is sort of this simulation: you have a board of cells, and the cells come alive or die based on different rules, and it has some really neat behaviors. And for those we will very often use ApprovalTests. And in the beginning we’ll usually start with text, and we’ll be like, hey, let’s get a board and let’s get this cell and let’s see how it changes.

Llewellyn Falco 00:20:13 It is just a text file, but it’s like a storyboard, right? So it’s like, here’s what the board looks like at frame one, and here’s what the board looks like at frame two. But as we start to get more advanced with this, we’ll actually turn it into graphics, and then the approved file is an animated GIF. And you can actually see, here’s a board, and here’s this very complicated sequence of a hundred frames of the board growing and dying. And that would be really complicated to do. In normal unit testing it would be almost impossible, right? Show me how this blob transforms across a hundred-by-a-hundred grid. That’s just almost impossible to do with asserts. It’s possible to do if you stream it to a text file, but it’s still a lot to understand. You have to move through the things.

Llewellyn Falco 00:21:00 But in an animated GIF, it’s super easy to understand, and it’s a two-line test, right? Because the do is: set up this board, and then the verify is just: verify this board for a hundred sequences. It’s two lines of code. And so because the tests are easy to write, I write them. And because the tests give me insight into what’s going on, I keep them, and they help me, right? And there’s that balance of how hard a thing is to write versus how much value it gives me. Let’s use numbers for value; I’m not sure what a value unit is, but let’s say it gives me a hundred units of value, but it takes 150 units to write. I’m not gonna do that, because I’m lazy, and now it’s causing me pain; it’s not solving my pain. But if it only costs me two and it gives me a hundred, then I’m doing it. So it doesn’t matter how much protection it’s giving me, right? It matters how much pain it’s saving me from.
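The "two-line test" idea can be sketched as follows. The `step` and `storyboard` helpers below are invented for illustration; the episode's fancier version renders the frames as an animated GIF, but a plain-text storyboard shows the same shape: the do is setting up the board, and the verify is snapshotting N frames.

```python
from collections import Counter

# A hedged sketch of a Game of Life "storyboard" approval test.

def step(live):
    """One Game of Life generation over a set of live (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives with exactly 3 neighbors, or 2 if it was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def storyboard(live, frames, size=5):
    """Render `frames` generations as a text storyboard, one grid per frame."""
    out = []
    for i in range(frames):
        out.append(f"frame {i}:")
        for r in range(size):
            out.append("".join("#" if (r, c) in live else "." for c in range(size)))
        live = step(live)
    return "\n".join(out)

blinker = {(2, 1), (2, 2), (2, 3)}    # the "do": set up this board
print(storyboard(blinker, frames=2))  # the "verify": snapshot the frames
```

The whole test is those last two lines; the approval file is the storyboard, which a human can read and approve at a glance.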

Sam Taggart 00:22:00 As I’m listening to this, I’m really trying to contrast it with my traditional unit testing mentality. And it seems like you are taking a step back, like unit testing is very detailed and focused and it’s like I run this function, I get this specific output and I’m verifying one specific value or one specific data type out of the return value of the function. Yeah. It’s kind of taking almost that behavior type approach of the code does this thing and then I’m just making sure that it keeps doing the same thing.

Llewellyn Falco 00:22:29 And a lot of times, if you go back to manual testing: I’m a huge fan of exploratory testing, but I’m not a fan of traditional manual testing, manual testing for regression. But exploratory testing, manually testing code much more as a hacker to gain insight, like, oh, here’s this thing I didn’t know before, I’m a huge fan of that. And I actually think, more and more, as we get AI involved in the code that we generate, the thing that’s going to become really valuable and differentiate programmers is not their programming skills. It’s gonna be their testing skills. It’s gonna be their ability to say, hey, ChatGPT, write me this program, and then my ability to actually verify that that program is what I want. Like, okay, it wrote this thing; is it what I want?

Llewellyn Falco 00:23:21 And I don’t know if you ever played with Prolog at all, or any logic programming, but I think ChatGPT is moving us in that direction. And the whole idea of this is: you are writing the constraints, and then the software is generating the code. And that is what TDD is, right? It’s like, here’s my tests; write me the system that solves that. So I think the more you’ve learned to craft constraints that you can then validate, the better code you’re gonna end up with. And if we stop writing code, we’re still gonna be writing the constraints. Maybe we’re writing them as unit tests. I doubt that. I think we’ll be writing them in prompts. But who knows. Predicting the future is hard.

Llewellyn Falco 00:24:44 Okay, so let’s take this more concretely to an example. I was with a fantastic developer, Lata, and she was showing me some tests, and it’s for a messaging system, right? So people call in for customer care, and this system sort of interacts with them and gets them to the right operator. There’s two things. One, the tests were very long, very often 30, 40 lines, right? And I couldn’t figure out, like, you’d look at the test and you’d be like, what is this test doing? And then the other thing is there’s a lot of duplication. I am particularly good at noticing duplication, so the moment I see lots of duplication, I’m like, this is a good place for ApprovalTests. And so as we started to clean this up, we realized that the essence of the test was this conversation.

Llewellyn Falco 00:25:34 So you call in and you’re like, hey, I would like to pay my bill. And they’d be like, oh, are you a current customer? And the person would be like, yes I am. And they’re like, great, can you give me your customer number? And they’re like, yeah, here you are. And that’s the test. Which really meant, when we reduced this into an ApprovalTest, it was: verify conversation: hello, I’d like to pay my bill; yes, I’m a customer; here’s my customer number. That’s the input; that’s the test. And then the output was this file that sort of showed that conversation: user says this, chatbot responds with this, user says this, chatbot responds with this, right? And it shows that conversation. So the output now tells me the story, and the test is really easy. It’s two lines of code, or one line of code.

Llewellyn Falco 00:26:24 It’s verify conversation, and then here’s the string of parameters that I am giving. And so 30 lines reduce to one line, and there are multiple tests, right? Multiple 30-line tests. So now instead of pages and pages of hundreds of lines of tests, we have 20 lines of tests. And you can very easily scan all 20 tests and be like, that is the conversation that’s causing me pain, or, I need to write a new test: here’s the conversation that’s causing me pain. And then when you look at the approval file that’s associated with the test, you can see the conversation, and you can see, yeah, that’s what I want, that’s how it should go. Or, that’s not it; here’s where it went wrong, right? Because it’s a flow of conversation.
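The shape of that "verify conversation" test can be sketched as follows. Everything here is invented for illustration (the canned chatbot stands in for the real routing system): the one-line test's input is just the user's lines, and the approval file is the printed transcript of the whole exchange.

```python
# A hedged sketch; `chatbot_reply` and its canned replies are hypothetical
# stand-ins for the messaging system described in the episode.

def chatbot_reply(user_line):
    canned = {
        "Hello, I'd like to pay my bill.": "Are you a current customer?",
        "Yes, I'm a customer.": "Great, can you give me your customer number?",
        "Here's my customer number: 1234.": "Thank you, routing you to billing.",
    }
    return canned.get(user_line, "Sorry, I didn't understand that.")

def conversation_transcript(*user_lines):
    """Drive the chatbot with the user's lines and print the whole exchange."""
    lines = []
    for user_line in user_lines:
        lines.append(f"user:    {user_line}")
        lines.append(f"chatbot: {chatbot_reply(user_line)}")
    return "\n".join(lines)

# The entire test is one call; the output is the story you read and approve.
print(conversation_transcript(
    "Hello, I'd like to pay my bill.",
    "Yes, I'm a customer.",
    "Here's my customer number: 1234.",
))
```

The 30-line arrange/act/assert test collapses into one call whose output file reads like the conversation itself.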

Sam Taggart 00:27:08 My immediate thought with that is, is the chatbot deterministic enough that it always outputs the same thing? Like it always says hello the same way? Because I can see that potentially causing problems.

Llewellyn Falco 00:27:18 Yes, it is. However, your point is well taken, right? Like, how do you test something that’s non-deterministic?

Sam Taggart 00:27:27 Yes, that’s a very good question. I think that can lead us back around. You had mentioned printers earlier; maybe we can delve into that a little bit more.

Llewellyn Falco 00:27:33 Well, so the printer is definitely one of the ways, right? So it could be the chatbot, or let’s say each of the chats has the time associated with it, right? Well, that’s gonna be a disaster. So you could filter that out, right?

Sam Taggart 00:27:46 When you filter that out, do you replace it with a token that just says date time, or do you just replace it with nothing? Or how do you typically do that?

Llewellyn Falco 00:27:54 So it is actually up to you. The filters are very robust. But the standard one, the out-of-the-box one, actually replaces it with a token, and a slightly different token than you might expect. So it will say date and then the number of the date: date one, date two, date three. And the reason for that is easier to think about with GUIDs, so let’s use GUIDs. So let’s say that you have a piece of JSON and it has a couple of GUIDs in there. Let’s say it has five GUIDs, right? I don’t just wanna see guid, guid, guid, guid. Because maybe of those five GUIDs, three of them are all pointing to one, right?

Sam Taggart 00:28:31 Ah, yeah. Yeah.

Llewelyn Falco 00:28:32 And the fourth one is pointing to something else, right?

Sam Taggart 00:28:35 So you wanna know where it’s pointing.

Llewellyn Falco 00:28:37 So what it will do is it’ll say: okay, I found a GUID, it has this value; that would be guid one. I found another GUID; oh, it has the same value; that would be guid one, too. I found a new GUID; oh, that has a different value; that’s guid two. And that way you can actually see the relationship between them. And it’ll do the same thing with dates, right? So if you have the same date in seven different places, they’ll all be replaced by the token date one; if you have seven different dates, you’ll see seven different tokens. And that turns out to be really useful. So that does it on the filter side, and we use that a lot. But the other thing we do is we’ll say, let’s restructure the code. So the thing that I notice the most about people who do test-driven development versus people who do test-after is that the code is more testable, right?
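The numbered-token scrubbing just described can be sketched in a few lines. This is an illustration of the idea, not the actual ApprovalTests scrubber API: each distinct GUID gets its own stable token, so you can still see which fields point at the same value after scrubbing.

```python
import re

# A hedged sketch of numbered-token GUID scrubbing; token names are invented.

GUID_PATTERN = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def scrub_guids(text):
    seen = {}
    def replace(match):
        guid = match.group(0)
        if guid not in seen:
            seen[guid] = f"guid_{len(seen) + 1}"  # first distinct GUID -> guid_1
        return seen[guid]
    return GUID_PATTERN.sub(replace, text)

sample = (
    "order=11111111-2222-3333-4444-555555555555 "
    "customer=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee "
    "parent=11111111-2222-3333-4444-555555555555"
)
print(scrub_guids(sample))
# order and parent share guid_1, so the relationship survives the scrub
```

The output is deterministic run to run, yet still shows that two fields held the same GUID while the third held a different one.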

Llewellyn Falco 00:29:26 Because if you only have this block of code, then it’s like, ugh, I just need to get it tested, and I will do these very complicated things so I can test it. But if you’re like, I need to test this code before I write it, then I’ll be like, maybe I restructure my code, I cut it so it’s easier to test. What it means in practice is often a lot of little methods that call other methods, right? So maybe I have a method that’s like generate conversation, and it will call generate conversation, pass it all the same arguments, but also pass it today’s date. Yeah. And then I can call it and pass it the date that I want, and now the date is consistent. If we’re doing stuff like Stable Diffusion testing, we’ll very often lock the generative seed, right?

Llewellyn Falco 00:30:13 So if you lock the seed, you can get a consistent generation, but by default it doesn’t. And so sometimes code is nicely structured so you can just say, okay, here’s how I want the random number generator to be. And sometimes there are like 20 different places where it asks for a random generator, and you’d have to touch all 20 of those places, right? So the more that you start doing this, the more you make your code easy to test. And actually I have this whole saying: I don’t want my developers to get good at testing hard code. I want my developers to get good at making hard code easy to test.
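The restructuring Llewellyn describes (an outer method supplying today's date and the random generator, an inner method taking them as arguments) can be sketched as follows; the function names and message are invented for illustration.

```python
import datetime
import random

# A hedged sketch of injecting the date and the random generator so a test
# can pin both and get deterministic output.

def make_greeting(name, today, rng):
    # Inner method: everything non-deterministic arrives as an argument.
    suggestion = rng.choice(["billing", "support", "sales"])
    return f"{today.isoformat()}: Hi {name}, try our {suggestion} line."

def make_greeting_now(name):
    # Production entry point: real date, unseeded randomness.
    return make_greeting(name, datetime.date.today(), random.Random())

# In a test, pin the date and the seed so the result never changes:
fixed = make_greeting("Sam", datetime.date(2024, 1, 1), random.Random(42))
print(fixed)
```

Because the test calls the inner method with a fixed date and a seeded generator, the approved file stays stable run after run, while production still uses the real clock and real randomness.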

Sam Taggart 00:30:48 Yeah. The analogy I always draw to this: so I’m an electrical engineer, and this is put on by the IEEE, so they’re all electrical engineers. Like, if you build a circuit board, right? The stuff on the outer layers is really easy to access when you’re going to test it, ’cause you can touch it with a multimeter, you can read it. But if something’s hidden in the middle, then it basically doesn’t exist; you can’t directly access it. And so part of it is thinking ahead and adding in those test points. I think of it the same way: instead of having one method call, you’ve got one method call that calls a bunch of other methods, and you can kind of pick out the pieces that you want to test.

Llewelyn Falco 00:31:21 Well, and there’s two parts of that. One is like exposing it so that it’s easy to get to. Right? And the other is like exposing information about it, right? So like very often I’ll see code that has a lot of setters but no getters, right? I’m going way back here, but when we had flip phones and we used to write things in J2ME, right? There were ways that you could set things on the screen but you could never ask the screen what it had.

Llewelyn Falco 00:31:49 And that made it really hard to test. Like, I just put a red pixel here, is it actually red? Or I just changed the font, what is the font? But you couldn’t ask for the font. You could set the font, but you couldn’t request it. So you can tell they didn’t test that when they wrote it, right? Because if they did, I could have solved my pain, right? But they didn’t even solve their pain. So very often the ability to ask your object, hey, what is your current state? Right? That seems like a no-brainer, but often it’s not there. And then with ApprovalTests, we also usually add, let me print your current state, right? Which for most objects is a toString, because a remarkably large number of objects don’t have toStrings. Or they have the default toString, which is utter garbage in most languages, and especially in Java.

Llewelyn Falco 00:32:35 But in most languages the default toString is pretty bad. It would be nice if the default toString was JSON. Yeah, I would love the default toString of my objects to be JSON. That would be really helpful. And these will show up sometimes in your logs and stuff, where you’re like, oh, it’s really hard to understand that. One of the things that I see show up with ApprovalTests is I very often will make printers so that I can test my objects, but then sometimes I’ll need to get to a state. So let’s go back to that Game of Life, right? Like, I might make it so I can play with stuff and then get a state that is useful. I would do this. So we mentioned exploratory testing. We did a version, like if you look at Conway’s Game of Life online, there’s like a million examples of really cool stuff.
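The getter-plus-printable-state idea could be sketched like this. Everything here is invented for illustration (a toy `Screen` object standing in for the J2ME screens he mentions); the point is the getter that the old API lacked, and a JSON-ish `__str__` so tests and logs show real state.

```python
import json

class Screen:
    """Toy UI object whose state can be both asked for and printed."""

    def __init__(self, font="Courier"):
        self.font = font
        self.pixels = {}

    def set_pixel(self, x, y, color):
        self.pixels[(x, y)] = color

    # The getter the flip-phone API never had: ask the screen what it has.
    def get_pixel(self, x, y):
        return self.pixels.get((x, y), "black")

    def __str__(self):
        # A JSON-ish toString so the object's state is visible in tests and logs.
        return json.dumps(
            {"font": self.font,
             "pixels": {f"({x},{y})": c
                        for (x, y), c in sorted(self.pixels.items())}},
            sort_keys=True)
```

With a printable state like this, an approval-style test can verify the whole object in one line instead of asserting field by field.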

Llewelyn Falco 00:33:18 But we were doing a version of it where instead of having square cells, they had hexagonal cells. And they were like, oh, we would like some cool situations for hexagonal Conway, but there’s nothing online for that, right? So we’re like, okay, well how do we find interesting things? And so we wrote these tests, and the way they would work is they would randomly create a board, right? Here’s a hundred by a hundred board, and I’m gonna put 10 cells randomly on it, and then I’m gonna run it, and then I’m gonna look for some property, something that I call interesting. So maybe interesting is, a thousand turns later there’s still life on the board that didn’t die out. Or maybe the thing is every 10 turns the board repeats, right? Or every 12 turns; 12 is a nice number, right? Because it could repeat by 1, 2, 3, 4, 6.
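The search he describes, generate random boards, run them, and keep the ones with an "interesting" property, can be sketched generically. This is not the actual project code; the step function and the property are parameters, and all names are invented. A board is modeled as a frozen set of live cell coordinates.

```python
import random

def random_board(rng, size=100, live=10):
    # Scatter `live` cells randomly on a size-by-size board.
    cells = set()
    while len(cells) < live:
        cells.add((rng.randrange(size), rng.randrange(size)))
    return frozenset(cells)

def repeats_within(n):
    # Property: the board returns to its starting configuration within n steps.
    def check(start, step):
        board = start
        for _ in range(n):
            board = step(board)
            if board == start:
                return True
        return False
    return check

def find_interesting(step, is_interesting, tries=1000, seed=0):
    # Fixed seed so the search itself is reproducible run to run.
    rng = random.Random(seed)
    return [b for b in (random_board(rng) for _ in range(tries))
            if is_interesting(b, step)]
```

The `step` function (square or hexagonal Game of Life rules) is plugged in from outside, so the same search harness works for either topology.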

Llewelyn Falco 00:34:09 Yeah. Right? So we’re like, oh, is this a thing that is repeating itself? So it would just generate like hundreds of thousands of these and then look for this trait. And if it found it, it would print out, okay, here’s the board that got us to this. Well, then I need to take that printout and turn it back into code. This is a concept that shows up in Python, which I really like. So Python has the concept of a toString, but it also has something called repr. And what repr is, is the string of Python that’s needed to reconstruct this object. So like, let me generate the Python code; take this, paste it into a Python shell, and now I’ve recreated the object. And so this cycle ends up showing up in approval testing a lot, where it’s like, I can use ApprovalTests to verify my state, but then I can capture that state as starting points for other tests.
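The repr round trip he is describing looks like this on a minimal, invented `Board` class: the `__repr__` emits exactly the Python needed to rebuild the object, so an approved printout can be pasted back in as the starting state of a new test.

```python
class Board:
    """Minimal object whose repr is the Python needed to rebuild it."""

    def __init__(self, cells):
        self.cells = sorted(cells)

    def __eq__(self, other):
        return isinstance(other, Board) and self.cells == other.cells

    def __repr__(self):
        # Paste this string into a Python shell to recreate the object.
        return f"Board({self.cells!r})"

b = Board([(1, 2), (0, 0)])
# repr(b) is "Board([(0, 0), (1, 2)])", and eval(repr(b)) == b
```

The `__eq__` is needed so the reconstructed object compares equal to the original, which is what makes the captured state trustworthy as a test fixture.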

Llewelyn Falco 00:35:03 And JSON does this naturally, right? Because usually I’m using some other tool to generate the JSON anyways, and that same tool will take JSON and turn it into the object. So those are things I get for free, but sometimes there are more complicated things that I need. It doesn’t really matter. The point is my ability now to play with my software and say, here’s a scenario I can draw on a whiteboard. How do I test this, right? And then how do I actually get the code to do what I want it to do, and see it when I do? And often when I see it, I’m like, oh, uh, yeah, that’s not actually what I wanted, right? So when I see that HTML show up, I’ll be like, okay, yeah, that is what I was trying to do, but that’s not what I want. And then I can change it and I can see it again. Right? So one of the patterns that we saw with the tests is these four properties, right? Which I call specification, feedback, regression, and granularity. Specification is so important to programmers. It’s knowing what it is we’re trying to build. And it’s really hard to build software if you don’t have any kind of specification beyond "build me some great software."

Sam Taggart 00:36:11 Now does that specification come from the end user? Does that come from the business? Is it something the developers create or all of the above?

Llewelyn Falco 00:36:20 Hopefully all of the above. However, if it gets to the developer and it has not yet been created, like the buck stops there, right? So if you ask me to do something and I can’t draw you a scenario, I have two choices. I either figure out how to draw or I go back to you and we draw it together. Both of those are valid answers, right? Like there’s things where it’s like, okay, let me play, oh yeah, no this makes sense. Uh, and there’s other places where it’s like I can’t do it. Let me go back and get it. And there’s also, and I think there’s a third valid which is I did do it, but then I go back and check just to make sure that this thing I did is actually what you were saying, right? Because sometimes your version of red and my version of red are not the same.

Sam Taggart 00:37:02 Well I think that should always be part of the process, right?

Llewelyn Falco 00:37:06 I would recommend it. I highly recommend it, because again, shared pain, right? If it’s split, then we have problems. By the time it gets to the developer, I need to be able to draw a scenario for it. In fact, this is a long time ago, but we were creating a web endpoint, right? And it was returning some XML, and it was me, Lane, and unfortunately I forget the third person’s name. But we were in the office and they were talking, and we’re in this meeting, and la la la la, and they’re like, okay, I think we’re on the same page, we got it. And I was like, maybe we just draw a sample of what the XML would look like. And they’re like, ah, we don’t have to do that, we understand it. And I’m like, yeah, yeah, but maybe you can do it for me anyways. So they start drawing the XML, and as soon as they did it, they went from violent agreement to violent disagreement.

Llewelyn Falco 00:37:51 That is not what I meant, that’s a date. No, I need to get a list of things, not a single one. So when it becomes concrete, you surface this false agreement, right? And that’s the place to surface it. Because if they’re not in agreement, what chance do I have of satisfying them? Once we’re in agreement, then I can do that. So that’s all about specification, and that’s just us being on a whiteboard. I think of this as testing, but it’s not what people would consider normal unit testing or anything like that. It’s not even code at that point. It’s just creating the scenario, right? And if you give me requirements, those are a horrible way of translating intent. But if you give me a scenario, that’s a great way. Because people are built around stories, right? And also, requirements are fuzzy. I can satisfy requirements in multiple different ways, right?

Llewelyn Falco 00:38:43 You can be like, oh, build me a hamburger, right? And hamburgers basically all satisfy the requirements, but go around to a whole bunch of different restaurants and order a hamburger. They are different things, right? They’re all satisfying the hamburger requirement, but what is the hamburger you actually want? And so when we get that concrete scenario, now we’re like, okay, that’s what we want. Then I need to start building it, and from there we move from specification into feedback. And again, whether you do traditional unit testing or not, you are gonna want feedback, right? You are gonna want to know: this thing I’m building, does it do stuff? Maybe you’re doing that by opening a REPL. Maybe you’re actually opening the app and playing around on it. But nobody just ships their software without executing it, right? You did something. You opened it in a browser, you opened it on your phone, you did something to get some feedback that this thing worked.

Llewelyn Falco 00:39:40 And the more frequent that feedback is, the easier it is, the less costly your mistakes are gonna be, right? And also there’s a discovery that can occur. And if you’re interested in the feedback side of this, the person who I think is by far the best in the world is a man named Bret Victor. He had a wonderful talk that you might’ve seen, because it was just so insanely popular, called Inventing on Principle. But he also has a really nice talk called Stop Drawing Dead Fish. He gets upset when feedback is not instantaneous. So if it’s like 500 milliseconds later, he’s like, that’s not good enough. He wants it to be the moment you touch anything. And a lot of the stuff that we’ve seen in development environments has actually improved because of stuff that he cares about. So he wants this instantaneous feedback, but everybody cares about feedback regardless of the timing.

Llewelyn Falco 00:40:33 Once I have feedback, I can build this software and I can know that I got this thing to work. But then I have this issue of regression, which is, okay, it worked today, does it work tomorrow? And that’s where a lot of people think testing lives: regression, like, hey, I wanna know if I broke anything. And automated tests are really where regression comes in, although obviously manual tests are also about regression, right? And then there’s this last piece of granularity, which is: the system broke. Why? And just knowing that something is broken is not enough for me. The more I can find out why it’s broken, the easier it is for me to fix it. And so all four of these things are really important, and in the TDD cycle they all get addressed, right? But whether you use test first, or even unit tests, you are gonna be dealing with all of these things.

Llewelyn Falco 00:41:26 Maybe for granularity you’re using a debugger instead, or maybe you’re using logging to figure out what’s going on. Or maybe you’re using monitoring for regression instead of testing. You’re just like, hey, for some reason we didn’t make any sales yesterday. There’s a canary for supermarkets, which is bananas. So it’s like, if they haven’t sold a banana in half an hour, something is wrong at the store, right? Because bananas are just a thing that people buy, and they buy a lot of them frequently. And if for some reason you go a half hour with the store not selling a banana, something’s wrong. So we use similar things in software. We’re like, hey, is the server up? Can we do a ping? Can we do a health check? Are we not making any sales today? Or are we throwing a lot of exceptions?
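The banana-canary idea reduces to a tiny staleness check. A sketch (the function name and the half-hour threshold are invented; a real system would feed `last_sale_ts` from sales events and wire the result into alerting):

```python
import time

def banana_canary_fired(last_sale_ts, now=None, window_s=30 * 60):
    # Alert if nothing has sold within the window (default: half an hour).
    # `now` is injectable so the check itself is testable deterministically.
    now = time.time() if now is None else now
    return (now - last_sale_ts) > window_s
```

Note the same dependency-injection trick from earlier: `now` defaults to the real clock in production but can be pinned in a test.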

Llewelyn Falco 00:42:10 All of this goes to monitoring, and all of it is good stuff, right? But sometimes it’s the only way that you’re doing it. And then feedback is the same. Okay, maybe you’re not doing this with automated tests, but if you can write the automated tests very quickly, you can get all four of those things cheaply. And with approval tests it can be cheaper, because you can pull out that duplication, right? First of all, if we think about the Game of Life scenario, it’d be hundreds of asserts to validate what is just a simple animated GIF, and without the same level of insight. It’d be very easy for me to flip, oh, this should be at four five, but it actually should have been at five four, right? And I wouldn’t notice that in the asserts, but I’ll easily see it when it prints this graph out. The other thing is, when you do have a lot of spot checks, you do these acrobatics, which is like, okay, I really wanna check that this cell went to this cell. And so, what is a scenario where, if I just check this one piece, I’ll be able to tell?

Sam Taggart 00:43:16 You create that contrived scenario.

Llewelyn Falco 00:43:18 Scenario. Exactly right. And so with ApprovalTests, I don’t have to do that. My scenarios are much more what a business owner would think. Like, going back to the Game of Life.

Sam Taggart 00:43:27 They’re more like use cases, right? Because I’ve done lots of things where you do image processing and you narrow it down to, like, okay, what’s the smallest square image, like three pixels square or nine pixels square, where I can do this thing? But really what you want is: when I process the whole image, this thing happens, right?

Llewelyn Falco 00:43:44 Yes. And with an approval test, that is literally verify image, and then here’s my starting image, what happens when I run it through this image processing? And then you get the image back out and you look at it, and as a human you’re looking at it and you’re like, oh, I like that, let me approve it. That’s very easy to recognize; it’s very hard to define. Image processing is really, really powerful for this, because how would you do it with just an assert? Like, it’s almost impossible. Same with sound processing. We were testing automated voice, right? So here’s some text, it’s text to speech. So here’s some text, I want to verify the sound file that came out. That’s really hard to do with asserts, but it’s very easy to be like, it’s a one-line test: verify text-to-speech, here’s the text. It’s gonna create the sound file, and then it’ll open it up in VLC and let me actually hear the sound file, and I’ll be like, oh yeah, that sounds pretty good. And then I’ll approve it. Although, you know, going back to Reporters, specifically for text-to-speech, when it would change, I would run it through a Reporter that took a sound file, turned it into a graph, captured that as an image, and then opened it in an image diff so I could see how the actual wave file changed, right? Like, I wanna see how the wave changed, because yeah, it could be really hard to be like, I don’t understand, those sound very similar to me. What would actually change?

Sam Taggart 00:45:11 I could see for that, too, doing like a correlation or something, right? Like taking the data and actually doing the engineering, like the cross-correlation, and seeing how similar these two waveforms are, and then maybe just having a value. As long as they’re within a certain amount, it’s okay, it can shift a little bit. Because I imagine text to speech is probably not very deterministic.

Llewelyn Falco 00:45:31 No. So those, we’d make it deterministic.

Sam Taggart 00:45:34 Is it like a random seed or something?

Llewelyn Falco 00:45:36 Yeah, exactly. I want it to be deterministic, so I will actually set up my tests so that they are deterministic. But that part of, okay, I changed it, how did it change, am I okay with this change? That’s the part where I want the insight. And likewise, so I mentioned the wave files and stuff, but another thing I do really commonly is I’ll just print it as a CSV file, right? And so now I have a text file, it’s a string, right? Uh, it goes really nicely into my source and stuff. But when things go wrong, I’ll open it in Excel and actually start turning it into graphs and stuff, so I can see: is this actually what I want? Right? If I got a list of like 200 numbers, is it what I want? That can be a complicated question. So I’m a programmer, so I want to use tools to help me get insight as to, is this what I want?
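A CSV printer of the kind he describes is a few lines of stdlib Python (the function name is invented; the point is that the state becomes a string that diffs cleanly in source control and opens directly in Excel):

```python
import csv
import io

def to_csv(header, rows):
    # Render tabular state as a CSV string: readable in diffs, openable in Excel.
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# to_csv(["x", "y"], [[1, 2], [3, 4]]) renders as:
# x,y
# 1,2
# 3,4
```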

Sam Taggart 00:46:27 Yeah. So I think we’ve done a very good high level kind of explanation of testing and how approval testing fits. Let’s spend the last couple minutes and go into some more details. What languages and frameworks and stuff does approval testing support?

Llewelyn Falco 00:46:41 So the answer is a lot. We’ve been talking about testing, but another part of my life is pair programming and mob programming. And I sort of grew up in the Java and C# world, right? Like, those were sort of my languages. And so the first version of ApprovalTests I ever made was in Java, and then quickly the second version was in C#. But the thing is, because of, well, in the beginning, pair programming, people would come to me and be like, oh, this is really helpful. I would like it in Python, or I would like it in C++, or I would like it in Swift. And I’d be like, okay, let’s pair on that. I know ApprovalTests really, really well, and you know your language really well. And languages are two parts, right? There’s the language, and there’s what I would call the culture of the language.

Llewelyn Falco 00:47:31 This shows up in multiple ways. One way is Clare Macrae, who does a lot of the C++ approvals with me. Very often I’d be like, oh, I wanna do this thing, and then we’d write this code and we’d get the test to pass, and then she’d sort of shake her head and be like, oh, Llewelyn, that is not proper C++, right? Like, you have somehow got it to work, but this is shameful, right? And then she would show me how to do it in idiomatic C++. A similar thing would occur in Python. And also, like, I remember helping my friend Scott. He was a C# developer, and he’s like, I have a Java project and I’m just having a lot of problems with it. And so we open it up and I’m like, oh, it’s a Maven project.

Llewelyn Falco 00:48:11 And he is like, how do you know that? And I’m like, oh, it has a pom file. And he is like, what could I have Googled to figure that out? And I’m like, I got nothing. It’s super obvious because I’m in the Java culture, but how do you Google that? Right? So there’s a lot of things that you get just because you’re in the culture. So when I would pair with people, they would bring their knowledge of the language, and, more important than their language knowledge, their culture of the language, right? They’re the ones who’d be like, okay, this is what idiomatic looks like in Swift, or this is what idiomatic looks like in Python. And so I would pair with all these people, and as a side effect, ApprovalTests is in quite a lot of languages, written by me and whoever the person is I happened to pair with on that. Our current list, let me just pull this up, because I keep forgetting. So we have it in Java, we have it in C#, we have it in C++, we have it in PHP, we have it in Python, we have it in Swift, we have it in JavaScript and in TypeScript, we have it in Perl, we have it in Go, we have it in Lua, we have it in Ruby, and we have it in Objective-C.

Sam Taggart 00:49:16 How do you maintain feature parity across all of those? Or is it just kinda like, whatever feature somebody needed in that language, we implemented it, and the other ones we just haven’t got around to yet?

Llewelyn Falco 00:49:27 So there is definitely a bit of that, right? Like, they are not all equal. And again, a lot of this comes from who I’m pairing with. So some of the best documentation is in the C++, because Clare really cares about documentation, and we spend a lot of time writing documentation. And so I’ve worked with Clare for many years now, and there’s a little Clare on my shoulder when I’m in another language that still cares about documentation. It’s not as good as having Clare there, but documentation gets raised on all the projects, because I get to take a little bit of the people I work with with me. And so very often, you know, because I’ll have standing meetings set up across the week. So on Sundays I’ll be working in Python, and on Mondays I’ll be working in Java.

Llewelyn Falco 00:50:14 And then on Tuesdays I’m working in Swift. And what will happen is we’ll develop a feature in Python that’s nice and useful. And then on Monday I’ll be like, oh, hey Lars, let’s do this feature, and then we’ll do it. And Lars will show me an insight, and I’ll be like, oh yeah, that’s better. And then we’ll go to Swift and I’ll be like, hey John, let’s do this feature, and then we’ll do it, and John will show me another insight. And now the Swift version is actually the better version, right? Because it’s the third time I’ve done this feature. So then we come back on Sunday and I’m like, we need to fix what we wrote last time, ’cause I’ve learned all this stuff in the iteration, right? So some of the feature parity is just happening because I’m doing iterative development with different people, and I just did something, so I want it to move over.

Llewelyn Falco 00:50:57 And a lot of features actually do traverse that way. And it gives a lot more consistency than you would normally see in ports, right? Because there’s that shared part. Then the other part of it is ApprovalTests, unlike normal testing, remember I said, here’s how you verify the frames, or here’s how you verify a conversation. Because there’s that duplication that you can remove, right? Because here’s the scenario and I’m gonna show you the printout of that scenario, this duplication shows up that doesn’t show up in normal testing frameworks, right? You’ll see a little bit of this in Cucumber, where you’ll have a custom comparator, but it doesn’t show up much. And you’ll see some of it in RSpec, just because RSpec looks like functions, but it’s actually not; it’s lambdas.

Llewelyn Falco 00:51:44 So you can pull like a for loop above it and stuff. You’ll see a little bit of it, but you don’t see a lot of it. But in ApprovalTests, you see a lot of custom verify functions: give me this thing, I’m gonna run this standard process on it, and then I’m gonna verify the result. And so for Swift, a very standard thing is I have a screen, or a portion of a screen, and I want to verify the way it looks on an iPhone. That’s a really common scenario in Swift. And so here, give me a component and I’ll verify it. There’s no corollary for that in Python, right? So that isn’t gonna transfer over. So all the custom verify stuff is really by language; those don’t have parity. But the general architecture and structure, those do have parity. And I mentioned we’ve been doing documentation, and very specifically we’ve been using this thing from Daniele Procida called the four quadrants of documentation.

Llewelyn Falco 00:52:35 And it’s really, really helpful. And so the basic idea is that when you are writing documentation, there are four audiences that you are talking to, right? One is a tutorial. Tutorial is like, I’ve never used this thing before and I need to get it to hello world. Those things are ridiculously complicated to write. They’re very, very detailed. They are, I am gonna hold your hand at every step. And no matter what, when we are done with this, this thing is gonna work , right? Then there’s how-tos, how-tos is basically all of stack overflow. It’s like I have this problem, here’s how you solve it, right? So it’s problem, recipe for solution. It assumes you have a fair amount of understanding of the language. It’s not handholding you like a tutorial at all, right? But it’s very specific problem focused. Then there’s reference. Reference is like here’s all the stuff the API does, or here’s all the different ways you wanna see things with a Reporter.

Llewelyn Falco 00:53:31 You know, a lot of times they’re links, right? Or just information dumping. Really, really useful while you’re programming, to be like, hey, what are all the methods in this class? Or what are all the Reporters? What relates to that? But it’s like reading a dictionary to learn a language. They’re helpful if you want to know what a word means, but they’re not helpful in general. So reference is complete, and it’s links, but it’s not really trying to teach you anything. And then the last is explanations. And the whole reason I’m telling you all of this is to get to explanations. So explanations a lot of times will talk about the whys of stuff, maybe the history of stuff, the architecture of stuff, right? And what we found is those other three categories are language dependent, but this explanation category, that is a lot more cross-language.

Llewelyn Falco 00:54:24 It doesn’t matter. The architecture is the same whether you’re in Swift or in Python or in Lua. Like, it’s all the same architecture, right? And so as we started to write more of these explanation pages, we started to see a lot more consistency emerge among the different languages, because we’d be like, oh, wait a second, that doesn’t quite apply to JavaScript. Let’s go fix that. And that’s one of the things I found with writing documentation in general. Okay? So like, when I started out, I didn’t write tests. I didn’t even know what unit tests were, right? I tested my code, of course; like, I’d run it, I’d look at it. But I didn’t do any kind of automated test. And then when I got to automated tests, the thing that was amazing to me is: you are now using your code.

Llewelyn Falco 00:55:10 We talked in the very beginning about shared pain, right? So as I tried to test my code and it’s hard to test, I’m like, ouch, let me fix that so it’s now easy to test, right? And so now I’m like the first user of my API, and that’s good, right? But the thing about tests is it’s always an expert user. And expert users all look the same. They’re knowledgeable; like, they get it, right? When you’re writing documentation, you’re usually not writing it from the point of, how would an expert use this code? You’re writing it from, how would a beginner learn to use this code? And there isn’t just one beginner, there are multiple beginners, right? When I wrote tests, I had empathy for the people who are using my code. But when I wrote documentation, I had empathy for the people who are learning to use my code.

Llewelyn Falco 00:55:57 That’s a very different person. And what I found is my ego would come into play the moment I had to document a bad process. So I would be very happy with, like, a 20-step install, but the moment I had to document that, I’d be like, okay, let’s write a script that’s just install, right? Like, I don’t wanna have to document something that’s crappy. I’m willing to do something that’s crappy, but the moment that I have to document something bad, I will fix it. Because my ego comes into play, and it’s like, no, don’t say "the architecture looks like this, except in JavaScript, where it’s crappy." Like, go fix the JavaScript, so I can just say "the architecture looks like this," right? Or don’t say, oh, but in Swift; no, go fix that, so you don’t have to say that. And so documentation got my ego to start coming into play for good. It’s a very valuable thing.

Sam Taggart 00:56:48 I definitely experienced that recently, because I had to hand a project over to somebody else. And because it was just me, I hadn’t taken the time to document everything as well as I should have. And writing out some of the stuff, it’s like, oh man, you know what? Before I hand this to somebody else, I’d better go fix that thing that I’ve been living with all this time and been okay with.

Llewelyn Falco 00:57:07 And there’s this other thing that’s come into play with documentation that ties right in with tests, which is a tool that Simon Cropp made called MarkdownSnippets. It’s a really simple little command-line tool that allows you to put just a little token in some markdown and then run the tool, and it will expand it by grabbing the code out of your code base and filling it in. And what that means is my code samples are all now coming from my unit tests.

Sam Taggart 00:57:33 Ah,

Llewelyn Falco 00:57:34 And that means they stay up to date when I rename. So it means if you grab a code sample, it actually works. Yeah. And the way that MarkdownSnippets works, it also puts a link to the code. So sometimes you’re like, well, should I fully qualify the names or not? Like, how much information does the person need, and how much is just clutter? And I no longer have to make that decision, right? If they need more information, they can just click the link, it’ll take ’em to the exact file, and they’ll be like, here’s the whole situation. Yeah, right? But here’s just the piece that I think you need. And so MarkdownSnippets, or mdsnippets, allows me to tie together my tests and my documentation much closer, and I end up writing tests sometimes to fulfill my documentation, right? But my tests are making sure it still works. And that will sometimes go rather extreme, right?

Llewelyn Falco 00:58:20 Like, sometimes I’ll write tests that say, because I can do this with approval tests very easily, just reflectively look at the code base and give me all the calls that start with the word verify. And then I’ll get a list of, here are all the verifies that are in ApprovalTests and their signatures. And then I’ll go to my documentation and say, okay, here’s the list of all the calls. I’ll just take the output of that approval file and include it into the documentation. And then when I write a new verify function, like, okay, now we’re gonna verify a chat call, then that shows up in my documentation automatically. Because my tests detect it, my tests change, it shows up in Beyond Compare. I’m like, oh yeah, I did just write that function, let me move it over. I approve it, it gets committed, my CI kicks in and runs mdsnippets and says, oh, this part has changed, and automatically updates it all. I get all this stuff for free.
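The reflective scan he describes might look like this in Python. This is a sketch, not the actual ApprovalTests tooling: it walks a module's attributes, keeps every callable whose name starts with `verify`, and renders each with its signature, producing the kind of text you could approve and include in documentation.

```python
import inspect
import types

def verify_signatures(module):
    # Reflectively list every callable whose name starts with "verify",
    # with its signature, so docs can be generated straight from the code.
    lines = []
    for name, obj in sorted(vars(module).items()):
        if name.startswith("verify") and callable(obj):
            lines.append(f"{name}{inspect.signature(obj)}")
    return "\n".join(lines)
```

Approving the output of this function is what makes new verify functions "show up in my documentation automatically": adding one changes the received text, the diff tool flags it, and approving promotes it into the docs pipeline.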

Sam Taggart 00:59:11 So yeah, that’s really great. So we talked about documentation and new users. So how would new users get started using approval tests? Like, what’s the first step?

Llewelyn Falco 00:59:20 So there’s two scenarios for new users, because remember, I said all new users are different. But the two big categories are: I have an existing project that I wanna add ApprovalTests to, or I just wanna play with ApprovalTests. I wanna start with the user who’s like, well, this sounds crazy, let me try it out so I can verify that this was a dumb idea. The way I would do that is go to the website, click on the language, and it’ll take you to the GitHub page there. And all of the ApprovalTests projects have a starter project. So click the link to the starter project and just clone that project. It’s a very minimal project in that language with like one or two tests, and that will allow you to go from there to, okay, I am now using it, in like a minute or so, right?

Llewelyn Falco 01:00:05 So just clone the starter project, open it up, and play with it. If you wanna play with this, that’s the way to start. If you want to add it to an existing project, that’s gonna be language specific, but in general there’s a package manager, right? So Swift has a package, well, Swift doesn’t have a package manager per se, but you can easily add a thing that points to the GitHub repo, and the line is in there. Python has PyPI, .NET has NuGet, Java has Maven. C++ is an ecosystem that does not exactly like you, so there are two package managers, Conan and vcpkg, and it’s in both. If you’re using those, I would just add it; that’s the easiest way. But often what I find in C++ packages is that you’re just including the single header file.

Llewelyn Falco 01:00:52 So we have a download of just a single header file; you can add it to your project. Either way, you’ve now added it to your project. Then you just call approvals.verify and you pass it your thing, right? And most likely when you do this the first time, you’re gonna get some kind of garbage, right? Because, like I said, a lot of toStrings aren’t there. So maybe you wanna actually start with verify as JSON and pass your object there, and then you’ll get it as JSON. And in the beginning, that’s where most people start. So you can think of, I’ve got this array of things, it has 10 elements, it’s gonna be really annoying to write an assert for that. Let me verify it, right? And then it’ll print out your thing and you’ll be like, okay, yeah, that works. Make sure you have a diff tool; that’s gonna be really helpful.

Llewelyn Falco 01:01:36 ApprovalTests will automatically detect it on your system, and it’s fairly robust. So if you have a diff tool installed, it’ll probably find it. VS Code works as a diff tool, so you probably at least have that installed, as does stuff like the JetBrains suite: IntelliJ, that kind of stuff. So it might just pop up in that. So start there, and then you’re gonna move on to the printers. So you’re gonna start saying, the JSON isn’t really what I care about; let me show the state of this object better. And so then you’ll start writing your custom printers, and then you’ll start writing your custom verifies. And that’s sort of the sequence you’ll take. But you don’t have to add anything other than the approval testing; it works with your current test framework. You don’t have to stop using asserts. It plays very well with others.

Sam Taggart 01:02:23 Very cool. So the website for everybody to check out is And that’s the main place to get started. So thank you so much for giving us this great tour of testing and coding philosophy and ApprovalTests and I feel like we kind of covered a lot of ground. Alright, thank you very much. This is Sam Taggart for Software Engineering Radio. Have a nice day.

[End of Audio]
