
SE Radio 516: Brian Okken on Testing in Python with pytest

In this episode, Nikhil Krishna discusses the popular pytest Python testing tool with Brian Okken, author of Python Testing with pytest. They start by exploring why pytest is so popular in the Python community, including its focus on simplicity, readability, and developer ease-of-use; what makes pytest unique; the setup and teardown of tests using fixtures, parameterization, and the plugin ecosystem; mocking; why we should design for testing, and how to reduce the need for mocking; how to set up a project for testability; test-driven development, and designing your tests to support refactoring. Finally, the episode examines some complementary tools that can improve the Python testing experience.

This episode is sponsored by New Relic.



Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Nikhil Krishna 00:00:17 Hello everybody. In today’s podcast, I have the pleasure of introducing Brian Okken. Brian is the author of the Python Testing with pytest book, and pytest and Python testing will be the topic of today’s podcast. A little bit about Brian: he is a passionate Pythonista who likes to talk about Python and testing, and he’s also a podcast host in his own right. He has a podcast called “Test & Code” and he’s also the cohost of the “Python Bytes” podcast, which I personally listen to. It’s a very good podcast you should go check out if you ever get a chance. Welcome to the show, Brian. How are you today?

Brian Okken 00:00:59 I’m great. Thanks for that nice introduction.

Nikhil Krishna 00:01:02 Great. So, just to lean into the first thing, just for everybody: what is pytest, and why should I care about pytest as a testing framework?

Brian Okken 00:01:14 Well, okay, so first, you kind of answered the first part. It is a testing framework, and hopefully you care about testing your code. You know, sometimes we have software engineers that we have to convince that they should test their code. So let’s assume that you know that you should. Well, maybe we shouldn’t assume that. So, I like to be proud of the code I write, and I like to be able to change it. I like to get the first pass done and then be able to play with it, change it, make it something I’m proud of. And a testing framework allows me the freedom to do that, because I know that once I have all the code working according to my tests, then I can play with it. I can change it and refactor it, and it’ll still run. So that’s one of the main reasons why I like using a testing framework. And pytest, in particular, is really easy to start with because it’s just little test functions. You can start right away by writing test_something as a function and writing some code there that exercises your code under test. And that’s it, you’ve got a test. So you can get started really easily, but then you can extend it, and basically the most complicated test I can think of, you can do in pytest. So you can start easily and it grows with you.
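
(To make that concrete, here is a minimal sketch of the kind of test Brian describes. The add function is a hypothetical stand-in for real code under test, which you would normally import.)

```python
# test_example.py -- a minimal pytest test: no base class, no registration.
def add(x, y):
    # Hypothetical stand-in for the code under test.
    return x + y

def test_add():
    # Any function named test_* in a test_*.py file is collected and run by pytest.
    assert add(2, 3) == 5
```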

Nikhil Krishna 00:02:29 Awesome, so as I understand it, then pytest has a very simple setup, and it is convention-based. So you do test underscore in front of your file and it’ll automatically pick up that file as a test file, correct?

Brian Okken 00:02:41 Yeah. So the test underscore convention is for both the file names and the function names. You can change that; you can have different things. If you like to have the underscore test at the end of the file or at the end of the function name, you can change that. But most people don’t; they’re good with the convention.
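
(For reference, pytest’s discovery conventions can be changed in a config file. This is a sketch using pytest’s python_files and python_functions settings; the patterns shown are only an example.)

```ini
# pytest.ini -- override the default naming conventions for test discovery.
[pytest]
python_files = *_test.py test_*.py
python_functions = *_test test_*
```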

Nikhil Krishna 00:02:57 Right. So Python famously is a batteries-included kind of language, right? And we know that there is a testing framework built into Python. Could you maybe contrast pytest, and how easy it is, versus the regular unittest?

Brian Okken 00:03:14 Yeah. So unittest is actually an incredible piece of software also. Unittest is the batteries-included version, and partly it’s good to have a testing framework within the standard library so it can be used to test Python itself and the rest of the standard library. But it’s very different than working with pytest. There’s a notion of an xUnit style of test framework, and unittest is one of those. What that style is, is you have a base test class, and with that there’s a naming convention as well. You derive from that base class and then you implement test methods, and the test framework runs those methods that you fill in. Now, because it’s a base class and you’re working within a class system, that’s where your setup and teardown go: within the class.

Brian Okken 00:04:08 And then also the assert methods are part of that. One of the big differences is that generally people don’t use classes with pytest. You can, you can even use unittest as a base class if you want to, but you don’t have to base it on anything. You can put tests in classes, but it’s more of a container to hold your test code in. There are actually a lot of Python developers who are using Python every day but don’t create their own classes for anything else. So, one of the reasons why I love to have people, especially in that situation, use pytest is because that’s a hurdle I’ve heard about from a lot of people: if I use unittest, I have to go learn about object-oriented programming. You don’t, really.

Brian Okken 00:04:52 Using unittest doesn’t require you to know very much about object-oriented programming, but that’s kind of a barrier for some people. So with pytest, you don’t have to. That’s one of the big differences. The other big noticeable difference is the asserts: pytest just uses the built-in Python assert statement. And then under the hood, there are helper functions that tear it apart and make it so that you can see what failed. When a failure happens, let’s say I write assert a == b and that’s not true, I’d like to be able to see what a and b were, and pytest gives you that. Whereas if you try to do that, just that normal bare assert, with unittest, you’ll just get, you know, “False is not true,” which isn’t very helpful.

Brian Okken 00:05:37 So unittest got around that by adding a whole bunch of extra assert methods, like assertEqual, assertNotEqual; there’s a whole slew of them. Pytest avoids that by having some under-the-hood stuff going on. It actually rewrites your source code for you. We’re getting into the weeds here, but when Python runs, it creates bytecode, and in that bytecode process pytest can intercept the asserts that are in your tests (there are other ways to get other code in there, but mostly it’s the asserts in your tests), interrupt the bytecode translation, and call these other helper functions that allow it to make better assert output.
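
(A sketch contrasting the two assert styles. The fetch_answer function is hypothetical, and both tests are written to fail on purpose so the difference in failure output is visible: pytest’s rewritten assert reports both values, while unittest needs a dedicated assert method to do the same.)

```python
import unittest

def fetch_answer():
    # Hypothetical stand-in for code under test; returns the "wrong" value
    # so both tests below fail and show their failure output.
    return 41

def test_answer_pytest_style():
    # Plain assert: pytest's assertion rewriting reports "assert 41 == 42",
    # not just "False is not true".
    assert fetch_answer() == 42

class TestAnswer(unittest.TestCase):
    def test_answer_unittest_style(self):
        # unittest needs assertEqual to get a comparable failure message.
        self.assertEqual(fetch_answer(), 42)
```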

Nikhil Krishna 00:06:20 Awesome. Right. So, I mean, like you said, that’s a little bit lower level, but I think it kind of illustrates the philosophy of pytest, which is to optimize for developer happiness and developer ease of use. Right? It’s very focused on making the usability of testing very good for a developer.

Brian Okken 00:06:41 Yes. And then the readability of the test. That’s a big thing in the pytest philosophy: to make tests very readable. So we can have just normal asserts that look natural in your code, and then you’re hopefully getting extra stuff out of your test. The test is really focused on what part of the system you’re testing right now.

Nikhil Krishna 00:07:03 Right. So usually when you try to start testing any kind of non-trivial system (more than a simple Python script, but sometimes even simple Python scripts as well), you need to do some sort of testing setup and teardown. There might be a database connection to create, or something to mock, or something to do with an API. How does setup and teardown work with pytest? What are the concepts there?

Brian Okken 00:07:33 That’s another good comparison between pytest and xUnit-style frameworks, because an xUnit-style framework traditionally will have special setup and teardown methods. They’re actually called that: setUp and tearDown, or setUpClass and tearDownClass. And the difference between setUp and setUpClass is whether the framework calls those functions before every test, or just once before and after the class. So if I’ve got, say, three methods within a class, do I call it once for all three methods? So the xUnit style is really about having these hooks that you can put code in, for before and after your test is run. The difficulty often comes in when that’s not enough levels. The database example that you brought up is a very common one.

Brian Okken 00:08:22 I want to connect to a database, and setting it up, connecting to it, maybe filling it with a whole bunch of dummy data so that I can run some tests on it, that’s kind of a costly thing that I don’t really want to do for absolutely every test. So I can set that up once for all of my tests, and then any other test that needs to use it can grab it and maybe reset it to a known state, and that’s cheaper than creating the whole thing. I can maybe roll back transactions, or somehow reset it to a known state. Now, this two-level thing is possible within xUnit frameworks, but you have to take advantage of class- and method-level setup and teardown, and you kind of have to do the paperwork yourself. You can do that within pytest too.

Brian Okken 00:09:10 But the preferred way is to use fixtures, and fixtures are a named thing. So a test that needs the database, or needs a clean database, can just use a fixture named that, like clean_database or something. Now, fixtures can be in different scopes. We’ve got function, class, module, package, and session, so that you can have a fixture used by all of your test code. Even if the tests are in different files or different classes, they can share the same database connection, and that’s extremely powerful. It also makes it so that you can easily build these things up, because fixtures can depend on other fixtures. So I can have a string of these, and the test itself only knows about the last one it needs. It can have more than one, but if it just needs a clean database connection, it doesn’t have to care what all the prior stuff is.
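
(A self-contained sketch of named fixtures, scopes, and fixture dependency, using an in-memory sqlite3 database as a stand-in for the expensive resource Brian describes.)

```python
import sqlite3
import pytest

@pytest.fixture(scope="session")
def db_connection():
    # Expensive setup: happens once for the whole test session.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn
    conn.close()

@pytest.fixture()
def clean_db(db_connection):
    # Function-scoped fixture that depends on the session-scoped one:
    # cheaply reset to a known state before each test.
    db_connection.execute("DELETE FROM users")
    return db_connection

def test_add_user(clean_db):
    # The test only names the last fixture in the chain.
    clean_db.execute("INSERT INTO users VALUES ('brian')")
    count = clean_db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```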

Nikhil Krishna 00:10:07 So it’s almost like Lego bricks, right? You kind of build a hierarchy, and then one test takes a particular arrangement of fixtures for whatever it needs, and another test uses another one.

Brian Okken 00:10:19 Yeah. And the fixture mechanism is really what drew me to pytest. There’s a whole bunch of great reasons to use pytest, but the fixtures are what really drew me to it, because they do all the bookkeeping of keeping track of the setup and teardown for multiple levels within your test system. Plus, within xUnit it’s really hard to do something like a session scope, where for the entire test session you’ve got one thing. Unittest kind of restricts you there; that’s difficult, whereas it’s really easy in pytest. And for me, a database connection might be something a lot of people are familiar with, and I do that too with testing software that uses a database, but I also test hardware stuff: a connection to a hardware device and setting it into a known state.

Brian Okken 00:11:09 Maybe even setting up a waveform for me to test with. Those are all expensive procedures that I really don’t want to do for every test. They might take up to seconds to get set up, and then I want to run hundreds of tests against that without having to do those few-second setups in between. So when I learned about fixtures and how easy they are, it was a no-brainer: definitely use pytest. Now, the other thing is that setup and teardown in xUnit-style frameworks are two different functions. Really early on, when I started using pytest, they were two different functions there also, but they’re not anymore; that’s been the case in the more recent versions of pytest for the last several years at least. Instead, there’s a yield statement. You can stick a yield statement right in the middle of your fixture: anything before it is your setup, anything after is your teardown. It looks weird when you first use it, but it’s really convenient, because I can even use a context manager and have the yield be in the middle of that, or in the middle of nearly anything.

Nikhil Krishna 00:12:17 You could even measure your tests, for example.

Brian Okken 00:12:20 You can have something measure it. And things that you’re keeping track of in local variables within your fixture are still there during the teardown, so you don’t have to store them in a global variable or anything like that. It makes it really convenient.
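
(A sketch of the yield style: everything before the yield is setup, everything after is teardown, and local variables such as the start time are still in scope during teardown. tmp_path is pytest’s built-in temporary-directory fixture; the timing is only an illustration.)

```python
import time
import pytest

@pytest.fixture()
def timed_workfile(tmp_path):
    start = time.perf_counter()            # setup
    workfile = tmp_path / "scratch.txt"
    yield workfile                         # the test runs here
    elapsed = time.perf_counter() - start  # teardown: 'start' is still available
    print(f"test using {workfile.name} took {elapsed:.3f}s")

def test_writes_file(timed_workfile):
    timed_workfile.write_text("hello")
    assert timed_workfile.read_text() == "hello"
```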

Nikhil Krishna 00:12:35 Yeah. Speaking of data and setting up data. One of the interesting things I found about pytest is the whole parameterization aspect, right? So you can actually set it up so that you write one piece of code and pass in a data structure. And then that generates a whole set of tests that simulate different conditions. So perhaps you could kind of like go into some of how the magic of that happens.

Brian Okken 00:13:01 It’s pretty amazing, really. So we’re talking about parameterization, and there are several different kinds of parameterization within pytest, but let’s take normal function parameterization. Say I’ve got a test function where I set up a user and I make sure the user can log into a system. And that’s great, but what if the user is a different type? Maybe I’ve got an editor user type and an admin type, or different roles that I want to test. I can set up all the credential junk that I need for the different roles into a data structure and then pass an array of different roles to my test with the parameterization, and then the test runs once for each role. Without parameterization, I would have as many tests as I have roles.

Brian Okken 00:13:54 Whereas with parameterization I can just write one test. It’s not like falling off a log; you do have to work a little bit to make sure that your test is structured such that it can take a named thing, like a user role, know what the structure is, pull that stuff out, and set things up. Now, I just said setup: you can do all of this in the test. You can say, okay, for a particular role I want to go log in, and then I want to test whether or not certain accesses work, or something like that. But if that setup code is complicated, I can push that whole thing up into a fixture, and instead of parameterizing the test I can parameterize the fixture. Then the test doesn’t know that it’s being parameterized; it still looks the same, but when you run it, it’ll be run multiple times. Now, it really blows up big if you’ve got a parameterized test and a parameterized fixture that maybe depends on another fixture that’s also parameterized. You can get a huge number of test cases really fast doing this. So if one of your measures is how many test cases you write, this is a really great way to blow it up and beat the record of everybody else.
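
(A sketch of both kinds of parameterization Brian mentions. The roles and the can_log_in helper are hypothetical; the pytest.mark.parametrize marker and the params argument to pytest.fixture are the actual mechanisms.)

```python
import pytest

ROLES = ["admin", "editor", "viewer"]

def can_log_in(role):
    # Hypothetical stand-in for the real login logic under test.
    return role in {"admin", "editor", "viewer"}

@pytest.mark.parametrize("role", ROLES)
def test_login_per_role(role):
    # This single test function runs once per role.
    assert can_log_in(role)

@pytest.fixture(params=ROLES)
def user_role(request):
    # Parameterizing the fixture instead: the test below does not know
    # it is being run multiple times.
    return request.param

def test_login_via_fixture(user_role):
    assert can_log_in(user_role)
```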

Nikhil Krishna 00:15:05 Yeah, it’s also a sharp tool, right? If you’re not careful, you can end up with a really large number of tests, all taking a whole bunch of time in your test suite, just because you’ve made a mistake in one place.

Brian Okken 00:15:18 But it’s also a great way to be exhaustive. If you’ve got a really cheap, really fast test, then instead of trying to pick which test cases to run, if you’ve got a fairly small set, you can easily set up an exhaustive test suite that tests every combination. You know, if it ends up being a huge number, maybe it’s not beneficial, but especially when you’re developing your code, that might be an interesting thing to just try. Now, Hypothesis is a different tool that tries to guess good test cases and inputs for your test. You can use Hypothesis with pytest to try to guess good input, you know, input such that it’ll break things easily. And Hypothesis comes with a pre-built plugin for pytest, so that’s pretty neat.

Nikhil Krishna 00:16:08 Right. So just to dig in a little bit: Hypothesis is a different Python library, but it kind of plugs into pytest? Or is this going to be part of that plugin story that we have with pytest?

Brian Okken 00:16:20 It’s kind of both. Hypothesis is a different kind of tool that you can use on its own (you can also use it with unittest), but it comes with a pytest plugin built in as part of it.
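
(A sketch of Hypothesis driving a pytest test. The encode/decode round trip is just an illustrative property to check; the given decorator and the text strategy come from Hypothesis itself.)

```python
from hypothesis import given, strategies as st

def encode(s: str) -> bytes:
    # Hypothetical pair of functions under test.
    return s.encode("utf-8")

def decode(b: bytes) -> str:
    return b.decode("utf-8")

@given(st.text())
def test_roundtrip(s):
    # Hypothesis generates many strings, searching for one that breaks this.
    assert decode(encode(s)) == s
```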

Nikhil Krishna 00:16:32 Yeah, but that kind of leads into the other big thing about pytest, especially now that it’s become so popular, which is the extensive number of plugins and extensions you can get for pytest, right? The other day I was trying to test something that used Redis, and I found an entire plugin that basically just faked the entire Redis protocol for you. You could just plug that in, and it made it so I didn’t have to set up a Redis server anywhere; I could just do the whole thing on my local machine. So, what kind of magic does pytest do in terms of extensibility? What is the overall architecture of how the plugin system works?

Brian Okken 00:17:19 Well, there’s some under-the-hood stuff that I don’t really understand, but that’s okay; I know it extensively as a user would use it. So, there are two aspects of the plugin system that are really cool. One of them is that it makes it really easy for you as a user to make your own plugin and extend it. There’s a notion of a local plugin. For instance, we were talking about fixtures, like setting up a database and stuff. Let’s say I’ve got a common database that tons of different microservices that I have need to access. I can set up my fixtures for how to access it. Normally I can put those right in the test file, or if I’m sharing them across multiple files, pytest has a notion of a conftest.py.

Brian Okken 00:18:06 So it’s just a naming convention for a file that’s used by the rest of the test suite, but it’s kind of also a plugin. The conftest.py file is a local plugin, and it doesn’t feel like a plugin; I just have my fixtures in it. But I can package that as a plugin fairly easily, and then I can use that plugin. I can make that plugin its own Python package, and I can have different test suites in my community or at my job, and they can all use that plugin and use the same fixtures. So I can easily create shared code within my own team or my own organization. That’s amazingly helpful. Now, there are a whole bunch of hook functions we can use too. Pytest has a hook mechanism that allows you to hook into different parts of how it’s running.

Brian Okken 00:18:59 After it collects tests, for instance, I can look at the collection before it gets run and maybe modify it, sort it, reorder it, things like that. Same with the reporting; there are actually tons of different parts of how it works where there are hook functions you can hook into and look at. It’s not trivial for a lot of these to figure out how it’s doing this and how to use them, but it’s there, and a lot of people have found it useful to figure this out. So, as you said, there are a whole bunch of third-party plugins that have made use of this extensibility mechanism, and they allow you to do things like I was mentioning: during test collection, you might want to reorder the tests. Well, there’s a handful of plugins that reorder them for you; they randomize them and shuffle them around.

Brian Okken 00:19:48 And randomization is a pretty cool thing to do, because you really don’t want order dependencies within your tests, so occasionally shuffling them around to make sure that they don’t break when you reorder them is a good idea. Or, like you said, plugins provide these fixture mechanisms for mocking a database or mocking a connection to a server. So you can mock your requests connection, or you can record things: there are plugins to record and play back sessions. There’s all sorts of stuff you can do with the plugin system, and it’s really pretty easy to set up. That’s one of the reasons why, in the pytest book that I wrote, there’s a dedicated chapter on how to do this, because when you go through a simple example, it’s easy to see that it’s really not that hard to do. And especially within a local organization, I think it’s important for people to be able to share code even if they never publish it on PyPI; it’s just shared within their organization.
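
(A sketch of a conftest.py acting as a local plugin: one shared fixture plus one hook function. The base_url value is hypothetical, and shuffling inside pytest_collection_modifyitems is only an illustration; the dedicated randomization plugins Brian mentions do this more carefully.)

```python
# conftest.py -- shared fixtures and hooks for every test under this directory.
import random
import pytest

@pytest.fixture(scope="session")
def shared_config():
    # Any test in this directory tree can request this fixture by name.
    return {"base_url": "http://localhost:8000"}  # hypothetical value

def pytest_collection_modifyitems(config, items):
    # Hook: called after collection, before the tests run. Shuffling the
    # collected items helps flush out hidden order dependencies.
    random.shuffle(items)
```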

Nikhil Krishna 00:20:45 Yeah. I think that’s a great point. Just to go into another concept that you kind of talked about there a little bit, which is the idea of mocking. So, can you tell us what mocking is and why it is used? What is its function in testing?

Brian Okken 00:21:01 Well, mostly it’s to make fun of people.

Nikhil Krishna 00:21:07 Yeah. In addition to that?

Brian Okken 00:21:11 Well, so there’s a whole ecosystem around mocking and a whole bunch of words; it kind of gets confusing when you’re in other places. But within Python, we usually get our mocks from the unittest library. There’s a built-in mock mechanism that’s now part of the unittest library, so even if you’re using pytest, if you want to mock something, we get it from the unittest.mock library. But anyway, the idea is that for part of your system, you want to not use the real thing; you want to use a fake thing. And there are lots of reasons why you might want to do that. Like you said, a Redis server, or maybe I’ve got access to my customer database, or access to a third-party system like Stripe for charging credit cards and stuff like that.

Brian Okken 00:22:02 And when I’m writing my tests, I really don’t want to hit those things. Maybe I do; if it’s my Redis server and it’s local, maybe I do want to test that. But I can, you know, mock that out and avoid it. Especially if I don’t really care about that logic right now (the thing I’m focusing on maybe is the user interface experience or something else), I want to isolate part of the system away. So mocking can be used for that. And Python is a very dynamic language, so it’s fairly easy to say, after you’ve got a system loaded, hey, this one piece in here, don’t use that piece, use this new fake piece. Mocking is great at that. Now, the other reason to use it is, let’s say it’s not just that I don’t want to talk to my Stripe server or something; I also want to make sure that the code that’s hitting the Stripe system is doing it correctly. So mocking allows us to interrogate the calls, to say, okay, I’m going to use this fake Stripe system, but when this bit of code runs, after it runs, I want to make sure that the Stripe API calls were called at the right time with the right content.

Nikhil Krishna 00:23:14 The right data. So it allows you to look into the requests that you’re sending to the Stripe API and make sure that they’re correct.

Brian Okken 00:23:23 Yeah. Super powerful and handy. Yeah.
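
(A sketch of interrogating a mock, using the standard library’s unittest.mock. The charge_card function and the create_charge method are hypothetical; assert_called_once_with is the real mock API for checking that a call happened with the right content.)

```python
from unittest import mock

def charge_card(api, amount_cents, currency="usd"):
    # Hypothetical code under test that talks to a payment API.
    return api.create_charge(amount=amount_cents, currency=currency)

def test_charge_card_calls_payment_api():
    fake_api = mock.Mock()
    charge_card(fake_api, 1999)
    # Verify the call happened exactly once, with the right arguments.
    fake_api.create_charge.assert_called_once_with(amount=1999, currency="usd")
```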

Nikhil Krishna 00:23:27 So, this is just a curiosity: you said you test hardware. I’m surprised; do you not use mocks for hardware?

Brian Okken 00:23:34 Well, that would defeat the point, because I’m trying to test the hardware.

Nikhil Krishna 00:23:38 Ah I’ve often heard that, you know, a lot of hardware testing uses simulation software. Simulation of the actual hardware, especially when it’s pre-production stuff.

Brian Okken 00:23:49 Maybe, but I usually want to make sure that the entire thing is working, so I don’t mock very much. Mocking is often used for the parts of the system that you don’t want to actually hit. I do want to say, I don’t really like using mocks, and I think there’s an architecture problem if you have to use them. I would say that the things we don’t want to hit during testing should be part of the architecture, known at creation time. We say, hey, we’ve got a Stripe system; we know we’re going to want to test this system, and we don’t want to hit Stripe all the time, or we don’t want to hit email all the time. So design the system for that. Coming from hardware, there’s a notion of designing for test, or designing for testability, and software can do this too: to know, hey, there are parts of our system that we’re probably not going to want to hit during testing, so how do we verify that the rest of the system is working correctly? One way, for maybe an email system or something, would be to design into it a switch that says, hey, instead of actually sending the email, just log it to an internal file.

Nikhil Krishna 00:24:59 Into an internal file or something. Okay.

Brian Okken 00:25:01 Yeah. And then the test can read and interrogate that, and check the contents to make sure that the sender was correct or whatever, if it wants to. And the same with the Stripe server or something like that: you can have a stub one in place. It doesn’t necessarily have to be during debug only; it could just be part of your system to switch it out. The other aspect is, maybe that’s dangerous and we really do want to have mocks. But then let’s make sure that instead of talking to the Stripe system directly, I have a part of my architecture that’s the payment gateway, and it’s just one file or one module that is the only thing that ever talks to Stripe. And then it’s a known API that I have control over.

Brian Okken 00:25:44 So that thing I can maybe test against a Stripe test database and make sure that that one little tiny API to this payment gateway is working correctly. That actually hits something, but I don’t really change that API at all, ever. And then the rest of the system, instead of mocking or stubbing Stripe, I can mock my payment gateway with a fake one. That’s safer because I know it’s never going to change. Now, there’s a built-in part of the mocking library that some people forget about, which is the ability to autospec. By default, mocks will accept anything you pass to them, but if you autospec them, it will force the mock to only accept the API as it is. So if the API ever changes, it won’t accept things that don’t match the API, and that’s good.

Nikhil Krishna 00:26:40 Right. So I think it’s called spec, and you can specify the attributes; you can put in some values for what the API looks like.
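
(A sketch of autospeccing. PaymentGateway is a hypothetical in-house wrapper of the kind Brian describes; mock.create_autospec constrains the mock to the real signature, so a call that doesn’t match the API is rejected with a TypeError.)

```python
from unittest import mock
import pytest

class PaymentGateway:
    # Hypothetical module that is the only thing that ever talks to Stripe.
    def charge(self, amount_cents: int, currency: str = "usd") -> str:
        raise NotImplementedError

def test_autospec_rejects_calls_that_do_not_match_the_api():
    gateway = mock.create_autospec(PaymentGateway, instance=True)
    gateway.charge(1999, currency="usd")   # matches the real signature: accepted
    with pytest.raises(TypeError):
        gateway.charge(1999, tip=100)      # not part of the API: rejected
```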

Nikhil Krishna 00:27:24 So just to dig into the architectural aspect that you mentioned, I think that’s a great notion, the idea of designing for testing, right? So say I am a developer who’s only written Python scripts, right? One-off scripts for automating things and all that. And then suddenly I get hired into a new startup, and they say, hey Nikhil, we’re going to build this eCommerce website and we’re going to do it in Python, and I want you to build this whole thing up, right? So I’m suddenly staring at this blank canvas with a folder in it, and I need to build a project structure. Do you have any tips or thoughts about how to go about designing a project, or, if you have an existing project, how do you actually build it, or re-architect it, in a way that makes it test friendly, especially for pytest testing?

Brian Okken 00:28:20 Yeah. There are a lot of tips, but first off I’ve got to say, congratulations on your interview skills for landing this job that you’re clearly not qualified for. So kudos, but we can get you there. So, one of the first things is going from a normal script. When I say script, it could really be anything, but some of the beginning Python scripts that I wrote had no functions at all in there. There was no dunder main or anything; there was just code that ran when you ran it. Now, the first thing is: don’t do that. Even if you take the entire contents of that file that you’re used to and stick it in a main function, and then have a dunder main, a dunder, there’s a thing called…

Nikhil Krishna 00:29:02 Double underscore. Yeah.

Brian Okken 00:29:05 Yeah. If double underscore name equals double underscore main, as a string. It’s a thing Python does to tell your script that this code is running because somebody said “python your-script-name,” versus importing it. That little switch makes it so that you can run it as a script, but you can also import it. And now that it’s importable, I can write a test that imports my module, runs the main function, and hopefully checks the output of it. Then, having a 7,000-line file all stuck in one main function is probably a bad idea, so breaking your code into different functions and different modules is a good thing, because then I can test individual pieces. It’s a lot easier to test pieces than the whole thing. Well, actually it’s not always, but a chunk of logic is easier to test in a small function.
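
(A sketch of the switch Brian describes and a test that relies on it. The module and function names are made up; capsys is pytest’s built-in fixture for capturing printed output.)

```python
# main_example.py
def main():
    print("doing the real work")
    return 0

if __name__ == "__main__":   # only true when run as: python main_example.py
    raise SystemExit(main())
```

```python
# test_main_example.py -- because the module is importable, a test can drive it.
from main_example import main

def test_main_prints_and_succeeds(capsys):
    assert main() == 0
    assert "real work" in capsys.readouterr().out
```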

Brian Okken 00:30:01 One of the things I get a lot of questions about is, how do I test this thing? Whether it’s, you know, this server or whatever. The first question I have to ask somebody is: how do you know it’s working? And if you can’t answer that, if you don’t know what the expected behavior is, what the output is supposed to be, and what side effects happen, there’s no way you can test it. Because that’s essentially all testing is: checking to make sure that the behavior is as expected and the side effects are as expected.

Nikhil Krishna 00:30:32 That’s interesting. So that kind of puts me in mind of something else. What is your opinion about test-driven development? Because I would imagine that something where you have to write the test first and then write the code that satisfies the test would mean that you have to think about side effects and how to tell if something is working up front, right?

Brian Okken 00:30:55 Yeah, definitely.

Nikhil Krishna 00:30:57 So you’re a fan of test-driven development.

Brian Okken 00:31:00 I’m aware of test-driven development.

Brian Okken 00:31:04 So I am a fan of test-driven development, but the thing that I call test-driven development is different than what a lot of people use. There are really two flavors; oh well, there are lots of flavors. But there’s an original notion of test-driven development, which is using tests to help you develop your system over a course of time. Then there was this other thing that is focused on testing little tiny pieces, the mock-driven test-driven development, which is something that developed later, and I never got on board with that. But developing both tests and code at the same time, especially as you’re developing production code, I’m definitely a fan of that. I’m not a stickler for the test having to be first, though; there are tons of times where I’m developing a feature where I’m just playing with it.

Brian Okken 00:31:56 Then I don’t necessarily write the test first. I also write a lot of tests that I throw away; I use testing for just playing. One of the things that people will often do when they’re developing code is, say you’re developing a bunch of functions within a module, you want to call those functions to see what they do. One of the easiest ways to do that is to write a test that calls the function to see what it does. And with modern editors, you can just select that test and say run. Without a test, you’d have to write a specific file just to call that one function, whereas with a test you can have a whole slew of little helper tests just to try things out.

Nikhil Krishna 00:32:39 Yeah, I hear you on that. I’m reminded of when I was getting into the business as a junior developer on a .NET project; I had precisely this problem, right? I was given a task to do, and I had a set of functions written, and I was like, okay, now how do I actually run this? And the way I did it was to basically write a main and then debug in Visual Studio, right? And then one of my seniors came around, like, hey, why don’t you try it in the context of the test framework? You don’t have to throw away the main function at the end of the day; you can actually use it as a test. And that was great advice.

Brian Okken 00:33:19 That’s how a lot of people start this whole “if name equals main” thing in their module: some people just stick calls to their code in there, and even asserts or whatever. But it’s just not maintainable over time to keep those running. So, especially for exploratory stuff like that, don’t be afraid to throw those away, or keep them if they’re helpful; but sometimes it was just there for me to learn what the problem space looks like. I want to come back a little bit to something you alluded to: if you’re only used to writing small systems and you want to create a bigger system. That’s a computer-science concept, that one of the big tricks we have as programmers is taking a big problem, breaking it into smaller pieces, and then focusing on those pieces. Now, that’s one of the miracles of testing: when I’ve broken things into smaller pieces, I can write the tests around those pieces to say, I think I want this piece to do this. I’ll write some tests to make sure that it does that, and then I can forget about it and go focus my attention on the other pieces.

Nikhil Krishna 00:34:31 Yeah. The reason why I brought up that particular question was that oftentimes I have seen, in my experience as well, people go about it without tests, or without considering tests, and build huge systems that are very connected and coupled, right? And I’ve always found that if they had started out with tests, like you said, small pieces that you write a test for, and then you write another test, it almost evolves into modular code somehow, right? I think that’s one of the side effects of having to think, okay, when I’m writing the code, how do I actually make it so that it is isolatable and testable: you naturally tend towards a modular design rather than, you know, building large systems where everything is all connected together.

Brian Okken 00:35:21 I’ve heard that claim also. I haven’t seen it borne out as a rule; you know, I’ve seen working code that is a mess, and I’ve seen messy code that works, and vice versa. But I think, hopefully, if you get used to breaking problems down, you’re going to naturally modularize things, and it also has the advantage of being able to write tests around them. Also, one of the benefits of the tests is that they allow you to rewrite stuff. Once you’ve figured out a problem, you can look at it and go, gosh, this code is a mess. I mean, I figured it out, but it’s a mess, and I can go and rewrite it to where I’m proud of it. And then my tests verify that I didn’t break anything.

Nikhil Krishna 00:36:00 Yeah, absolutely. That’s the classic red-green-refactor cycle, right? The refactor part comes because you’ve already written the test and you have a framework in which you can change the structure of the code with confidence. Which brings another point up. Obviously the ideal situation is that you write code, your test throws an error, and the error is because your code failed, or you’ve made a mistake, or you have to correct something. But there’s also the other situation, right? When you have to write a new feature, or you have to change the code for whatever business logic, and your test is now wrong, right? And there’s always a balance there. I’ve also seen situations where people basically say, okay, we need to have a lot of code coverage, we need a hundred percent code coverage. And I’ve also seen situations where that actually leads to a thing where you cannot change the code, because as soon as you change one place in the code, a thousand tests are broken and you have to go and fix all of that, right? So, are there any design practices or design recommendations on how to design a test suite so that it is not so brittle, so it doesn’t break everywhere?

Brian Okken 00:37:14 Well, there are a few things about that. So yes, there are ways; the primary way is to focus on behavior, testing behavior instead of implementation. I partly write the tests so that I can change the code, so that I can rewrite chunks to make it maybe something I’m proud of, or just because it’s fun; it’s sometimes fun to rewrite chunks. If the code is changing because the behavior has changed, then hopefully tests will fail, because they’re testing for the old behavior, and yeah, we want that to happen. The other side is if, instead of testing behavior, I’m really testing implementation. That’s also one of the dangers of mocks: utilizing mocks a lot in your system might cement you into one way of doing something. So be careful around that. Things like a payment gateway, where I know I’m going to want to mock the payment gateway, are different.

Brian Okken 00:38:09 That’s a known thing; you made a decision to do that. But I wouldn’t mock all over the place just so that I can isolate a function from its dependencies, if the dependencies are part of my system too, because I want to be able to change the implementation and do something else. One of the problems with brittle tests is often that they’re testing implementation and not behavior, and so if you change the implementation, your test breaks. We don’t want that. The other aspect of it is user interface components. Testing around UI components is a design thing, and that’s difficult, so I generally don’t write very many tests around a user interface; I like to test against an API instead. Those are often less brittle. But if you’ve got workflow stuff, like lots of ways you could use this system, and you’ve got a lot of different workflows tested at a high level for the whole system, with the database and everything, and you think those are brittle: that’s how your customers use it. So if those tests break when you just refactor something, your customers are going to break also, and there’s a problem in your system.

Nikhil Krishna 00:39:18 Yeah, I hear you. So basically, like you said, it depends on what kind of tests are breaking: whether it is a group of implementation-focused tests, like the UI or something like that, versus, you know, you’re testing multiple different ways to use your API, and you change your API, and then all of them break because, well, that was a big change to a contract that you have for all your interfaces. So that’s a great point. But okay, let’s take a slightly different track. Now I have a failing test, or I’m running a large test suite and I get a failing test, right? And pytest basically says, okay, you know, there is a big red dot over there and it says it’s failing, and this is not equal to that. Is there a way to get more information about what is failing? Can I focus on a particular test, or is there a particular way to debug into it and figure out what happened?

Brian Okken 00:40:17 Yeah, so hopefully it fails again. Have you heard the joke about the software engineer in the car? There are three engineers in a car going down a hill, and the brakes give out and they can’t stop. They finally stop the car and they’re worried about it. The hardware engineer says, well, clearly it’s a brake problem; we should investigate the brake system. The electrical engineer says, you know, I think the mechanism that indicates we’re braking might be broken, so we should check the electrical system. And the software engineer says, before we do anything, we should push it up to the top of the hill and see if it does it a second time. So in software, we try to reproduce the problems. And one of the cool features that I like around pytest is the last-failed system.

Brian Okken 00:40:58 Pytest is always keeping track of which tests passed or failed, and especially the failures; it’s going to have a list of those. That’s already built into the system. And we can use a flag, --lf, or --last-failed, and there’s a bunch of other ones around that, like failed-first and stuff like that. So I can say, just rerun the last failed ones. One of the benefits of that is that when I rerun the last failures, I can have more control over it. I can give it a -x to stop at the first failure: don’t run more than one, find the first failure and just stop there. And then I can make it more verbose, and that gives me more traceback.

Brian Okken 00:41:44 It fills out the traceback more. I also have control over the traceback: I can say I want a short traceback, or a long one, or the full one. And then the one I really love is show locals, so that during the traceback it also prints out all the local variables and what their contents are. It’s really handy to be able to rerun that. And then for big suites, there’s a stepwise mode that came in a couple of versions ago that I really love using, where instead of just doing the last failed, I can step through a suite. I run the suite until it hits a failure, then it stops. Then I can maybe change some code, or change the test, or add some more debugging, and I run it again with the same stepwise flag. Instead of starting at the top, it starts at that last failure and just reruns that. If it fails, it just stops again, but if it passes, it continues to the next failure. It just keeps on stepping through all the failures. It’s really handy.
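
(For reference, the flags mentioned here are all real pytest command-line options; a quick sketch of how they combine:)

```sh
pytest --lf                                # rerun only the tests that failed last time
pytest --lf -x -v --tb=long --showlocals   # stop at the first failure, verbose output,
                                           # long traceback, print local variables
pytest --sw                                # stepwise: stop at a failure, resume from it next run
```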

Nikhil Krishna 00:42:44 Very cool. Yeah, very cool. That’s actually something I did not know; I’m going to try it out next. So, obviously pytest is a tool that we can run on the CLI, and it’s a standard scripting tool. Are there any special considerations that we need to think about when we add it into our CI/CD pipeline? Does it have any dependencies that need to work with whatever environment we run it in? Or can we just use it as part of the Python requirements file any time?

Brian Okken 00:43:14 I usually separate them out, to have not the requirements for the system but separate test requirements. So, for an application, if you’re using requirements files, you could have a separate requirements file; it’s usually called dev, though, because we want our developers to have it also, and the CI system can load that. Or, if it’s a packaged project, you can have extra dependencies, so I can say, you know, pip install foo with brackets in it, test or something, and then it brings in the test requirements. Those are ways you can do that. But other people just have it as part of their CI pipeline to say, hey, it’s going to have to bring in pytest, so pull that in.
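
(A sketch of the two approaches Brian mentions; the file and package names are only examples.)

```sh
# Approach 1: a separate dev/test requirements file that developers and CI both install.
pip install -r requirements.txt -r requirements-dev.txt   # requirements-dev.txt lists pytest, plugins, etc.

# Approach 2: a packaged project that declares a "test" extra in its metadata.
pip install "foo[test]"   # 'foo' is a placeholder package name
```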

Nikhil Krishna 00:43:57 Right. I remember you mentioned in your book there is this tool called Tox that you can use for testing against various versions of Python and managing environments and stuff. How does that fit into the whole pytest testing story?

Brian Okken 00:44:15 I like to use them together. I mean, there are other options you can use, but in continuous integration you’ve got the server running your tests, and you can do something similar locally with Tox. There are other tools as well, but I particularly like Tox, and traditionally it’s about testing multiple versions of Python. So let’s say I’m developing a library and I want to test it against several versions of Python. I have to have those versions loaded on my computer for this to work, but I can set up Tox such that it will create virtual environments with multiple versions of Python and then, not just load my software, but build it, install it into those environments, and run my tests within each environment. I can also make it just do the build once and then test against that.

Brian Okken 00:45:08 And actually, I probably misspoke; I think it just does the build once. But it’s like a CI pipeline on your desktop, so you can test a whole bunch of stuff out. That’s really handy. You don’t have to do it against multiple versions of Python, though; it could be something else. Let’s say I am writing a Django plugin and I want to test it against multiple versions of Django; I can set up Tox to do that, to test on multiple versions. And then, within pytest, it’s kind of fun; I didn’t learn this until I was developing the second edition of the book: there’s a cool way that you can use Tox and pytest together to debug just a single Tox environment. So let’s say, you know, Python 3.10 is breaking for your package. You can rerun with all those extra flags, like show locals and all that stuff; you can pass those in and target just that one environment, which is pretty handy. Or you can use pdb and step through it right there.
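
(A sketch of a tox.ini along the lines Brian describes. The Python versions listed are only an example and must be installed locally; {posargs} is what lets you pass extra pytest flags through Tox to a single environment.)

```ini
# tox.ini
[tox]
envlist = py39, py310, py311

[testenv]
deps = pytest
commands = pytest {posargs}
```

For example, to debug only the Python 3.10 environment with the flags discussed earlier, you could run something like `tox -e py310 -- -x --showlocals`.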

Nikhil Krishna 00:46:11 Right. Great. So I think we’re reaching the end of our discussion here. Is there anything we missed that you would particularly like to talk about?

Brian Okken 00:46:25 Yeah, one of the things I really want to mention: we brought up test-driven development for a little while. One of the things that a lot of the TDD discussions talk about is that testing your code is worth it. I believe that. They also say that when you start doing it, developing with tests is slower, but it’s worth it because you’ll have less maintenance in the future. And I just don’t buy it. I think developing with tests is faster than developing without tests. I don’t think it’s slower.

Nikhil Krishna 00:46:56 That’s an interesting hypothesis. Is that based on your experience? I mean, why do you say that it’s faster?

Brian Okken 00:47:04 Because I’m doing it anyway. Like you said, if I’m writing a main function to call my functions, writing tests to call my functions is easier, and it’s not that big of a leap to go, okay, what tests did I write just to develop the thing? I can probably spend like 45 minutes and clean these up, and they’d be a decent test suite for my system to begin with, and then I can check them in. So I’m using tests to help run my code while I’m developing it, and I’m using tests to help make sure it doesn’t break in the future. I don’t think it takes long for you to learn testing and be comfortable enough with it that you’re actually developing faster than you would without tests.

Nikhil Krishna 00:47:45 Yeah, I tend to agree, especially with a framework like pytest, which is so flexible, and like you said, it’s so easy to do it that you almost feel tempted: you know, wow, it’s such a beautiful way to do it that you want to write some tests. So yeah, that’s a great point. Just in terms of completeness, how can our audience follow or connect with you? I believe you’re already a podcast host and you have a couple of podcasts, and we’ll add links to those podcasts here, but maybe you want to talk a little bit about other ways, and maybe a little bit about the podcasts as well?

Brian Okken 00:48:20 Sure. The primary way: I hang out on Twitter a lot, so I’m @brianokken on Twitter. And then I’ve got Test & Code, which I have to enunciate because some people think I’m saying Testing Code; it’s testandcode.com. And then also the Python Bytes podcast, which is at pythonbytes.fm. Those are the two podcasts, yeah, and Twitter. One of the fun things about the Test & Code community is we have a Slack channel too, so there’s a Slack channel you can sign up for, and there are hundreds of people hanging out, asking and answering questions around testing, especially around pytest, but around other Python topics too. Like, if you’ve got some weird database that you’re connecting to and you don’t know how to test it, there’s probably somebody in there that’s using it also. It’s pretty great. And I’ve started blogging again. I started this whole thing by blogging, and I’m doing it again. It’s at pythontest.com.

Nikhil Krishna 00:49:15 Awesome. Thank you so much, Brian. It was a great discussion. I’m sure our audience will be looking forward to reading more about it in your book, which is Python Testing with pytest, from the Pragmatic Programmers press, which is again one of my favorite publishers as well. So thank you again, Brian.

Brian Okken 00:49:36 Oh, thank you. I want to add one more note. The second edition was also written such that it feels like a course, and that’s on purpose, because I do want to turn it into a video course. So that’s one of the things I’ll be working on this year: turning it into a video course.

Nikhil Krishna 00:49:50 Awesome. Okay. Good luck with that, looking forward to it.

Brian Okken 00:49:53 Thanks and thanks for having me on the show. This was fun.

Nikhil Krishna 00:49:56 Okay. [End of Audio]

SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
