
SE Radio 521: Phillip Mayhew on Test Automation in Gaming

Phillip Mayhew, co-founder and the CTO at GameDriver, discusses test automation for games and game-like applications. Host Philip Winston spoke with Mayhew about the increasing role of test automation in modern game development, the impact on the QA role, how to run tests in CI/CD pipelines, and how to create automated tests using GameDriver.

This episode is sponsored by CACI.


Show Notes

Related Links

Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Philip Winston 00:00:17 Welcome to Software Engineering Radio. My name is Philip Winston and my guest today is Phillip Mayhew. Phil is a co-founder and the CTO at GameDriver, a company that brings test automation to the video game industry. Before GameDriver, he ran his own consultancy for over a decade, which specialized in performance and functional test automation for Fortune 500 companies. Phil, did I leave anything out of your bio that you wanted to add?

Phillip Mayhew 00:00:45 No, that pretty much covers everything. Just the early background, I have a degree in Computer Science from North Carolina State University.

Philip Winston 00:00:53 Great. We are going to use the term “game” throughout this episode, but I’d like to frame this in the beginning and understand that we might be talking about applications that are wider than just games. I’ve seen your company use the term “immersive experience.” How would you describe the set of applications we’re going to be talking about today? What characteristics do they have?

Phillip Mayhew 00:01:15 So, of course our name, GameDriver, was derived from gaming automation, but we came up with that name four years ago. And now the landscape has started to change with augmented reality and virtual reality, where you've got people learning how to change a tire from an immersive experience. So we now have this broader application landscape that needs automated testing. It's not your classic "we're just testing games" anymore. We're testing all kinds of industrial uses of the applications that are being put out these days.

Philip Winston 00:01:54 So there are challenges to writing automated tests for any application. Maybe we can talk a little bit about those, since your background prior to GameDriver was in a wider field of applications, and then we can zoom in to the specific challenges for games. But to start: why is test automation hard?

Phillip Mayhew 00:02:13 Sure. So we can break this into two separate categories: we have an educational dilemma, and then we have technological problems that have to be solved as well. I like to compare where game testing automation is today to where enterprise application testing, automated testing, was 12 or 15 years ago. You have a lot of people who have been doing manual testing who are not familiar with how to do automated testing. So we need to be able to empower those people to write automated test cases and implement those test cases. What are some of the ways we can do that? We can do that through training, and from a technological standpoint, we can empower them with tools. In this case, we're using GameDriver to empower our game testers to write automated test cases. That covers the educational side of things. From a technological standpoint, when you're testing an enterprise application for functional automation, the buttons are generally in the same location.

Phillip Mayhew 00:03:25 Text is very static. We're looking at a static 2D environment where feedback is very specific about what a user does. When you look at a 3D game, it's a completely different landscape, and precision of points becomes an issue. If you move a control pad, a D-pad, a little bit, we're talking floating-point values now. We're no longer talking pixel exactness. So now we're having to deal with those kinds of issues. And while it seems complex to write an automated test case for a game, we're not trying to walk a character from start to finish through a game: how do we defeat the dragon to get the key to save the princess? We don't have to have a complex scenario to be successful.

Phillip Mayhew 00:04:21 There's an old enterprise application automation saying, something like: with 20% of automation, you can complete 80% of your testing. So the old 20/80. I don't know that we can successfully draw that kind of comparison in game testing now, but you've got to look at defects that are raised by testers or developers and say, "Hey, this is a recurring issue," or "This is something we could automate easily and cut down some of our manual testing time." We're not trying to wipe out manual testing; that's not the goal. The goal is to save time, which saves money. And we can do that by using tools, in this instance GameDriver, to simplify the process of creating repeatable test cases that generate repeatable results. With that, we can add confidence to the regression testing or a minimal acceptance test suite, so that we're moving forward in our development life cycle and we're not introducing a lot of issues, or finding issues that we didn't know existed. Sure, there are going to be plenty of edge cases that are difficult to automate, but there is still a large portion of testing that can be automated, and it can be done very simply.

Philip Winston 00:05:49 So one of the things you mentioned was the contrast between enterprise software and games. One aspect of that that I was reading about is that games used to be sort of one and done: you'd ship the game, way back when, on a cartridge or a DVD, and there weren't any updates after that point. Then we went through a period where games were still on standard media, but maybe they were updated and bugs were fixed. But I think now we're reaching a point where many games are developed and maintained indefinitely. Have you seen that trend, and how does that correlate with or impact automated testing?

Phillip Mayhew 00:06:28 I think that's a bit of an ideological question or dilemma that we have here. If you look back, and I'm 40 years old now, I remember having Windows 3.1 and Windows 95. Once you installed that, that was it. But with the mainstream introduction of always-on internet, you're constantly getting Windows updates, and it's not just games that are doing this; it's software across the board, along with your mobile devices. It's so easy for developers to just push updates. So the plus side is that if we have an issue, sure, we can easily fix it. But I wonder whether it's also limiting the testing that happens, because now that we can say, "Oh, we can push an update at any point," there's no need to necessarily spend as much time testing a product. So it's just getting pushed out the door a lot sooner, with hopes that patches can be pushed downstream at a later point. We've recently seen in the news where that has had detrimental business repercussions, where people have said, "We'll just push it out the door and do updates later."

Philip Winston 00:07:43 Let me mention a previous episode here: Episode 339 with Jafar Soltani on Continuous Delivery for Multiplayer Games. One of the comments he had was that they were heavily relying on an army of manual testers to test the game. But relative to this idea of a long-lived application, he mentioned that once you got rid of those manual testers after the initial release, or they drifted off to other projects, you lost the confidence to make even small changes. So I imagine with automated testing, one of the goals is to give developers that confidence. Is that an objective that you see with your work?

Phillip Mayhew 00:08:26 Yeah, I think so. I mean, you can substitute developers for manual testers here. A developer moves off a project, and across the whole development history of software applications, retaining constantly updated, up-to-date documentation has always been a problem. So when you move an individual who has what seems like an infinite space of knowledge in his head to another project, that information is just gone. Automated test cases are a way to retain information and documentation as a result of doing automated testing. When a manual tester moves on, a lot of knowledge goes with them. If we take some of that knowledge and invest in building automated test cases with it, then when they go, the developers can know: all right, we still have this huge suite of testing that's been done, and we can keep executing it without having the knowledge of the person who was specifically doing the testing.

Phillip Mayhew 00:09:37 So I think it's a way of building documentation without saying, "Sit down here and write out a bunch of documents about what was tested and how it's tested." And it's not just going back through emails; the developer can specifically look at the test cases that were written and executed and have some sense of confidence that we have coverage on this. If we execute these tests and I make a change here, then we have a sort of feedback loop to know, and have confidence, that we're not introducing new breaking changes. We're talking about games here, but that's no different from any other application being developed. When people leave a project, how do we retain confidence in what we're still pushing out?

Philip Winston 00:10:24 So in that previous episode, there was a lot of talk about unit testing and they did have a lot of unit tests, even though they were still doing manual testing. So what is the difference? What is the line between unit testing and the type of test automation you normally deal with and does it have to do with whether the entire application is running or are there other factors?

Phillip Mayhew 00:10:45 Yeah, so I think the difference there is we also need to focus on user input. When it comes to game testing, that is a key component: what is the user doing on the gamepad, and how is that impacting what's happening in the environment? What I've deemed the two most important parts of automated testing are, one, doing accurate inputs that simulate what a user is doing, and two, how do we validate that? So we've got an input and then a validation feedback loop to continue to test what we're trying to test and understand the results of that test. A developer has been very focused on unit tests: what specifically is happening in this method execution or this class instantiation or whatever it is. On the flip side of that, we've got manual testers who are testing this as a black box.

Phillip Mayhew 00:11:43 They have no insight into what objects are doing, and they're only doing validation based on visual cues. With automated testing, now we're empowering them to enhance what they're validating, alright? So we push a button and this block turns red, alright? We see it's red, but let's validate that it is the correct hue of red. Maybe you're working on a game that is for people who are colorblind. There are specific things that need to be validated that aren't necessarily easy for a manual tester to do, but are very simple to do in an automated fashion. So to draw that line, I think we have to think more about user input and what it's driving, and focus less on the specifics of unit testing, alright? That's handled by the developer. But now we're an automated tester: what can we do to add value to our testing life cycle here?

Philip Winston 00:12:46 Okay. One more question that’s kind of defining our terms and then we’re going to jump into sort of the process of adding automated testing to a project. Then we’re going to talk about game driver and we’ll try to cover everything. So this last defining our terms question was I saw the term collision testing, and that’s not a term that is normally used with regular applications. What is collision testing and what difficulties does it present in automation?

Phillip Mayhew 00:13:16 So when you have objects in games interacting, collisions happen. Sometimes they're important and sometimes they're not. As a manual tester, it might be easy to see a collision happen. You know when you bump into a wall: can you actually move through the wall, or do you have physics enabled and bounce off the wall? Whatever the game structure is for that, we do have the ability to sort of supplement from a testing perspective. How do we register that a collision is happening, and can we register that collisions are happening on specific objects when certain events happen? When the user bumps into a wall, alright, let's make sure that the collision code is actually kicking off and that something is happening. Again, those are very easy to verify visually most of the time. But if you have very tiny objects that are colliding, maybe from a visual perspective they're not easy to verify.

Phillip Mayhew 00:14:17 But what if, from an automated perspective, we can have some sort of helpers available to signify that yes, the collision happened? Now we can build more testing that covers a huge sweep of objects in different scenarios where collision is very important and we need to understand what's happening. So we can build a huge data set of collision interactions and test all of those in a massive sweep of automated testing, without having to sit there and look at an Excel sheet of a hundred different collisions we've got to test and validate.
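A data-driven collision sweep of that kind might look roughly like this. This is generic Python, not GameDriver: in a real engine the collision events would come from the engine's physics system, but a simple axis-aligned bounding-box (AABB) check conveys the shape of the test, with each row of the data set pairing two objects with the expected result.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """A 2D axis-aligned bounding box: a name plus min and max corners."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

def overlaps(a: Box, b: Box) -> bool:
    """True if the two boxes intersect (a strict AABB overlap test)."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

# Each row of the data set: (object A, object B, expected collision result).
cases = [
    (Box("player", 0, 0, 1, 1), Box("wall", 0.5, 0, 2, 1), True),
    (Box("player", 0, 0, 1, 1), Box("coin", 3, 3, 3.1, 3.1), False),
]

for a, b, expected in cases:
    actual = overlaps(a, b)
    assert actual == expected, f"{a.name} vs {b.name}: expected {expected}, got {actual}"
```

The point is less the geometry than the structure: a table of interactions that an automated run can sweep through on every build, replacing the hundred-row spreadsheet a manual tester would otherwise work down.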

Philip Winston 00:14:55 Yeah, so that does sound pretty specific to games or simulations, or I guess VR environments, stuff like that. So now let's talk about automated testing in terms of adding it to a project: what are some of the steps, and what are some of the things to watch out for? What is the desired impact of automated testing on a project? Suppose a game project has minimal automated testing, and you're going to help them ramp that up, and after a year or so they have a lot of automated testing. What differences are you trying to drive there?

Phillip Mayhew 00:15:32 So from the beginning, I guess a sort of personal goal of mine has been to limit the impact that implementing our product has on being able to add automated testing. We don't want you to change the architecture of your game. We don't want to complicate builds. We want to be as lightweight and as simple as possible, so there are fewer things that developers have to figure out to make this work. We want it to be very seamless. In the beginning, I think people get overwhelmed: what do we do? You've got too many moving parts. So I think one of the easiest ways to get started is, alright, let's do something very simple. Usually when a game starts, your intro screen comes up and you hit the Start button.

Phillip Mayhew 00:16:26 So let's write a simple test case that starts your game, waits for the object to appear that says "Press Start," and then presses Start. Then let's validate that the correct scene loads and the game is ready to go, in whatever state capacity is required to identify that the game is ready. Once you get people to write just one simple test case, it's like the wheels start turning, and it just comes to them, because they've been in their project. They know more about the game than we would. And as the wheels are turning, they're like, "Oh wow, we can do this, we can do that." Suddenly all the defects they've opened previously are flashing up in their minds, and they're thinking, "Oh yeah, now we could automate that pretty easily."

Phillip Mayhew 00:17:16 We can test that every time. With one of our first customers, I can't remember the running total, but last I heard I was kind of blown away at how many test cases they had written. So it's like anything else: you do a lot of hard work, maybe on a test case, figuring out how do we do this? But now we can copy some of that test case and reuse it over here for doing something very similar. It's like developing any product: you develop a piece of it, and now you can reuse that. Before long, you're just turning out test cases. At some point you probably pass a limit where you're creating more test cases than are adding value in some respect, but it really opens the door. Once you get past that initial hurdle of, alright, let's get it installed, let's add it to the game, how do we connect everything together? Once you get past that, the wheels are turning. Developers are very excited, because now it's taken a little bit of the burden off their shoulders, and they're going to be able to shift that back to an automated test engineer who's going to help them figure this stuff out and make it work.
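That first press-Start smoke test could be sketched like this. Everything here is a hypothetical stand-in, since GameDriver's actual API isn't shown in the episode: `FakeGameClient` simulates a game booting to its title screen, and `wait_for_object` plays the role of the wait-for-an-object-to-appear step.

```python
class FakeGameClient:
    """Stand-in for a real game under test; all names here are made up."""
    def __init__(self):
        self._elapsed = 0.0
        self.scene = "Boot"

    def tick(self, dt):
        """Advance simulated time; the title screen appears after 0.2s."""
        self._elapsed += dt
        if self._elapsed >= 0.2:
            self.scene = "TitleScreen"  # the "Press Start" object is now visible

    def object_exists(self, name):
        return name == "PressStartLabel" and self.scene == "TitleScreen"

    def press(self, button):
        if button == "Start" and self.scene == "TitleScreen":
            self.scene = "MainMenu"

def wait_for_object(game, name, timeout=2.0, poll=0.05):
    """Poll until the named object appears; return False on timeout."""
    waited = 0.0
    while waited < timeout:
        game.tick(poll)
        if game.object_exists(name):
            return True
        waited += poll
    return False

# The smoke test: wait for the title screen, press Start, validate the scene.
game = FakeGameClient()
assert wait_for_object(game, "PressStartLabel"), "title screen never appeared"
game.press("Start")
assert game.scene == "MainMenu", "expected the main menu scene to load"
```

The wait-then-act-then-validate shape is the part that carries over to real tools: input simulation on one side, an explicit assertion about game state on the other.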

Philip Winston 00:18:32 Yeah. I’ve seen the same thing with regular applications once the framework is in place. And once everyone sees the, whether it’s hooked up to CI or whatever’s reporting the test results, once you sort of see that process, it can grow from there. So how about the tradeoffs of when to write tests in the development process? I can imagine saying, hey, write them as early as possible, but especially with a game you might be iterating and changing the game a lot. Could you end up writing tests prematurely? What have you recommended? Or what do you see people doing?

Phillip Mayhew 00:19:07 Yeah, and I think this relates very closely to application testing in general. If you get testers in too early, they're writing test cases, and then maybe some development happens on the back end and suddenly those test cases are invalidated. It can become a very complex loop, where you've got people writing requirements and maybe the testers aren't reviewing those requirements or are unaware of them. Again, we're talking about manual testing right now, and those testers are probably not even aware of the requirements documents being written to determine how development is going to be done. So there is going to be some delay there; you're not going to be able to throw testers in immediately. But I think as games are being developed, we need to be cognizant that, alright, now we can do some automated testing.

Phillip Mayhew 00:19:58 So should we shift how we're developing games a little bit, so that we open up the opportunity to start doing automated testing earlier than we would normally throw a manual tester in? We don't want to throw a manual tester in, or throw automated testing in, and raise a bunch of defects where, well, yeah, of course we know that doesn't work and it's not supposed to work yet; we're kind of fudging that right now and it's going to change anyway, so don't open defects on this stuff. So I think there's going to have to be a little bit of thought about whether we're developing games in such a way that it makes sense to take advantage of automated testing, or any kind of testing, at an earlier stage. And maybe big studios are already doing that, bringing manual testers in sooner to help validate things.

Phillip Mayhew 00:20:48 I can't really comment on that, but I think there is the opportunity to test early. People like to say, "Let's fail fast"; that's one of the prevailing ideas today. So, yes and no: some opportunity may exist to test early, and some may not. Testing early can be a good thing and it can be a bad thing. I think there's an argument for both sides, and there are ways to make it work, but whether that's always an opportunity? Maybe, maybe not.

Philip Winston 00:21:28 So we've talked about this a little bit, but who is actually writing these automated tests? Is there an opportunity for manual testers to learn just enough programming, if they don't know it already? Or does it require a software engineer who's very experienced? What range of abilities or backgrounds can people have and still write successful automated tests?

Phillip Mayhew 00:21:52 Personally, I think that's a very interesting question, because in my opinion, if you look at the landscape, when you're in college you've got this whole younger generation who are huge gamers. You ask, "What do you want to do?" "Oh man, I'd love to get into working at a game studio, right?" "How do you do that?" "Well, I don't know. I've created some of my own games, and I'll just keep interviewing and hoping I break into the industry." I think a lot of people know that's a very tough thing to do. What this actually does is create an opportunity where people with some development experience, fresh out of school, learn testing, and they can implement these automated tests pretty easily, because they've taken their own initiative to learn how to use engines like Unreal and Unity.

Phillip Mayhew 00:22:49 And they understand the basics of game design. They're taking game design classes in college; they may have taken some testing classes as well. So we have this huge population. And I say younger, but there are also older people who have just been doing development, who've always wanted to do game development, but it's hard to break into the game development scene on a larger scale. This opens up an opportunity for them to learn a lot about how the underlying games are being developed. They're able to understand that, and they're able to write test cases for it. And again, like I said before, we're not writing comprehensive AI to go from start to finish of a game. We're just trying to write simple test cases that perform actions, and we validate what those actions are doing.

Phillip Mayhew 00:23:50 So we have a huge population of people who are technologically and developmentally capable of writing these test cases. And it's going to give them the opportunity to try to break into the industry, because they're now a part of the development community of games. It's offering an opportunity they might not have previously had. I've talked to a fair number of people as we've been hiring employees: they're manual testers in games, they have some development experience, and they want to be game developers, but they haven't broken into the industry. So this is going to open doors for people to get a foot into that industry, I believe. And as the game automated-testing industry continues to grow, it's going to offer a lot of job opportunities for people.

Phillip Mayhew 00:24:44 At the end of the day, we want to provide a tool that creates an underlying industry that allows people to get jobs, train up, and learn expertise, and I think that's going to start to happen. You probably have a lot of manual testers, and we're always going to need manual testers; they're not going away. So if you have somebody who isn't necessarily willing to sit down and learn how to write code, but writes great testing documentation, there's still a lot of opportunity for that. And we are creating tutorials, we're creating videos, and we want to continue to empower people to learn how to do this. I think there are going to be third parties writing blogs on how to do this stuff. We just want to see the industry continue to grow and get to where enterprise application testing is, where most of your testers can sit down and write automated tests. I think that's going to be a real boon for the industry.

Philip Winston 00:25:48 Yeah, I can see it's making sort of an on-ramp where people can incrementally develop their skills. I think that's really interesting to think about: the life cycle of someone's career and that growth. So we talked a little bit about what that first test is, checking that the start screen comes up. But let's say we have an existing application without any real automated tests. How do we come up with a list of the things that are going to be testable with automation, versus what do you stay away from and say, "Hey, we're not going to tackle that"? So maybe rather than the first day, this is like the first three months or so.

Phillip Mayhew 00:26:24 I think a good starting point is implementing a MATS, a Minimal Acceptance Test Suite: a test suite with some minimal criteria that we need to ensure happen every time. Maybe the Start screen is one. Something like: can we create a new character once we start the game? Can we save the game? I think you start with just some simple but crucial tests that we execute every time. We look at our manual testers' spreadsheet or list of tests that we're executing every time, right? Which of these are going to be quick and simple and powerful, so we can implement those? Let's also reflect on some of our high-priority defects that have been opened in the past and ascertain whether we should create some automated testing around those, because maybe that's some fragile code where we've run into problems with things breaking. As an outsider looking at a game, I can't necessarily answer that for a specific game, but a developer who's worked on that game can.

Phillip Mayhew 00:27:35 Those things are going to pop right out to them. It's going to be very obvious to them, and probably the same for the manual testers. It'll be like, every build we get, this thing is broken or a piece of it doesn't work. These are quick wins, and every time a manual tester has to open a defect for something, it's burning time, burning cycles, burning hours, and that's burning dollars. If we can automate it, run it, and automate the opening of a defect for it, we're saving time. Any time we can save time, it's a win. So I think that's a good opportunity in the first one to three months: let's get some wins out there.

Philip Winston 00:28:18 So where are these tests being run? It seems most likely you'd have them running during your CI, your Continuous Integration. Is that where these automated tests run, or are there other possibilities?

Phillip Mayhew 00:28:32 I think we're probably going to see a mix. Definitely the CI/CD pipeline is a good integration point; this is no different from any other application testing. Let's say you're using Jenkins, for instance. You have Jenkins kick off your build pipeline, and then after the build is successful, alright, let's run our automated test suite and see what happens. We also have the ability for developers who are working on something to say, "Alright, I want to check in this code, but before I do, let me review the automated tests." Maybe I don't want to commit the code and wait for the automated build process and integrated testing to happen; let's say I want to go ahead and just test that manually on my machine, run the test suite before I check any code in. So I think there's going to be opportunity both sitting at your desk, writing and executing test cases, and adding them into the build pipeline. Your automated test engineer is literally going to have to sit there and test as he's writing these test cases as well. So it's going to be happening in two different places. It's almost as if, as integrated test cases are being kicked off, they sort of become regression, because as we're writing these, we're also testing these test cases.
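The build-then-test gate described here can be sketched as a tiny pipeline runner. This is a generic Python illustration, not Jenkins configuration; in a real job each stage would shell out to the build and to the automated test suite, but the ordering logic is the same: a failed build means the test suite never runs.

```python
def run_pipeline(stages):
    """Run named pipeline stages in order, stopping at the first failure.

    `stages` is a list of (name, callable) pairs where the callable
    returns True on success. Returns (stages that ran, overall pass/fail).
    """
    ran, passed = [], True
    for name, step in stages:
        ran.append(name)
        if not step():
            passed = False
            break  # downstream stages (e.g. the test suite) are skipped
    return ran, passed

# Hypothetical stages standing in for real build/test commands.
ok = lambda: True
fail = lambda: False

ran, passed = run_pipeline([("build", fail), ("automated-tests", ok)])
assert ran == ["build"] and not passed  # tests never ran: the build failed

ran, passed = run_pipeline([("build", ok), ("automated-tests", ok)])
assert ran == ["build", "automated-tests"] and passed
```

The same suite can sit behind both entry points Mayhew mentions: wired in after the CI build, or invoked directly from a developer's desk before a commit.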

Philip Winston 00:29:57 When a test fails, what does the developer want to see? So they have a name for the test, and I guess sometimes the test names can be long and maybe descriptive, and they get a red indicator that it’s failed. What else do they need in order to then debug the issue or fix the problem?

Phillip Mayhew 00:30:19 So there are a couple of things here. In our product, of course, we've added the ability to capture screenshots. So if a test case fails, well, let's grab a screenshot of what happened. Something more comprehensive may come down the road, where you want a video of what happened and a log of what happened. What we executed in our test case is essentially our log there, but also, like in Unity, let's save off a copy of that log, because it probably has some more information as well. So it's a combination of things, no different than if a manual tester did it, except we're adding some automated capability of grabbing a screenshot of what happened. And that's Unity again; it could be Unreal as well. But one of the partner companies that we work with, Backtrace, is also collecting logs on failures. So I think there's a combination of tooling. We don't want to necessarily be responsible for everything; we're trying to fill a specific part of the ecosystem and integrate with other tools that are focused on, and have the expertise in, doing that other part. So it's no different from tracking any other defect, I believe.

Philip Winston 00:32:16 One more general question, and then we'll really dive into GameDriver specifics. What about performance testing? Again, in your background prior to GameDriver, you did a lot with performance tests. Is that a different breed of test than a functional automated test, or is it just another type of test you'd include? I guess the question is: what, specific to performance testing, does one have to consider when making an automated test?

Phillip Mayhew 00:32:43 I started doing performance testing 15 years ago, and we're talking purely at the protocol level, right? So if you have a web application, we're sending HTTP traffic over and measuring the response of the application. With massively multiplayer online games now, you still need to test all that underlying architecture. As testing has changed from a performance standpoint, we still do a lot of the protocol-level stuff, but people also started spinning up different types of tests where we're not just simulating a bunch of users; we're spawning a thousand browser instances. So we've got two different types of performance testing happening in a web-based application, and the same could be said from a client perspective. However, when you're performance testing a game, it's not just a lightweight browser page; you've got huge resource consumption happening in the game, which is essentially the client itself. So I think we're still going to retain our classic performance testing, where we're doing protocol-level testing against our server architecture, and we're not necessarily spawning up a thousand game instances to drive performance testing through an automation framework.

Philip Winston 00:34:07 That's interesting. I wasn't really thinking of the back end of the game, but that's more like a normal web application, where the front end is the graphical client. So I guess to be clear, and now we can start talking about GameDriver in particular: do you write tests for the back end as well as the graphical client, or is GameDriver focused exclusively on the graphical part?

Phillip Mayhew 00:34:28 So through the power of reflection in Unity, we have the capability of doing both. That's one of the advantages. Sorry, my classic performance-testing background misinterpreted your question, but yeah, when you're thinking frames per second, how well is the client performing? All of that is very pertinent information. I think there's still going to be manual testing in identifying whether a game feels slow. Our agent is very lightweight, and we've looked at ways to continually mitigate its impact so it's not affecting frames per second, so that even if you're running an automated test, we can still pull the frames per second and make sure the game is functioning from a performance perspective without worrying about the impact of what we're doing.

Philip Winston 00:35:22 GameDriver works with Unity today. Can you speak a little bit about Unity’s role in the game ecosystem, why it was chosen as your first game engine to work with and just a little bit about Unity?

Phillip Mayhew 00:35:34 Yeah. So before me, my other two co-founders, Rob and Shane, who were friends before we started the company, wanted to get into game development, and Unity has a very low knowledge cost for somebody getting started who wants to write a game, because they had some familiarity with C# and had never written C++ before. So that sort of took Unreal out of the equation. They were interested in, all right, let's build a game. And then, well, if we're going to build a game, how are we going to test the game? Because both of them were from the testing side of enterprise software. So they were already thinking ahead: what are our options? That sort of led to the birth of GameDriver. And when Shane asked me if I wanted to be a part of this, we agreed we would start with Unity first. I think he chose Unity because he was interested in doing game development with it: let's start here and build a concrete product before we try to expand. So that's how Unity happened to be chosen as the first target of our product.

Philip Winston 00:36:52 Can you speak a little bit to multi-platform testing? I know that Unity runs on many different platforms, including mobile. Where would you recommend a developer run the automated tests? Do they have to test on every platform they ship on? What platforms does GameDriver support? Just give us a picture: I'm writing a game that runs on many platforms. How do I test it?

Phillip Mayhew 00:37:15 At the end of the day, running your automated tests on Windows, for instance, is only going to give you so much validation. GameDriver has the ability to be deployed on Android and iOS devices so that you can run those tests there. We have support for some device farms as well. So if a developer doesn't have an Android device on hand and wants to run an automated test, he can spin up a device on the device farm and run his test. If you're targeting a platform, you need to make sure it's being tested on. Let's pretend you weren't doing automated testing: you're not going to deploy a game or an application on a device without it being manually tested. So anywhere we're manually testing, we would need to see how we can introduce automated testing to help carry that test load, and again, add all the benefits of doing automated testing in the first place. So we support Windows, Mac, Android, and iOS, with Linux coming. We are also starting to move into the console market where Unity is supported: Switch is being targeted, Xbox is being targeted, and hopefully Sony will be targeted in the future.

Philip Winston 00:38:30 You talked about writing that first automated test in sort of a generic way, but with GameDriver, can you walk me through, in whatever detail is appropriate for this conversation, how I would add that test? And maybe there's a tutorial online for more details.

Phillip Mayhew 00:38:50 Yeah, sure. So the first thing would be installing the Unity package. We have your standard plugin-type Unity package. You import the package into your Unity game, create a new game object, which is going to host the GameDriver component, then add the script component for GameDriver that's now listed in your script drop-down. Once that's there, spin up an instance of Visual Studio or Rider, whichever you want. Now, if we're writing test cases, we're going to use a testing framework like NUnit, but I like to keep it even simpler: let's create a console application in Visual Studio or Rider. All right, the first thing we do is add the required references for GameDriver; I think there are maybe four references that need to be added. Then add the using statement at the top.

Phillip Mayhew 00:39:39 Again, we're all C# here. Add the using statement, instantiate the API client with the new empty constructor, add a connect statement. All right, we're going to connect to our localhost on a predefined port that is configured in the agent; we'll use our default port. All right, we're connected. Let's wait for an object. So the first thing we're going to do is add an API wait-for-object. Again, our Start screen is coming up, and we want to wait for the Start button to be visible before you click it or hit enter or whatever. For that we've created a patented technology called HierarchyPath. HierarchyPath is very similar to XPath, but it allows us to reference objects in the Unity game tree by a string path.

Phillip Mayhew 00:40:32 This allows testers to write tests in a way that aren’t relying on coordinates and aren’t necessarily relying on the exact structure of the tree not to change. So if our start button was in the root, but we refactored some things and now it’s embedded down into the, a couple canvas layers or something like that. The tester can still execute the same test over and over. So we would write, we also have a plugin for that as well, to create some rudimentary hierarchy path for an object that you select in the tree. So we get the hierarchy path for our Start button. We do our client dot, wait for object, pass in the hierarchy path. And then we’ll do a client disconnect and, boom, there’s your first test case. So we’ve created a simple console application that connects, waits for the object and then disconnects.

Philip Winston 00:41:26 So you mentioned your hierarchy paths are similar to XPath. Can you remind me what XPath is?

Phillip Mayhew 00:41:32 Yeah, so with XPath, let's say we have an XML document, which is a node-leaf tree structure. We have attributes assigned to these different nodes, or the nodes have names themselves. So if we're looking for a node that has a tag called button, we can simply write forward slash, forward slash, button, and it's going to go down through the tree, wherever it is, and look for a relative path to an element with the tag button, and that's the object we want to work with. So if you move that Start button down 10 pixels, it doesn't matter; if you move that button anywhere else in the object tree, it doesn't matter. We're still going to find it. One of the first things people always bring up when we're demoing HierarchyPath in GameDriver is, well, what if you have a massive tree? This is going to be slow. Well, slow is relative, maybe, but again, we're doing automated testing. It's not like we're inserting plugins that are trying to hit some frames-per-second target in the test. We're executing a test case. If it takes 300-400 milliseconds to identify that object, okay, it is what it is, but we're able to achieve our intended goal of performing an action and validating the result.
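The relative "//" lookup Mayhew describes can be shown with plain .NET: "//button" matches every button element wherever it sits in the document, so the query keeps working even if the node is moved to a different depth.

```csharp
// Standard .NET XPath: "//button" finds buttons at any nesting depth.
using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml(
            "<menu><panel><button>Start</button></panel><button>Quit</button></menu>");

        // Relative path: both buttons match, regardless of where they live.
        XmlNodeList buttons = doc.SelectNodes("//button");
        Console.WriteLine(buttons.Count);        // 2
        Console.WriteLine(buttons[0].InnerText); // Start
    }
}
```

GameDriver's HierarchyPath applies the same idea to the Unity scene tree instead of an XML document.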

Philip Winston 00:42:56 That's interesting that you compared it to an XML document. I know in Unity and other game engines, you end up with a hierarchy, which is kind of the world and the levels and the things in the world, so you're talking about navigating that hierarchy. What does it mean to interrogate the game? Is that Reflection, or is that making calls to an API that the game provides? What's interrogation in your GameDriver terminology?

Phillip Mayhew 00:43:25 Yeah, it could be both of those things. Reflection is a very powerful thing, and you get it for free in Unity, and that's great for our product. So if you need to look at a specific value of a component, you can get it fairly easily, and we can test against that. If you want to write specific code that you want to execute, that does something even more complex, then we can still call those methods from the API client. So you can have a mix of both. If you want to embed a bunch of debug code and execute it, not a problem. If you don't want to do that and you still want to introspect variables at different times, or flag when something hits a value, all of those things are possible.

Philip Winston 00:44:09 Another term you used a little bit ago was NUnit. I think that's the .NET version of JUnit, or a variation of JUnit. So I wanted to mention a previous episode here, Episode 167, The History of JUnit and the Future of Testing with Kent Beck. I think that might be interesting background for this. You talked about creating a console application that runs a GameDriver test. At what point would you recommend using a framework like NUnit, and what are the advantages of a framework with GameDriver?

Phillip Mayhew 00:44:44 The console is a very simple, let's-just-make-it-work approach. Let's not add the complexity of setting up NUnit with setup and teardown fixtures and all of that; let's keep it simple with the console application. But once you've made that work, all right, now it's time to actually migrate to a real testing framework. So we're talking about NUnit, which we create tutorials for as well. But in reality, you can use any testing framework, because ultimately we're instantiating the GameDriver API client and executing things, whatever those things are. So we can use any test framework to do that. But once you've got the basics established and you can connect and do something, now it's time to start rolling this into a testing framework like NUnit and start building real test cases.
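Migrating the console test into NUnit might look like the following sketch: the connect and disconnect plumbing moves into SetUp and TearDown fixtures, so each test method focuses on one interaction. The GameDriver identifiers and port are, again, assumptions drawn from this conversation rather than the documented API.

```csharp
// Sketch: the console test restructured as an NUnit fixture. GameDriver
// names and the port are assumptions from this interview, not the real API.
using NUnit.Framework;
using GameDriver.ApiClient; // hypothetical namespace

[TestFixture]
public class StartScreenTests
{
    private ApiClient _client;

    [SetUp]
    public void ConnectToGame()
    {
        // Runs before every test: fresh connection to the agent.
        _client = new ApiClient();
        _client.Connect("localhost", 19734); // illustrative port
    }

    [Test]
    public void StartButtonAppears()
    {
        // Same wait as the console version, now with a real assertion
        // that the test runner can report on.
        Assert.IsTrue(_client.WaitForObject("//*[@name='StartButton']"));
    }

    [TearDown]
    public void Disconnect() => _client.Disconnect();
}
```

The payoff over the console app is reporting and lifecycle management: the runner discovers tests, isolates failures, and produces results a CI pipeline can consume.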

Philip Winston 00:45:38 So I think we've got things started: we've added tests, we've added the testing framework. Let's talk a little more about some details of GameDriver, or some situations I might run into as a developer using GameDriver. You mentioned screenshots on errors, but what about recording and playback in general? Is that part of some tests or all tests? What's the role of recording and playback of gameplay?

Phillip Mayhew 00:46:06 We're about to release, I guess, probably a beta version of our recording tool. Now, what is a recording tool? This goes back to even application testing: the ability to record something and simply play it back is limited, because there are so many variables, and when you introduce games into it, you're adding massively more variables. So the reality is that you're never going to have a simple record-and-playback that is a hundred percent reliable. You're always going to have to work off of it. So what does recording add? Well, recording gives you a starting structure for your test case, so you don't have to figure out all the minor details via code. You can create a scaffolding of your test case.

Phillip Mayhew 00:47:03 So we can record your flow of what you’re testing. You’re moving through the game at a specific point. We can record that alright, now you can take that. Alright how do we make this a repeatable test? Let’s add some wait statements here so that while we’re waiting for specific actions or objects to have certain values, before we move through in our test case. And coming back to where we started, how are manual testers going to do this? Well, it’s, as you’ve seen, if you want to empower manual testers and test automation engineers, they’re going to need to know more about the game. Just handing them a running game is not necessarily going to be sufficient for them to be able to understand how to automate the testing of that game. Now, if that happens via documentation, information sharing, or they actually have the game running in Unity so that they can learn more about how to test that game, that’s going to vary between development studios.

Philip Winston 00:48:06 I read that GameDriver can run tests faster or slower than real time. In practice, do people tend to run tests at the fastest possible speed? Or how would you recommend people set the speed of their tests?

Phillip Mayhew 00:48:18 I think keeping your test cases at real time has advantages. You never know what could be introduced that could cause defects that aren't really defects. And the beauty of automated testing is that it's not confined to working hours; you can run these things 24/7. So the time pressure to execute faster than real time is probably a moot point. I mean, if you need to run tests twice as fast, well, spin up two Jenkins agents that kick off your tests simultaneously. So I think it's kind of a moot point whether we need to execute faster or slower.

Philip Winston 00:49:08 How about reusable functionality? You mentioned NUnit's test fixtures; maybe that's the answer. But suppose I have a series of tests that all need to start with some common functionality, some common steps. Is that something that GameDriver helps you with, or is the testing framework how you do that?

Phillip Mayhew 00:49:27 Again game driver is just a tool. It’s not your testing framework. So we’re building a framework, maybe we have a set of code of how we start our game and we need to execute that in all of our tests if we’re shutting down the game at each time. So you’re going to create your own sort of testing framework of how you use game driver to interact with your particular game or product.

Philip Winston 00:49:54 Okay, a couple more detailed questions. One build method with Unity is called IL2CPP, where the C# is converted to C++ and then compiled to a binary. So in that case, it's not running .NET, at least not in the normal way. Do you run your tests against a game compiled down to C++, or would you want to run that in C# mode?

Phillip Mayhew 00:50:22 It's funny you bring that up, because that's been the source of many headaches for me from a development perspective. When we do our own testing of our product, we support all LTS versions of Unity, so we've got to test 2019, 2020, and now 2021. We test on Windows, Mac, Android, and iOS, and we test all of those using Mono or .NET, and we also test against IL2CPP to make sure everything works. Because if companies are building their product to run off of IL2CPP, we want to make sure they're able to actually automate and test with our product on those builds as well.

Philip Winston 00:51:09 Okay. Yeah, that's a lot of configurations to test. How about the new input system versus the classic input manager in Unity? Is there anything to say there relative to GameDriver, or is that just a detail, and games can use either one?

Phillip Mayhew 00:51:24 So we, for quite a period of time, where you could go on and you could search about it on Google, Unity, new input system. And if you look back in like 2020, maybe there’s tons of posts where people like, oh, this thing is terrible, it’s slow, it doesn’t work. And that was probably like version 0.2 or something like that. I can’t remember their versioning information, but now they’re at version 1.3 and I think it’s 1.3 and you’ve got all these people in the community or writing blog posts. Alright, here’s how you use the new input system. And only recently have we started working on support for that. So while it has some of its own challenges that we’re addressing, it’s a con pro for on both sides where implementing some of the game driver functionality for the old versus the new was more difficult than the other but now we’re supporting it and ultimately it, we want it to be no fuss, no must are using the old input system.

Phillip Mayhew 00:52:30 Great are using the new input system. No problem. Are you using a mix of them? No problem. We’re working to support those. We’re also looking at how now that we’ve added support for these, are we going to support things like Rewired, or are we going to create some kind of SDK interface that lets people build out that compatibility, no matter what input manager they’re using, whether they choose to use Unity or something like Rewired, or they build their own how can we still enable these people to test their, their game and application? As I said earlier, critical things, user input and validation of the result of your tests. So, we need to make sure that we’re able to provide support for the test user input, whatever that is.

Philip Winston 00:53:17 Let me just flag one thing you mentioned: Rewired. Is that a company, or what is that?

Phillip Mayhew 00:53:22 I can't tell you a whole lot about it, but it's a product, and maybe a company of the same name as well; I can't remember. But they created their own input manager, where the selling point is easily switching between keyboard, mouse, Nintendo Switch controller, or PlayStation: any device, easily. Maybe they have better performance than the Unity input manager. I can't comment on the selling points beyond the fact that we've looked into whether and how to support it.

Philip Winston 00:53:56 Okay. We've talked about adding GameDriver, and we've talked about some specific details if I'm using GameDriver. Let's start wrapping up. I think GameDriver is exclusively for Unity today. What other game engines are you planning to support, and what is the timetable for those?

Phillip Mayhew 00:54:14 We've got two ongoing ports. One is Godot; we're working on building an initial version of that. I did a proof of concept for Godot maybe a year and a half ago, just to prove it out and see how much product reuse we would have between the two. We're moving forward with that because it's what I would classify as a community engine, and we think it would be nice to support that. And we're also working on an Unreal port. We did a proof of concept for it around September of last year. So you've got C# exposed in Unity, and C++ for Unreal.

Phillip Mayhew 00:55:07 And so there’s quite a bit of difference in how we add value. And again, how do we be a light footprint for developers who want to add it to their game? So, there’s still a lot of thought there, but we’re actively moving forward with that. And we want to continue to build out the, on the console market as well. And we want to be synonymous with game testing whether you wrote, whether you were hired to write automated tests on Unity, and then you move to an unreal project. We want to empower those people to be able to do both, it’s job stability for people, let’s how do we help people?

Philip Winston 00:55:45 So Godot is an open-source game engine, and Unreal is sort of the other big commercial game engine. Do you have a sense of how many companies you work with have already embraced automated testing and are just looking for a way to do it? Or is part of your sales cycle convincing companies that automated testing is worthwhile in the first place?

Phillip Mayhew 00:56:10 My personal thought is that automated testing has been around for so long, you'd be hard-pressed to find anybody in technology who doesn't know what automated testing is. That's not the hard sell. The hard sell is they're thinking: yes, we need to do this, but how? We're going to have to allocate somebody to look at this, somebody to investigate it. It's that speed bump we need to get them over. So we're trying to build out more educational material, training, and quick-start guides, something that makes the speed bump look like a speed bump rather than a mountain, even though you're still going to have to allocate a resource to look into this and use it. It's project planning: how do we make it work? It's helping people feel comfortable that it's not a huge time sink to even get started, which it's not. For me it feels like an easy sell, but there's a lot going on on their side of things that we don't see. I think everybody is on board with automated testing; it's just a question of how we get them started.

Philip Winston 00:57:24 What developments do you see in automated testing, beyond games or in games? What developments are you looking at which might impact your business and developers in the next couple of years? Anything coming along?

Phillip Mayhew 00:57:38 I think it's understanding the troubles that developers are seeing now. The issues developers had to deal with in the past are changing. Maybe they were using in-house engines, and that created its own issues; now they're moving to commercial engines like Unreal and Unity, so they're dealing with different issues. How do we modify and continue to adapt our product so that we can help them solve their new challenges, whatever those might be in the future?

Philip Winston 00:58:13 So I think we’re done. Is there anything else you’d like to add that we didn’t cover today and how can people get in touch with you and learn more about GameDriver or contact you?

Phillip Mayhew 00:58:24 We're always interested in talking to anybody. As a friend of mine says, I'll talk to anybody. So if you have questions, or you're interested, or you're a professor who's teaching testing and you want to learn how to teach your students about GameDriver or just testing in general, we're happy to meet and happy to talk. Anybody can reach us through the contact information on our website, GameDriver.io. We wish everybody happy testing going forward, and we hope you'll embrace the continued revolution of automated testing.

Philip Winston 00:59:00 Thank you. That’s a good place to end and I will put some links in the show notes for more information. This is Philip Winston for Software Engineering Radio. Thanks for listening. [End of Audio]


SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
