
SE Radio 457: Jeffery D Smith on DevOps Anti Patterns

Jeffery D Smith, author of Operations Anti-Patterns, DevOps Solutions, discusses anti-patterns in software development organizations and how they can be fixed. Host Robert Blumen spoke with Smith about why he chose to focus on what can go wrong; fixing things that are broken in your organization; information hoarding; why important information about systems can be centralized or otherwise difficult to locate; processes for information sharing; why documentation tends to lag or be incomplete; “culture by decree”; gaps between stated culture and how organizations work; evolving culture; DevOps as a culture shift; alert fatigue; costs of both under- and over-alerting; what is the right level of alerts?; how to calibrate alerting for new products with no history of normal; cognitive biases in alerting; and tooling – do organizations under-invest in tools? What is the cost of not having tools?


Show Notes

Related Links

Transcript

Transcript brought to you by IEEE Software
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected].

Intro 00:00:00 This is Software Engineering Radio, the podcast for professional developers, on the web at se-radio.net. SE Radio is brought to you by the IEEE Computer Society and IEEE Software magazine, online at computer.org/software.

Robert Blumen 00:00:16 For Software Engineering Radio, this is Robert Blumen. I have with me today Jeff Smith. Jeff has been in the technology industry for over 15 years and has managed DevOps transformations at the ad tech firm Centro and the online ordering platform Grubhub. Jeff is the author of the recently published book Operations Anti-Patterns, DevOps Solutions, which will be the topic of our conversation. Jeff, welcome to Software Engineering Radio.

Jeffrey D. Smith 00:00:48 Thanks Robert. Thanks for having me. Looking forward to it.

Robert Blumen 00:00:50 Great to have you here. Your book has 12 chapters, of which 11 focus on a specific anti-pattern. We will be delving into some of those anti-patterns, but I want to talk at a high level first: why did you choose to focus on how things go wrong?

Jeffrey D. Smith 00:01:07 So that was actually a pretty organic process throughout the writing of the book. The inspiration for the book was originally writing a DevOps book for people that aren’t in ideal situations. A lot of the material that was out there was great for small organizations or was written for C-suite executives that had the power to restructure the org chart and make all these broad sweeping changes. But there were a lot of individual contributors and lower-level managers that I spoke with out in conference circles and things like that, where they wanted this change but didn’t feel empowered to be able to make it. So that was really the genesis of the book. And then throughout the writing process, we found that it was just a little more engaging and entertaining to establish these sort of anti-patterns because they resonated with people so quickly. And it was a very good hook for them to say, yes, I feel this pain, I understand this pain, but I may not understand where it’s coming from or what the genesis of that is. So, you know, we noodled around with that idea a bit and it really started to work out. So that’s kind of how the approach came about.

Robert Blumen 00:02:21 Perhaps you just answered this, or perhaps you can expand on it a bit. These anti-patterns, when they exist in organizations, are they considered pain points, or is it simply, that’s how we do things, and people don’t think there must be a better way?

Jeffrey D. Smith 00:02:36 That depends on your perspective. Sometimes anti-patterns exist for the benefit of one group to the detriment of the other, right? So if you’re on one side of the spectrum, you may just say, oh, this is just business as usual. But if you’re the one sort of being crushed by this, you know, you’re much more apt to want to dig into the causes and actually fix this thing. So for example, from an operations perspective, it may feel very normal and natural that devs have zero access to production because devs break things, they just can’t be trusted. But from a dev perspective, they’re like, well, there’s a lot of things I can’t do on a day-to-day basis that would be much better handled and much more direct and effective if I were able to do that. Right? So your perspective on whether that’s an anti-pattern or not differs, but I think if you step back and look at the health of the organization, organizationally people will look and say, you know what, this is not completely effective, the way we’re doing it. And the truth really lives somewhere in the middle. So how do we go about, you know, getting the two sides together and, you know, solving this problem, which is really a common problem, because you know, the operations staff doesn’t want to be tasked with constantly doing things that they’re not adding value for. And developers don’t want to be stuck waiting behind some arbitrary gate for someone to type a bunch of commands that they’re going to dictate. And the person typing the commands may or may not fully understand what they’re typing anyway.

Robert Blumen 00:03:58 Perhaps you’re familiar with the famous quote from Russian literature that all happy families are alike, but every unhappy family is unhappy in its own way. Do all organizations have the same issues or does everyone have a unique set of problems?

Jeffrey D. Smith 00:04:17 So, you know, this is definitely a point of controversy. I feel like most organizations more often than not have the same problems, right? It might be spiced a little bit differently, but you know, at the core it’s the same problems. And when we talk about that, you know, I’m really talking about the larger aggregate of companies that we don’t see in blog posts every day, right? Google has its own set of problems. Apple has its own set of problems. Netflix has its own set of problems, but because they have such an out-sized voice in the industry, we forget how uniquely rare those companies are. For every single Netflix, right, there’s a thousand local health insurance companies that are trying to process claims on a daily basis. So I think as an industry, we sort of give this outsize weight to these companies that do have unique problems due to their scale. But if we step back a little bit and just ask what folks are doing day to day, I’m guessing most of us have, you know, a lot of the same problems.

Robert Blumen 00:05:17 There is a whole genre of books, and I’m not sure if you’d put yours in this or not, but they’re called the XYZ cookbook: I look up, okay, I want to format a date string in Python, I open the chapter for that, and here’s how to do it. Your book does read a bit like that, in that: I have this problem, let’s understand what it is, look at some ways to fix it. Is DevOps transformation like that: if I identify all the pain points and fix them one at a time, am I good? Or is there a bigger message or philosophy that underlies all of these issues that needs to be grasped?

Jeffrey D. Smith 00:05:57 I think there’s an underlying philosophy. And even if you look at the book, you’ll see themes sort of recurring over and over again. And, you know, one of the big things that we just have to get better at is understanding the actual problem that we’re trying to solve and communicating amongst all of the stakeholders to make sure that that’s what we’re solving for, as opposed to, you know, dealing with these past grievances or fiefdoms; you know, that’s sort of what continues to fuel the beast, for lack of a better term. So I do believe that there is a lot to be gained simply by taking some of the core principles of, you know, how do we identify a problem and how do we work together to solve it, and applying those in any organization. You know, common goal setting is a perfect example.

Jeffrey D. Smith 00:06:44 There aren’t too many organizations that can’t benefit from having a common set of goals and mobilizing a team to work towards that single goal. When you look at the OKR strategy that was popularized by Google, that’s really what it is at the core, right? This is the goal that we’re trying to accomplish, and we want to make sure that all the teams are focused and working towards that goal. So if that can work and, you know, have impact on a company the size of Google, imagine what it can do for smaller organizations, as long as we’re all rallying around the same piece. And I think a lot of the rift that has occurred between dev and ops has historically been because there’s a separate set of goals, a separate set of metrics, and the two aren’t forced to overlap in any way that brings about meaningful collaboration. And in fact, it just sort of fuels animosity.

Robert Blumen 00:07:34 So then anti-patterns are instances, and by understanding the pieces, you can understand the bigger picture?

Jeffrey D. Smith 00:07:43 Right? Absolutely. Because, you know, as engineers, we’re accustomed to seeing patterns of problems, right? Even when you’re going through school, you’re not dealing with the actual real-world problem that you’re going to experience once you graduate and you’re in the field; you’re experiencing and dealing with an abstraction of that pattern, but then being taught how to recognize it and say, oh, you know, this is just another manifestation of that problem. Once you begin to recognize those patterns, you start to see it even in human interactions. It’s something that, you know, I always talk about when I’m learning a technology: if you learn the underlying technology that it’s based on, learning the particular implementations is really easy, right? If you understand how relational databases work, whether it’s Postgres or MySQL or SQL Server, you know that there’s going to be a transaction log, you know that there’s going to be some sort of buffer pool, right? And the syntax and names around those things are just sort of flavor text for that implementation. I think it’s the same thing with organizations, right? Once you understand the sort of patterns that you’re accustomed to seeing fail in an organization, it’s quick to identify and come up with remedies.

Robert Blumen 00:08:47 Jeff, we’ve been talking in general, at a high level, for a bit here. I want to now talk about some of these anti-patterns that you cover, as time permits, in the remaining part of the hour. Let’s start out with one known as information hoarding, also known as “only Brent knows.” Can you explain what this is?

Jeffrey D. Smith 00:09:08 So Brent was a reference to a very popular character in the Phoenix Project book, where he is that resource that knows everything in the organization, right, from the technical perspective. And when things go bad, it’s like, oh, Brent always fixes that. And I think every organization has had, or has, a Brent in their midst, right? Sometimes it might even be someone that’s listening to this podcast right now. And we’ve historically assumed that Brent could be an information hoarder as a means of survival, right, as a means of self worth. But in reality, it’s usually a little bit more nuanced, right? It’s a combination of things. It’s time to disseminate that information. It’s who do you disseminate it to? So I wanted to create this chapter to sort of highlight the idea that you could be an accidental information hoarder, right? Internally, you don’t see yourself as sort of keeping all of this information personal and close to you, but you’re not actually doing anything to get the information out of your head and into the broader audience and into the broader community.

Jeffrey D. Smith 00:10:12 So first you’ve got to recognize that you’ve got a problem before you can actually solve it. And I’m guilty of this too. You know, you do a thing so often that, you know, you might forget to document it or you might forget to share it, or you might have the curse of the expert where you just assume that everyone knows how to do this. Oh yeah, of course you guys know how to do this. You simply do that, do that, run this command, and then boom, you know, Bob’s your uncle. So that chapter was really about focusing on different ways and techniques you can use to sort of get that information out into the world, out into your teams, so that you aren’t the single point of contact. I think another interesting facet of that too is the idea that if you are that source of truth, every question that comes to you, the answer is always filtered through your own perspective. Right? So if someone asks me a question that only I have information or data about, the answer that they receive is inherently going to be soiled by my perspective, which could be limiting, right? Maybe I don’t have the full context of things, and I’m self-selecting information that I think is helpful and leaving out other bits of information that I think are not, but who knows, it could be, you know, the thing that breaks the case wide open for whoever I’m explaining it to.

Robert Blumen 00:11:25 Hoarding sounds like a psychological disorder or antisocial behavior, like those cat ladies who have 25 cats, or people who own every issue of a newspaper they subscribed to for 50 years. Is it really a disorder, or is this a natural process, because people have expertise in different things and they’re driven to meet deliverables?

Jeffrey D. Smith 00:11:48 You know, I think it’s a mix of things. I think it is sort of natural for humans to want to continue to collect stuff, right? And, you know, to continue to accumulate, whether it be knowledge, wealth, toys, you know, whatever; there’s just this sort of inclination to always want more. And I don’t know how much of that is a byproduct of society and how much of it is just, you know, human nature. So I think it can quickly delve into the realm of disorder, just like anything, if you sort of take it too far. And I think that’s how we have historically characterized this idea of information hoarding: that someone is doing it deliberately and specifically to amass some level of, you know, power, influence, whatever. But I think that’s more frequently than not a rarity.

Robert Blumen 00:12:40 I have observed it can be really difficult to get engineers to write documentation; possibly it’s many people’s least favorite task. Why is it hard to get people to document things?

Jeffrey D. Smith 00:12:53 It’s hard because, like, you know, writing is hard and it may not be someone’s preferred method of communication. So I think that plays a big part in it, right? Because to be able to sit down and write something in a way that is coherent and makes sense, one, it takes a lot of energy. That’s something me and my wife joke about a lot. Right? If you want to write a well-crafted email to a series of engineers or executives or whatever, it’s not something you just sit down and bang out in five or 10 minutes, right? Like you really need to put thought into it. You need to make sure you’re telling a story that is, you know, engaging someone so that they’re not looking at the subject line and then just, you know, immediately sort of moving on and filing it away to read later in the depths of email.

Jeffrey D. Smith 00:13:39 So I think writing is a lot harder than we actually give it credit for. And then writing good documentation is harder than we give it credit for. And we don’t really teach people how to write good documentation. Right? It’s sort of assumed that, you know, oh, well, you went to an English class, therefore you know how to write good technical documentation. No, it’s an entire discipline, right? Like we have people whose entire job is to write technical documentation, yet for some reason we’re going to give this task to an engineer and just assume that they’re going to know how to do it. So like anything, if you’re not particularly good at it, you don’t like to do it. But then it’s also, how is it valued in the organization? Is it something that is actually prioritized and given agency? If you are giving me, you know, 10 assignments, and then on top of that 10th assignment, you say also write documentation.

Jeffrey D. Smith 00:14:26 Well, guess what, in my list of mental priorities, documentation is always going to be at the bottom. And if you keep shoving things on top of that, then you know, you are sort of silently communicating where documentation exists in the value chain in the organization. So I think leaders in the organization need to make sure that they’re prioritizing documentation the same way they prioritize other work, right? If you’re doing a sprint type of workflow, you know, there needs to be a ticket that says, hey, write some documentation around process X. And that work needs to be accounted for. Too often we leave it to the fringes of the workweek to try to get that documentation done.

Robert Blumen 00:15:05 The concept of service ownership is gaining a lot of traction, where the people who built the app own it. You may be on a rotation for on-call, and you may be up at an hour where other team members are not awake. You need to fix possibly a range of different things that might occur on your service, including things you didn’t build. How important is documentation for the on-call and response process?

Jeffrey D. Smith 00:15:33 It’s important, but even more important is that it’s accurate. In the middle of the night, there is nothing more dangerous in my world than incorrect, old, outdated documentation, because you can make a ton of really bad decisions based off of that. So I feel it’s extremely important to have something to go on or make a decision off of. But I think we also need to be cognizant of what that means in terms of our responsibility to keep that documentation up to date. So if you’re in a scenario where you don’t know if this documentation has been updated frequently, you’re probably going to end up paging out to that other person’s team anyways. Right? So there is a more holistic decision around what documentation looks like in those worlds and how it impacts on-call. There may be documentation that specifically says, if there’s a problem with this system, you just need to page out, because it changes too frequently.

Jeffrey D. Smith 00:16:31 Right? And that’s another common complaint I hear about documentation. You know, things change so frequently that we just can’t keep up with the documentation, and then suddenly, you know, we’ve got this really outdated piece. So maybe the documentation is just, you know, this is the on-call schedule to page out to if there’s an issue here. You know, it’s a delicate balancing act, especially with these service-oriented teams, where they’ve sort of got carte blanche to iterate on a thing as frequently as possible, because they don’t necessarily have the ripple effect or the impact of, you know, contributing to a huge monolith where there’s 10 or 12 cooks in the kitchen.

Robert Blumen 00:17:06 We’ve talked about how people may not want to write documentation, or feel like they don’t have time, or may not be good at it. In the agile methodology, you have this concept of definition of done, which is: when I say my task is done, what exactly am I saying that I did? And you could add documentation as one of the components of definition of done. Is that going to be an effective strategy for really getting the documentation done?

Jeffrey D. Smith 00:17:36 So again, I think that boils down to the organization, right? The fixes for documentation can be relatively easy as long as your organization sticks to those principles and guidelines, right? So if we say that this is going to be the definition of done, we have to meet it, right? We have to actually say, yes, this is the definition of done; you don’t have documentation, so this isn’t done, and we’re not going to move on and we’re not going to add work to your queue until that piece is done. But it really takes an organizational value system that says, you know, this is important and we’re not going to move on until this piece is done. I think, especially in the agile methodology, we give weight to words, but don’t often put the systematic components behind that to make sure that these things stick. So it’s one thing to say we value automated tests. It’s another thing to say you cannot merge a change without 98% test coverage, right? So that’s the system sort of enforcing the values that are being defined by the organization. So, you know, listing documentation in the definition of done as part of the ticket works, as long as you have a systematic way to enforce that value.
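
To make that idea of the system enforcing the values concrete, here is a minimal sketch of a merge gate a CI job could run. The 98% figure comes from the example above, but the coverage.xml report format, the docs/ convention, and the branch name are illustrative assumptions, not details from the episode.

```python
# Hypothetical "definition of done" gate: fail the build when the stated
# values are not met, so the system enforces them rather than a person.
import subprocess
import sys
import xml.etree.ElementTree as ET

COVERAGE_THRESHOLD = 0.98  # "you cannot merge a change without 98% test coverage"

def coverage_rate(report_path: str = "coverage.xml") -> float:
    """Read the overall line-rate from a Cobertura-style coverage report."""
    root = ET.parse(report_path).getroot()
    return float(root.attrib["line-rate"])

def docs_touched(base_branch: str = "origin/main") -> bool:
    """Treat the definition of done as including a change under docs/."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return any(path.startswith("docs/") for path in changed)

if __name__ == "__main__":
    failures = []
    if coverage_rate() < COVERAGE_THRESHOLD:
        failures.append(f"test coverage is below {COVERAGE_THRESHOLD:.0%}")
    if not docs_touched():
        failures.append("no documentation change found under docs/")
    if failures:
        print("Definition of done not met: " + "; ".join(failures))
        sys.exit(1)
```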

Robert Blumen 00:18:45 We started out talking about the problem of information hoarding, and we’ve been talking quite a lot about documentation, which is certainly not the only way to share information. What are other ways that information can be recorded and shared among a team?

Jeffrey D. Smith 00:19:02 I think presentations are becoming extremely popular in organizations now. And some people prefer that, right? And the thing I like about presentations is there’s so many different ways that you can do it. You can do something as casual as a lunch and learn. I know at Centro, as an organization, you know, we would do these lightning talks during lunch break; someone would just come in and spend five minutes just sort of talking about a thing. And you know, you may not get a complete understanding in a five-minute conversation, but it sparks the juices to initiate longer conversations, right? People begin to pull information, right? Because that’s another thing: how do you incite people to pull information as opposed to constantly having to be pushed? So, you know, that little five-minute talk might stir up a bunch of conversations amongst engineers to say, hey, I want to learn a little bit more about this.

Jeffrey D. Smith 00:19:48 And then the five-minute lightning talk turns into a 30-minute lunch and learn with a bunch of people gathered around laptops, sort of hacking away at things. Another nice thing about presentations, especially in the age of Zoom, for people that aren’t super comfortable presenting in front of a crowd: you can prerecord it, right? You sit at home and you go through a few iterations, you nail down how you want the presentation to go, and then you throw it up on your Confluence or your wiki site. And you say, hey guys, you know, I just did a presentation on this and would love for you to take a look at it and give feedback. That’s another way to sort of disseminate information. So, you know, I always say try to get creative with how you go about it. I think another great thing is training someone that hasn’t done or hasn’t worked with a particular system, because not only does it enhance your knowledge of it (whenever you have to teach it, it strengthens your understanding), but it is a great hands-on method for folks that, you know, may not absorb information well through video or documentation; some people really need that sort of hands-on experience.

Jeffrey D. Smith 00:20:51 So I would say, do not be afraid of different ways to communicate documentation, or knowledge I should say, primarily because different people respond differently to different modes of learning. And I think for so long we have placed a singular value on documentation, written documentation, and haven’t really explored the breadth of different learning options.

Robert Blumen 00:21:15 A skill is something that you can do. And in some cases, if you tell me how to do something, I might say, yeah, I got that, I could do that. But in many cases, I’d need to do it myself. You’re going in a direction of: instead of sending all the build tickets to Brent, because he knows how the build works, we would intentionally send some of the tickets to other people on the team who don’t know how to do that. And then they would have to ask Brent for help, and the skill would get shared that way.

Jeffrey D. Smith 00:21:47 Absolutely. But then the other thing that has to accompany that is empowerment, right? Because what is commonly seen is you might even have people that are interested in learning how to do that, but simply don’t have the access. And then it becomes an exercise in frustration, and people are like, you know, look, either I’m going to learn how to do this and support this, and I’m going to have the ability to do it, or I’m not, and I’m just going to sort of disengage and continue to chuck everything over to Brent. So I think there’s a ton of value in finding those folks that are interested and giving them the ability to do what they need to do. So I’ll give you an example at Centro. At Centro, when I came on, the QA team sort of owned the CI/CD process, and there was one guy that was responsible for it; he was the only one that understood the build system.

Jeffrey D. Smith 00:22:35 So when he left, it fell to my group just because no one else was there to sort of catch it, and we were the only ones that really had access to the system. As people started getting more and more interested in sort of defining and understanding the build process, they kept running into these hurdles where it’s like, well, I can’t log on to the Jenkins workers to know what version of Ruby actually got installed, or what libraries are installed; you know, how do I do this? How do I do that? So we said, how do we just start opening this up and give people more access to Jenkins, give them more access to these nodes? And you know, it’s a slow process because there’s obviously, you know, things that you have to be concerned about. But at the same time, we started to enable a couple of developers to say, hey, you can manage the Jenkins plugins, right?

Jeffrey D. Smith 00:23:21 Because, you know, from an operations perspective, I really don’t care what Jenkins plugins they use, as long as they meet particular, you know, security requirements and things like that. I’m not going to be the one in the way dictating that. So with that access came a sense of ownership. And you know, now they’re sort of leading the charge on how we need to rearchitect CI/CD, but they’re also claiming the ownership of it. Right? So now that they are owners of it, they are much more apt to teach other engineers about it, right? They’re recruiting help so that it’s no longer just one or two people that understand it. So I think it’s an important process to sort of foster that, but with that needs to come the ability to actually implement and effect change.

Robert Blumen 00:24:04 The next one I want to address is culture by decree. To start out with, what does that one mean?

Jeffrey D. Smith 00:24:11 Culture by decree is basically when you sit down and say, this is what our culture is. You write it down, you might even get a nice plaque for it, right, and put it in the kitchen or in the break room. And it’s got some really flowery language about how awesome your organization is. And then when you talk to people in the organization, they don’t recognize it. They say, what company are you talking about? Oh, that’s us? Yeah, no, I wouldn’t have understood that. You know, Enron is the common example, right? When you read Enron’s, you know, mission statement, you’re like, that sounds like the kind of company I want to work for.

Robert Blumen 00:24:43 That doesn’t sound too good. Really, why is that a problem?

Jeffrey D. Smith 00:24:48 Well, it’s a problem because, for starters, the culture that you defined doesn’t exist. Right? So then you run into this sort of broken reality of: I think the organization is one way because I have it on a plaque, but in reality the culture is toxic, and it is the creation, or the sort of underpinnings, of a lot of issues that are reverberating throughout the organization. Right? And I think, especially as you get higher in the organization and you start dealing with senior leadership, if senior leadership thinks they have this great, fantastic culture, they’re not addressing any of the issues or problems that are created by that mistaken reality. And as a result, things sort of sour. So what I mean by that is, if you think you have an organization that is open, that is transparent, that is friendly to unique and diverse voices.

Jeffrey D. Smith 00:25:41 You may be surprised to find out that, you know, people are stifled and you’re not getting the best views, you’re not getting the best opinions, because those people feel intimidated. You know, I think a more recent example of this outside of technology was in Obama’s latest book, A Promised Land, where he talks about how he had inadvertently created a hostile environment for women and didn’t know it. He was unaware of it until people brought it to his attention. So just imagine the impact that that has when you have an entire organization of women that, you know, don’t feel comfortable speaking out about particular things or problems, or, you know, even raising those viewpoints. So culture by decree is a dangerous trap, because you need to understand what your culture actually is in order to fix it.

Robert Blumen 00:26:27 Is the problem that your culture doesn’t match the mission statement on the plaque, or is the problem that you have a dysfunctional culture, regardless of whether it matches what you have written down?

Jeffrey D. Smith 00:26:41 So it depends how you look at it. I think it matters that your culture and your actual mission statement match, if only because that’s what you’re selling to potential employees, right? And there’s never a scenario where, if you have a great culture, you don’t want to advertise that, right? No one is like, eh, we’ve got a great culture, but we try to keep it on the low; we don’t want anybody to really know about that. So it’s important that they match. It’s especially important, though, when it’s dysfunctional, for all the reasons that we have sort of already talked about, right? So I would say you want your culture and your mission statement or whatever to match, so that as you’re out there recruiting and talking with people, you can lean on that. But more importantly, I think it’s important that you have concrete ways to describe and talk about how that culture is sort of reinforced. Because in technology, we have a huge fight over talent, right?

Jeffrey D. Smith 00:27:38 So we’re either going to have an arms race in salaries, or we’re going to have to start talking up the intangibles of an organization in every interview. And you’re probably the same way, Robert: in any interview I’ve ever been on, the question comes up, what’s the culture like? And no one ever says, oh, the culture is kind of a dumpster fire. You know, no one ever says that. Everyone talks about, you know, oh yeah, it’s pretty solid, you know, this, that. But like, how does your organization ensure that that culture is happening? How is that cultural reinforcement happening? What are the sort of things built into the system that allow that to happen? So, like, for example, at Centro, you know, diversity, equity, and inclusion is one of our top goals for 2021, right? Like most organizations. But how are we demonstrating that that’s actually important? We can say, oh yeah, we’re doing DEI.

Jeffrey D. Smith 00:28:30 That’s great. But to be able to talk about the community groups that we’ve put together, to talk about, you know, how we’re sort of funding those community groups and their initiatives, to talk about how we’re putting goals and metrics around hiring, right? Those are concrete, systematic things that we’re doing that sort of reinforce our cultural values and what it is that we’re trying to accomplish. So, you know, culture by decree is sort of useless. We have to understand what an organization is doing to make sure that those cultural values are operating in the organization.

Robert Blumen 00:29:03 Part of the book is about the adoption of DevOps. Would you say DevOps is a culture?

Jeffrey D. Smith 00:29:09 It has to be a culture, or at least in part a culture, right? And it’s really a culture of collaboration. And you know, we’re going to get to a point where we’re adding all types of letters to DevOps, right? We’ve got DevSecOps, DevSecFinOps, DevSecFinMarketingOps. Right? But in reality, what we’re really talking about is collaboration and cooperation, right? And that is a cultural thing. It’s very easy for teams to sort of go in their silos and come up with their own plans and missions without coordinating with other groups, and then coming out and saying, here we are, this is what we’re doing. Right? But it is definitely a culture to default to thinking, who else do we need to involve? And how do we get their opinion? How do we get their viewpoint? So building that culture, you know, obviously it starts from the top, right?

Jeffrey D. Smith 00:29:56 It starts with leaders, I should say; I shouldn’t say it starts from the top. It helps when you have leaders in place, but you can have individual contributors that have quite a bit of sway or respect in the organization start to foster that behavior by saying, you know, oh, what does dev think about this, right? What does ops think about this? How do we get their input? Especially in areas that, you know, may not be the normal purview of that team. Right? We at Centro do it with interviews, right? Like I just did an interview yesterday, but we brought in a development engineer to interview an ops person. Why? Because we want their perspective. We want to know how they feel they’ll be working with this person. Is it going to be easy to work with this person? Does this person have the skills that a developer would expect an operations engineer to have?

Jeffrey D. Smith 00:30:41 And those are perspectives that, you know, I think I could have, but you know, why risk it when I can actually just bring in a software engineer to do that? So I think that’s all part of the sort of DevOps culture movement. And it’s unfortunate that it gets tagged with something specific like DevOps, because it’s really just good organizational practice. It just so happens that it sort of had this genesis with these two historically warring factions, dev and ops. But the adoption of DevSecOps, DevSecFinOps, all this stuff, right, just sort of further proves the point that this is a universal problem. Going back to the original question we had, right, where it’s like, you know, it’s the same problem; we’re just recognizing patterns and applying the fix.

Robert Blumen 00:31:23 If an organization has many of these anti-patterns and hasn’t adopted a DevOps culture, does that mean you really need to improve your culture in order for the organization to succeed?

Jeffrey D. Smith 00:31:36 I firmly believe so. Right? Because if not, it becomes action without conviction. Right. And if it’s not built into the culture, like, I don’t know, process without purpose feels empty, right? It’s like, you know, when we talk about pull requests, for example, right? The pull request process survives and is valued because people understand what it is that we’re trying to accomplish. Right? It’s not big brother. Right? It’s: we want to make sure that things are meeting standards, that the changes are being communicated, so multiple people have eyes on it. We recognize that a single developer can make a mistake that another developer might catch. We believe in the process. So pull requests typically aren’t sort of like these hostile acts. Now, when you get into an organization where pull requests do meet resistance and animosity, right, it’s usually because they don’t understand what it is that the pull request is actually trying to accomplish, or they feel that it is not accomplishing what they’ve set out for it to accomplish.

Jeffrey D. Smith 00:32:43 But when you go into an organization and you say, you know, pull requests are going to allow us to better communicate with each other; it’s going to allow us to move faster in some regards, right, as we can begin to sort of automate some things, because we understand how the process is going to flow. So we know that, oh, you know, this system can automatically check these things, these things, and these things, and that might help us move a little bit faster. So all that is to say, I think process and tools are great, but we still have to understand why it is we’re doing what we’re doing and understand that these are just enablers of that mission. Right? But if Kubernetes one day was to implode, we still want the mission of developers having a bit more control and consistency in their environments. So just because Kubernetes has gone away doesn’t mean we’ve abandoned that goal. We just find the next technology that’s going to do it.

Robert Blumen 00:33:35 A big part of what you’re saying there is: if you tell people, here’s a new process, everybody go do it, people are not really going to do it unless they understand why. You need to sell them on that.

Jeffrey D. Smith 00:33:48 Absolutely. I had a CTO very early in my career, and he told me something that stuck with me forever. He was like, Jeff, you know, when we’re making big changes, the order is people, process, tools. You have to win people over; you have to implement in that order. You’ve got to get the people on board with what it is you’re trying to do. You’ve got to define the process that you’re going to follow. Then you find the tools that are going to implement that process. And so often we sort of start at the end of that, right? We pick a tool and then we define, oh, okay, now that we’ve got Kubernetes, this is the process that we’re going to follow. And now that we’ve got the process defined, we’re going to go tell the engineers, hey, you’re going to be on call for the rest of your life now.

Jeffrey D. Smith 00:34:28 Right. And that doesn’t work because, you know, from an engineering perspective, change is happening to you, right? Not with you; it’s happening to you. So, you know, that order of things is very important. And then once you have the people on board and you’ve got the process defined, picking a tool becomes a lot easier, right? Because you know what it is you’re trying to accomplish. So when it doesn’t have this whizzbang feature, you’re like, well, it doesn’t really matter, because that’s not the process that we’re going to follow, so we don’t care. So it really empowers you in a way to make better decisions about the technology choices you have.

Robert Blumen 00:35:03 Jeff, let’s move on and address another topic. One that stood out to me was alert fatigue. Perhaps this time we could start with a story or an example illustrating that.

Jeffrey D. Smith 00:35:15 Oh man, alert fatigue. Okay. So I’ll give you an example from just this morning, actually. We’re moving a bunch of our synthetic checks over to Datadog for a number of environments that we’ve got. Right. And one of our staging environments, basically we sleep it at night and wake it in the morning, basically to save money in AWS so that we’re not spending compute time on a system that no one’s using. So the synthetic check is supposed to stop, shut down, prior to us shutting down the environment and then start back up after we start the environment back up. As we’ve been tinkering with that, we’ve been sort of ignoring the fact that the timing is a little off. So the synthetic alerts fire, even though everything is fine, right? It’s just a matter of timing. Well, you know, a day of work becomes two days of work becomes five days of work.

Jeffrey D. Smith 00:36:10 Then next thing you know, these synthetic alarms are firing, and it’s a real problem, but guess what? We’ve ignored it because it’s like, oh yeah, it’s just that thing; those synthetic alarms are just going to fire. But no, lo and behold, you know, staging 02 was actually broken. It’s actually down. And, you know, we wasted a ton of time because of the alert fatigue from the synthetic checks; we had ignored it. So it’s important to sort of stay diligent about the alerts that go off, primarily because it can be so easy to ignore and forget. And then suddenly not only is it causing a lot of anxiety for your on-call engineers, but it’s also eliminating the usefulness of the alert, because the purpose of it is to inform someone that there’s a problem so that they can take action. If they’re not going to take action right away, then you know, the alert has sort of failed its purpose, because it’s still depending on, you know, some QA engineer going in and saying, hey, staging 02 is not available. And it’s like, oh, oops. Yeah, we got an alert on that and didn’t do anything about it.
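
The durable fix for that particular failure mode is to make the monitor's schedule match the environment's sleep schedule instead of training people to ignore the alarm. Here is a rough sketch of that idea; the times, the slack window, and the function names are made up for illustration and are not how the Centro setup was actually described.

```python
# Illustrative mute window for a synthetic check on an environment that is
# put to sleep at night: suppress failures while the environment is expected
# to be down, with slack on both ends to absorb scheduling drift.
from datetime import datetime, time, timedelta

SLEEP_START = time(20, 0)      # staging shut down in the evening to save compute
WAKE_TIME = time(6, 0)         # staging brought back up in the morning
SLACK = timedelta(minutes=15)  # cover start/stop drift instead of ignoring alerts

def synthetic_should_alert(now: datetime) -> bool:
    """Return True only when a failed synthetic check deserves attention."""
    mute_start = (datetime.combine(now.date(), SLEEP_START) - SLACK).time()
    mute_end = (datetime.combine(now.date(), WAKE_TIME) + SLACK).time()
    # The mute window crosses midnight, so the check is an "or", not an "and".
    environment_asleep = now.time() >= mute_start or now.time() <= mute_end
    return not environment_asleep
```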

Robert Blumen 00:37:12 You could have over-alerting or under-alerting, and there are costs either way. How do you strike a balance?

Jeffrey D. Smith 00:37:20 So your mileage may vary with my answer; I’m going to put that out there now. Right. But I would say I would rather be five minutes late to an alert that’s real than respond five times to an alert that’s bogus. Right? So as a result, a lot of the alerting that we have is set up so that we know, with a level of certainty, that this is an actual problem. Because now, when I wake you up, you say, oh, there’s something actually going on, versus getting the alert, looking at it, and then going back to sleep. And I think we’ve all had that scenario where you’ve got that alert that has a funky pattern, so when you get it, you don’t respond right away. You wait for the second alert, and then you go, oh, okay, this is real. Now you’re reacting.

Jeffrey D. Smith 00:38:11 Right. I don’t know that there’s value in that, just from an engineer happiness perspective, because it adds this constant level of stress where you’re like, you know, oh honey, I just got paged, so we can’t leave the house because I might get paged again, but it may be nothing. So for me, I try to say, hey, when an alert fires, let’s be sure that it’s a real thing. And in conjunction with that, when it fires, how do we make sure that we’re giving the engineer that gets paged some context as to what they should be doing or why this alert matters? Right. So nothing is worse than a CPU high-utilization alert. Right? CPU is high? Sounds like we’re getting our money’s worth. Right? Sounds like it’s doing what we pay it to do.

Jeffrey D. Smith 00:39:00 So why is this a problem, versus, you know, “login page loads are extremely slow, correlated with high CPU utilization”? It’s like, oh, okay. Now there’s an actual user impact that I understand, and I know that this is related to high CPU utilization. I think one of the things that has been sort of an eye-opener is when people realize, or at least older people realize, that we’re not bound to character text message sizes anymore. Right? So I remember when I first started, when you had to write an alert that would page, you had to be really crafty with your message, because you only had like 60 characters or whatever. You don’t have that anymore. Right? It’s going to be an email, it’s going to be a text message. So, you know, give some description, give some color, help them understand; you know, maybe even describe what it’s caused in the past, right? Like, replication lag is high: this could have an impact on, you know, the Basis data mart downstream process; check this, this, and this to verify if it’s an actual, you know, serious issue. And, you know, an engineer gets that and they go, oh, okay, I know what to do.
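
As a small illustration of that philosophy, the sketch below pages on the user-facing symptom and attaches the correlated cause plus some history the on-call engineer can act on. The thresholds, metric names, and the past-incident note are invented for the example and do not come from any real monitoring setup.

```python
# Hypothetical paging decision: alert on user impact, enrich with the
# suspected cause and context, rather than paging on "CPU is high" alone.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signals:
    login_p95_ms: float     # user-facing symptom: login page latency
    cpu_utilization: float  # possible cause, as a fraction between 0 and 1

def build_page(signals: Signals) -> Optional[str]:
    """Return a page message only when there is real user impact."""
    if signals.login_p95_ms < 2000:
        return None  # high CPU by itself "sounds like we're getting our money's worth"
    lines = [
        f"Login page loads are extremely slow (p95 = {signals.login_p95_ms:.0f} ms)."
    ]
    if signals.cpu_utilization > 0.85:
        lines.append(
            f"Correlated with high CPU utilization ({signals.cpu_utilization:.0%})."
        )
    # No character limit any more, so give the responder some color and history.
    lines.append(
        "In past incidents this was caused by a slow auth query; "
        "check replication lag and recent deploys before restarting anything."
    )
    return "\n".join(lines)
```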

Robert Blumen 00:40:00 If you’re trying to bias a little bit toward actionable alerts, there is this cost where the system was down for five minutes and customers could not perform certain tasks. What I think you’re calling attention to is that if you create a lot of fatigue among your people whose job is to keep the system up and running, and that impairs their ability to do their work, or they are spending a lot of time, which is also valuable to the business, doing nothing, that is another cost. Is that a fair statement?

Jeffrey D. Smith 00:40:35 That is extremely accurate. And you know, I feel that a lot of companies sort of discount these soft costs, right? I think you hit the nail on the head when you say there is a cost with people reacting to a problem that doesn’t actually exist. You know, an example you see a lot in the security space, right, where there is some generic security alarm that fires saying, hey, we’ve noticed something weird on host X, right? So you spend an hour, hour and a half, just to find out, nope, this is normal operations, and we’re going to have to create an exception for it. Right. But that was still an hour and a half of an engineer’s time that’s not working towards feature development, that’s not working towards, you know, stabilizing the system. So there are definite costs associated with that. And I would say in most cases, though it depends on your organization, the number of times you have a false positive page is probably higher than the number of times that that sensitivity is actually saving you from certain disaster.

Robert Blumen 00:41:38 You said something I hadn’t thought of before, and I think it’s true. In the book, you said it is nearly impossible to convince people to remove a defined alert. Do you think that is one of those cognitive biases, where people are a little bit irrational about something?

Jeffrey D. Smith 00:41:56 I totally think it is. And I think it’s one of those things where it’s difficult to think that there could have been a thing that might have prevented an issue that you’re getting rid of. And a lot of times an alert has history, right? Like you might define an alert when you encounter a problem, and at the time the problem happens, you have a very rudimentary understanding of it. So you come up with this generic alert of, like, you know, if memory climbs by 20% in a two-minute period, then send an alert, right? And that was really based on your current understanding of the problem. As time goes on, you get a better understanding of the issue, but this alert is sort of canonized with this mission statement around the previous alert. And it’s like, well, you know, this is the thing that helps us prevent issue XYZ.

Jeffrey D. Smith 00:42:48 When in reality it’s like, well, you know, it might detect that situation, but it also detects 20 other situations that aren’t really helpful or useful to us. So how do we remove that? And people have a lot of emotion around that. A lot of fear, honestly. A lot of times it comes from management, right? Because management never wants the question, you know, well, why didn’t we know that this was a problem? And the reality no one wants to hear, the nightmare, is that there are so many different ways that we can fail that we’re probably not going to catch them all, right? So we have to have this balance between, you know, understanding our systems and alerting properly, and burning our staff out because they’re constantly reacting to alarms that aren’t helpful and or useful. That’s another question that comes up, at least in the ops space, in every interview I’ve ever done: what’s the on-call like? Is it a lot? Is it a little? How often are you paged? And you know, when people hear our philosophy, it resonates with them, because everyone has been on the other side of the coin.

Robert Blumen 00:43:52 Tell me if this is where you may be going with this. What you’re saying is, you might like to have every possible thing that can go wrong cataloged and have a runbook that you link to in the alert, but that’s not really possible. You need to draw a line between routine failure modes and, at some point, somehow you have people with skills and you’re going to punt it to the on-call person and say, you’ve got to use your skills and figure out what’s going on, because we can’t anticipate everything.

Jeffrey D. Smith 00:44:23 Exactly. You’re not going to anticipate everything. And the minute you did anticipate everything, the next release adds to it or creates a new failure mode, right? So if you’re in a high-release environment, you know, it’s just a matter of time before a new failure mode gets introduced that you weren’t prepared for. So you have to have a process in place for, you know, identifying sort of routine failures. You have to have a process in place for identifying things that are worth investigation. Right? So here’s another thing that we talk about in the book: not every alert has to wake someone up, right? So if you have a situation where you’re like, you know, I think this particular combination of circumstances is curious, right? Turn it into an email alert, and then someone can look at it and investigate it and decide, oh yes, this is something that should be elevated to a page, or no, this is something that should continue to be a low-level alert. I think making that distinction is extremely important as well, because it gives you an opportunity to sort of get a sense of what the alert volume will look like. Right? And the minute you create your email rule to filter out that alert and put it in a folder, that means it’s not useful and, you know, it’s time to get rid of it.
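
Here is a tiny sketch of that routing decision, separating alerts that page someone from alerts that only send an email for later investigation. The channel names and the confidence heuristic are placeholders for whatever your alerting tooling actually supports.

```python
# Illustrative "not every alert has to wake someone up" routing.
from enum import Enum

class Channel(Enum):
    PAGE = "page"    # wakes the on-call engineer right now
    EMAIL = "email"  # reviewed and investigated during working hours
    NONE = "none"    # if you would filter it to a folder, just remove the alert

def route_alert(user_impact: bool, confidence: float) -> Channel:
    """Decide how loudly to raise an alert.

    confidence is a rough 0-1 estimate of how often this condition has turned
    out to be a real problem; false positives erode trust in the page.
    """
    if user_impact and confidence >= 0.9:
        return Channel.PAGE
    if confidence >= 0.5:
        return Channel.EMAIL  # curious, worth a look, not worth waking anyone
    return Channel.NONE
```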

Robert Blumen 00:45:36 We’ve talked in different ways about where you set the bar: what’s actionable, what’s urgent. When you’re launching a new system, you don’t have the experience of what is normal or abnormal. How do you go about setting up a decent set of initial alerts?

Jeffrey D. Smith 00:45:51 So a technique that I’ve used I actually stole from the manufacturing industry and my wife, who’s an industrial engineer. In manufacturing, they use this thing called failure modes and effects analysis. And the goal of that is essentially to map out this process that you’re doing; in manufacturing, it’s the actual physical construction of things, but for us in software, it’s, you know, various handoffs, interactions with APIs, databases, things like that. So you map this process out, and it could be pretty exhaustive, but you then say, all right, what could go wrong in this? Where are the things that could break? And you start to highlight those out. Once you’ve highlighted those out, you rank them by three different categories, right? The likelihood that it’s going to occur; the severity or the impact, meaning if it occurs, how bad is it; and detection. Occurrence, severity, and detection.

Jeffrey D. Smith 00:46:46 Detection is: what is the likelihood that if it happened, we would detect it, or we could detect it, prior to a customer noticing? You take those three numbers and you multiply them together, and that gives you your risk priority number. And from there, you can sort of work the risk priority number to develop different techniques for lowering any one of those factors. So you might say, okay, this thing has a pretty high occurrence rating, and the severity is pretty bad; what can we do to mitigate the severity number? Or what can we do to better detect that this incident or situation has occurred? So through that, you end up creating a series of work items, either for development to, you know, come up with something more robust, or it may be monitoring, to be able to lower the detection number to a point where the risk is lower, right?

Jeffrey D. Smith 00:47:36 Because, all right, we’ve detected this before a customer has done anything, and we can do that. So we’ve used that to drive a lot of monitoring, right? Because a lot of times there are handoffs between systems or data transfers, and you don’t think about it when you’re designing it or building it. But now that you recognize that this is an important interaction between these systems, you say, okay, we need to track this somehow. Right? So if I send a message, I should receive an acknowledgement, or I should detect the output of that message that I sent, so that I know it was actually completed. So we’ve used that in a number of organizations, and it really helps drive the creative juices for people to think, like, oh yeah, we want to make sure that this step actually completed, because it’s huge if it doesn’t. Again, going back to what we said originally, you’re not going to be able to catalog everything. But now that you have a risk priority number, you at least understand what are the top five things you should address. And then you can constantly keep going back to that backlog, right? Let’s go back to the backlog: what else can we, you know, solve or lower the detection number on through monitoring and things of that nature? You also end up getting a really robust understanding of the system, because every time I’ve done this, people that think they’re experts end up opening their laptops and looking through code to understand exactly what’s happening.
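
To make the scoring mechanics concrete, here is a minimal sketch of the risk priority number calculation Smith describes, using the common 1-to-10 scale for each factor. The failure modes and scores below are made up for illustration.

```python
# FMEA-style scoring: RPN = occurrence x severity x detection, then work the
# backlog from the highest risk priority number down.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    occurrence: int  # 1 = rare, 10 = happens constantly
    severity: int    # 1 = negligible impact, 10 = catastrophic
    detection: int   # 1 = caught immediately, 10 = the customer tells us first

    @property
    def rpn(self) -> int:
        return self.occurrence * self.severity * self.detection

modes = [
    FailureMode("Handoff to downstream system drops a message silently", 4, 8, 9),
    FailureMode("Nightly data transfer partially completes", 6, 6, 7),
    FailureMode("Login API returns 500 under load", 3, 9, 2),
]

# Highest-risk items first; these drive the next monitoring or hardening work.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:4d}  {mode.description}")
```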

Robert Blumen 00:48:49 Jeff, we have a bit of time left. I want to delve into one more of these anti-patterns. It’s called the empty toolbox, and I’m going to describe that as a tendency for organizations to under-invest in tooling. Is that a fair description? Yep, very fair. What drives that?

Jeffrey D. Smith 00:49:10 Sometimes it’s driven by a clear lack of ownership in terms of who should be managing and creating that tooling. Right? So every company I’ve been at has always talked about, we should create an internal tools team so that we have all of these things that we need. I think some of it has been this over-reliance on open source, for lack of a better term, and I’m going to get some smoke for this, I’m sure. But, you know, we tend to say, well, if there’s no free open source solution, then it’s not worth paying for. Some things are worth paying for, right? Even open source. So, you know, you get into this weird thing where it’s like, you know, we’re not going to invest the time and energy to build these tools, and because there’s nothing free on the marketplace, we’re not going to buy anything. So we live in this sort of quagmire that really just sort of expresses itself predominantly through, you know, wasted engineer time, wasted developer time.

Jeffrey D. Smith 00:50:02 But if organizations aren’t sophisticated in measuring and monitoring these soft costs, you know, the pain points sort of become invisible. So I’m trying to think of a recent example that I can give you. Okay, something we just did last week. So we have a process that dynamically creates an IAM user for an S3 bucket for customers, right? In testing, what would happen is there would be some weird confluence of scenarios that would result in an environment creating, but not destroying, that record, that user. So then when they needed to retest the process with the same environment, it would suddenly blow up. That would then generate a ticket to my team, because my team was the only team that had the access to go through and delete that user. So now the QA engineer is stuck. She has to create a ticket. She’s got to wait for my team to actually get the ticket, acknowledge it, do the work.

Jeffrey D. Smith 00:51:00 The work is probably 90 seconds, maybe 60, right? But meanwhile, they’ve been blocked for, you know, five or six hours, because queue time is clearly the largest portion of the process. It’s not the actual work; it’s waiting around for someone to actually get to the work. So because we have such a focus on, you know, internal tooling and internal empowerment, we said, let’s just write a script so people can do this themselves. Right? Let’s offload this and, you know, allow users to do that. Oh, but what if they delete a user in production? Okay, we’ll restrict it specifically to the pre-production account. Right? So if they erroneously delete a user in pre-prod, nobody really cares, right? So here’s your command. You know, now this six-hour window of dead time is literally five minutes. They don’t even need to create a ticket. You know, they just do what they need to do.
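
Here is a minimal sketch of what that kind of self-service cleanup could look like: the script refuses to run anywhere except the pre-production account, then removes the leftover test user. The account ID, the guardrail, and the assumption that the user only has access keys attached are illustrative; the real script was not shown in the episode.

```python
# Hypothetical self-service cleanup of a dynamically created IAM test user,
# guarded so it can only ever run against the pre-production account.
import sys
import boto3

PREPROD_ACCOUNT_ID = "111111111111"  # placeholder for the real pre-prod account ID

def delete_test_user(user_name: str) -> None:
    account = boto3.client("sts").get_caller_identity()["Account"]
    if account != PREPROD_ACCOUNT_ID:
        sys.exit(f"Refusing to run: account {account} is not pre-production.")

    iam = boto3.client("iam")
    # An IAM user cannot be deleted while access keys are still attached.
    # (A fuller script would also detach policies and remove group memberships.)
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])
    iam.delete_user(UserName=user_name)
    print(f"Deleted test user {user_name}; no ticket or ops queue needed.")

if __name__ == "__main__":
    delete_test_user(sys.argv[1])
```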

Jeffrey D. Smith 00:51:52 And then the nice thing is, with that automation, we can then hook that into other processes in the future, right? So maybe one day we do want a JIRA ticket. We don’t need to have the QA engineer create it; we can just have the automation create it right before it actually does its task. So that’s the sort of toolbox that needs to get filled up. Another great example is, we were noticing that we would get these requests to run ad hoc scripts. And because we were the only ones with SSH access in production, it would come in through a ticket, you know, someone would have to approve it on the developer side, and then an engineer from my team would have to log in and run this script. And we said, this is dumb. We’re adding zero value to this. Why don’t we invest the time and turn this into a JIRA workflow?

Jeffrey D. Smith 00:52:30 So now people can create a JIRA ticket and attach the script to it. They can create a PR. Someone has to approve the PR, someone has to approve the JIRA ticket, and then the automation downloads the script and executes it for them. Now they’re able to move through these things so much faster than they were before, where it might take a day, a day and a half, two days for my team to actually free up to run it. Now they can do it all within 15 minutes, and it’s all within their team. And everyone that’s part of the process is adding value, right? The engineers that are reviewing it are adding value. The person that submitted the ticket is adding value. You don’t have ops just sitting around typing commands because an engineer said so, when I don’t have the expertise to comment on your script or on the environment that you built. Why is so much energy and focus being put on my approval when I’m not really adding that much value to the process?
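
As an illustration of that workflow, here is a rough sketch of an automation step that only runs a script once its JIRA ticket has been approved. The JIRA instance, the status name, the custom field holding the script location, and the execution details are all assumptions made for the example; this is not the framework described in the episode.

# A rough sketch of an approved-ticket script runner, assuming the JIRA REST API v2.
# Field names, status values, and the script-location convention are hypothetical.
import subprocess
import requests

JIRA_BASE = "https://example.atlassian.net"       # hypothetical JIRA instance
AUTH = ("automation-bot", "api-token-goes-here")  # placeholder credentials

def run_approved_script(ticket_key: str) -> None:
    # 1. Confirm the ticket has actually reached the Approved status
    issue = requests.get(
        f"{JIRA_BASE}/rest/api/2/issue/{ticket_key}", auth=AUTH, timeout=10
    ).json()
    if issue["fields"]["status"]["name"] != "Approved":
        raise RuntimeError(f"{ticket_key} is not approved; refusing to run")

    # 2. Pull the reviewed script from the URL recorded on the ticket
    #    (a hypothetical custom field pointing at the merged PR's raw file)
    script_url = issue["fields"]["customfield_10042"]
    script = requests.get(script_url, timeout=10).text

    # 3. Execute it and record the outcome back on the ticket as a comment
    result = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
    requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{ticket_key}/comment",
        json={"body": f"Automation ran the script, exit code {result.returncode}:\n{result.stdout}"},
        auth=AUTH,
        timeout=10,
    )

if __name__ == "__main__":
    run_approved_script("OPS-1234")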

Robert Blumen 00:53:25 We started out with the premise that organizations may under-invest in tooling. Certainly there is a cost, especially if you buy tools, but creating your own tools is costly as well. I want to focus a bit on the costs of non-automation; you need to understand the cost of non-automation to decide whether you’re under-investing in tooling. You did mention queue time versus the time to do the actual work. Are there other costs of non-automation that should be taken into account in trying to assess whether you are under-investing?

Jeffrey D. Smith 00:54:03 I think the biggest one is the ability to do an action consistently and follow a complete process. We always sort of assign that specifically to the technical task at hand, but it applies to all of the supporting processes around it too, right? So let’s say, for example, we say we always want a JIRA ticket that’s been approved before we run command X. If we do that manually, Ken might message me and say, hey, it’s Saturday, I really need to get this out, can you do this for me, and I’ll follow up with the ticket on Monday. Yeah, sure, Ken, no problem. But then the ticket doesn’t come Monday and we forget about it. Then there’s an audit, and someone says, well, where’s the JIRA ticket associated with this? Because your process says you do this. Oops. And I think that happens a lot in organizations.

Jeffrey D. Smith 00:54:47 Right, because as much as we all recognize the need for tickets and things like that, we also all have a certain aversion to bureaucracy. Now, if you automate that, guess what? There’s no way around it, because the automation is going to do it for you. So you’re always going to have that consistency. That also goes down to actually executing the command. People feel safe; they say, oh, well, before we run this command in production, we always run it in staging first. Okay, and there’s a human typing that in staging, right? Are you sure they typed the exact same thing that they were going to type in production? Well, they copied and pasted it. Are you sure they highlighted the entire line, that they didn’t accidentally miss a line break, and suddenly the dash-dash safety flag is missing?
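
One way to picture “the automation does it for you” is a small wrapper that files the audit ticket itself and then runs a fully spelled-out command, so neither the paper trail nor the safety flag can be forgotten. This is only a sketch under assumptions: the JIRA project, the maintenance command, and its --safety flag are hypothetical.

# A minimal sketch of automation that files its own audit ticket before running a command.
# Assumes the JIRA REST API v2; project key, command, and flags are placeholders.
import subprocess
import requests

JIRA_BASE = "https://example.atlassian.net"   # hypothetical JIRA instance
AUTH = ("automation-bot", "api-token")        # placeholder credentials

def run_with_audit_trail(environment: str) -> None:
    # File the audit ticket first, so the record can never be skipped or forgotten
    issue = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        json={
            "fields": {
                "project": {"key": "OPS"},
                "issuetype": {"name": "Task"},
                "summary": f"Automated maintenance run in {environment}",
            }
        },
        auth=AUTH,
        timeout=10,
    ).json()

    # The command is fully spelled out here, so the --safety flag can't be lost
    # to a bad copy-and-paste the way it can when a human types it by hand
    cmd = ["./maintenance.sh", "--env", environment, "--safety"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"Filed {issue['key']}, command exited {result.returncode}")

if __name__ == "__main__":
    run_with_audit_trail("staging")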

Jeffrey D. Smith 00:55:30 These things happen all the time. So I think repeatability, and the risk of something not being repeated exactly as is, needs to factor in as well. Now, on the other side of that, you also have the upkeep and maintenance of the automation, because as things change, as processes change, any software that you ever write is probably going to have a bug or two in it. So you need to think about that cost as well. One of the things we talk about is how frequently you are actually doing this task. If it’s something that you’re doing once a year, automating it may not be the best solution, only because who knows what the environment looks like a year from now? It could be completely different, and the code makes all these assumptions that were valid a year ago but are now suddenly invalid, and it becomes this huge quagmire because the script started and failed halfway through or something like that. So there are definitely things that you need to weigh when you’re looking at automating. Cost, repeatability, and frequency, I think, are some of the big ones.

Robert Blumen 00:56:33 Sure. That’s a frequency issue. You could turn that around and say there is something that would have value, but no one ever does it because it’s such a big pain. If it were automated, you might do it more often and get some value out of that.

Jeffrey D. Smith 00:56:49 Right. And for that, I would say the automation is less about the frequency at which you do the task today and more about how regularly the automation itself would need to be exercised. I remember working at a shop where we had talked about scripting completely automated database failovers. By the time we actually needed it, probably a year and a half later, no one had confidence in the script. No one felt confident saying, let’s just run this and it’ll fail over. Granted, the same people that wrote it were in the room, but everyone was like, no, let’s do this by hand. So that’s more of the frequency thing I was referring to. I completely agree that once you have something automated, the frequency at which you’ll do things increases, sometimes for the worse.

Jeffrey D. Smith 00:57:35 Because what we’ve seen with our automated script execution framework is that now, instead of fixing some long-term issues, we just have one-off scripts to correct each occurrence. It’s like, well, this is the same problem that keeps happening, and because we’ve made it so easy to run this script that everyone knows about to correct the situation, we opt for that as opposed to asking, how do we prevent the situation from happening, period? In that scenario, the friction of the manual process would actually be a benefit, because it forces you to say, you know what, it’s a pain in the butt every time we’ve got to do this; we just need to solve the underlying problem. The other thing is that the problem isn’t concentrated in one person, right? If you’ve got a group of 40 developers and each developer experiences it once a month, the chances of someone saying this is worth tackling in a bigger way are deeply lessened.

Robert Blumen 00:58:33 Jeff, we’re getting close to the end of our time. Are there any key takeaways we haven’t covered that you want to pass on to our listeners?

Jeffrey D. Smith 00:58:40 I think the key takeaway is really just: focus on your people. Focus on the communication between teams. When people talk about DevOps, it’s very easy to get wrapped up in tools and all the fancy whizzbang stuff, but focus on your people, focus on your processes. I think you’re going to get huge dividends just by enhancing people’s ability to communicate and get their jobs done.

Robert Blumen 00:59:02 Great. And Jeff, where can people find your book?

Jeffrey D. Smith 00:59:05 You can find it on Manning, at manning.com. Just look for Operations Anti-Patterns, DevOps Solutions. My name is Jeff Smith, so that’ll help your search as well. It’s also available on Amazon. Several ebook formats are available on the Manning site. Unfortunately, the Kindle version is not on the Amazon site, so if you want the Kindle version, you’ve got to go to manning.com.

Robert Blumen 00:59:25 Are there any other locations on the internet you’d like to point people to who want to find you or reach you?

Jeffrey D. Smith 00:59:32 You can find me on Twitter; I’m @DarkAndNerdy. And I’m in the process of launching another site, attainabledevops.com.

Robert Blumen 00:59:38 Great. Jeff, thank you so much for speaking with Software Engineering Radio.

Jeffrey D. Smith 00:59:43 Thank you. I’ve been a big fan of the show, and happy to hop on.

Robert Blumen 00:59:45 For Software Engineering Radio, this is Robert Blumen. Thank you for listening.

Outro 00:59:50 Thanks for listening to SE Radio, an educational program brought to you by IEEE Software magazine. For more about the podcast, including other episodes, visit our [email protected]. To provide feedback, you can comment on each episode on the website, or reach us on LinkedIn, Facebook, Twitter, or through our Slack [email protected]. You can also email [email protected]. This and all other episodes of SE Radio are licensed under Creative Commons license 2.5. Thanks for listening.
[End of Audio]


SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
