Episode 427: Sven Schleier and Jeroen Willemsen on Mobile Application Security

Filed in Episodes on September 24, 2020

Sven Schleier and Jeroen Willemsen from the OWASP Mobile Application Security Verification Standard (MASVS) and Mobile Security Testing Guide (MSTG) project discuss mobile application security and how the verification standard and testing guide can be used to improve your app’s security. Host Justin Beyer spoke with Schleier and Willemsen about web views, certificate pinning, anti-reverse-engineering technology, and the OWASP verification standard and testing guide. Specifically, they discussed how you should approach implementing any of the controls enumerated in the verification standard based on your threat model. They also discussed when you should implement certificate pinning and how you should approach implementing web views in your mobile applications. To close out the show, they discussed the hybrid app guidance in the Mobile Security Testing Guide and the need for community contribution to it. They also discussed how the MASVS, MSTG, and Hacking Playground all fit together to increase developer security knowledge as well as raise the security of your mobile applications to appropriate levels.

Related Links

SE Radio theme music: “Broken Reality” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0


Transcript brought to you by IEEE Software

Justin Beyer 00:00:21 Hello, this is Justin Beyer for Software Engineering Radio, and today I’m speaking with Jeroen Willemsen and Sven Schleier. Jeroen is a principal security architect with a passion for mobile security and risk management. He has supported companies as a security coach and architect, a security engineer, and as a full stack developer. Sven is an application security expert with hands-on experience in web and mobile penetration testing, and a principal application security consultant living in Singapore. He is also the creator of the iOS Mobile Hacking Playground. Both Sven and Jeroen are project leads for the Mobile Security Testing Guide and the Mobile Application Security Verification Standard. So just to start off the episode, I want to talk a little bit about mobile application development and then work into the security discussion. Back in Episode 300, we discussed mobile app development with Jonathan Stark, and I’d refer listeners there for a more in-depth discussion, but just to start off: what kinds of mobile applications can be developed?

Justin Beyer 00:01:18 I know that there are native apps and web apps and hybrid apps and progressive web apps, sometimes called PWAs. What are those, and how do they differ from one another from a security perspective?

Sven Schleier 00:01:30 If I can chip in on that one: if you look at mobile apps — and I don’t think we have to go very deep into detail, but it’s nice to have a small trip down memory lane — in the very old days we didn’t do mobile native apps at all. We just had a website that was loaded into a browser on a device, and we tried to tailor that towards a mobile perspective. So you basically had all the oddities of a browser stack that you would normally have when you’re visiting a mobile website. That’s where we started off. Then basically the whole jump into native apps came. With a native app you don’t have the browser stack anymore; you’re interacting directly with the operating system. And then the hybrid app came, where we said, okay, let’s try to use, for instance, the web view and render as much as we can over there — so we still have the same mobile-website feeling — but on the other end, let’s try to use those native things as well: for instance, fingerprint scanning at a later point in time, but at first, of course, stuff like GPS to know where you are — and combine them. For a very long time we had the promise that the performance would be better than a mobile website and it would be easier than a native app. But what we ended up with for a lot of years was basically an application that had all the vulnerabilities of the web view and all the vulnerabilities of the implementations of those plugins. So security-wise we were still in trouble, but otherwise you had to turn to the complicated native app. And of course you have progressive web apps, where you have service workers downloading the content a priori and making sure it would still work. And then you go back to: how well is your browser stack defined? How well did you secure it over there?
And you start over that party again. So overall, if we have to rate that, I think — still depending on how well you wield your tools — a native app can be the most secure. But that does mean you have to stick by the book, and you end up with two different code bases, or a framework that is tailored towards native compilation. So you start with something like Xamarin, for instance.

Sven Schleier 00:03:28 And all they do is compile towards the native app SDKs; you don’t do anything else. And in the end they need to do some tailoring from time to time, depending on the version and how well it was developed over time, but at least you get all the security features you need. But then again, when it comes to adding anti-reverse-engineering and other tooling, you again have to jump into the native code base. So overall, native is indeed easier to secure, I would say, but it comes for a price: you as a developer have to work pretty much harder to get all the extra out of it. And the question really is: do you need that? I mean, overall, if you have a marketing app, you want to get it out yesterday; but if you want to secure your banking app, that’s a bit of a different game altogether.

Justin Beyer 00:04:00 Yeah, and that’s definitely something we’ll dive into a little bit more when we discuss threat modeling and picking where you actually need to be on the security spectrum of things. But that’s definitely an interesting way to put it. So essentially, when you look at something like Xamarin, you might get some of those higher-level security benefits with some of the ease of use of having one single code base that I can compile across the whole landscape of devices that Microsoft wants to support. Whereas with a native app, I might have five different development teams, because I need to develop an app for iOS and an app for Android, and then the 500 other kinds of Android that are running on T-Mobile versus Verizon versus whomever’s custom version of the kernel. So essentially what you’re saying is, when I’m building a high-security app, or something where I need to implement things like anti-reverse engineering or more secure network protocols and have that lower-level hook in, I need to look at a native app versus these other kinds of apps.

Sven Schleier 00:05:13 Yes, definitely. Definitely. If it comes to the higher-level stuff like securing the network, we’ve got a browser stack that’s nowadays pretty much okay — you’ve got support for certificate transparency and other stuff that covers a lot of the basics — but the moment you’re afraid that specific attacks might happen, where a lot of additional headers get stripped off, for instance, then you have to up your game. But it’s really, again, your threat model. If you think your users are only at home and they have a separate VPN that the communications happen across or something, then the network stack is not really that important at all. And if you look at the current state of things with COVID-19, many people have to stay home anyway, so that’s a whole different network security game altogether. But if you now look, for instance, at before

Jeroen Willemsen 00:05:58 COVID-19, where people used to hang out at pubs — and, you know, oh, you forgot that your friend paid for this and that, and you still have to pay him back. So you have to go online quickly, because your mobile banking told you it was this easy. So you want to go there, and then all of a sudden you’re on the insecure shared wifi, because you forgot that you should have turned on 4G instead of your wifi. And then all of a sudden we have a whole different security game in terms of the network stack that you’re running on, and then pinning and all the other jazz becomes important.

Justin Beyer 00:06:30 So again, it’s a mix of both: for my app, what am I doing in my app, and how secure do I need to make it for my users by default, without forcing my user to think about it and go, oh, I was on the shared wifi, hold on, let me get on LTE, because that might be a little bit more secure — or at least in my head it might be. Exactly. So moving a little bit off of the discussion of mobile apps and what kinds there are, just to make an easy comparison: most developers, I would argue, are doing something with web apps of some kind today, and most developer education is focused towards building the next SaaS app and all those things. So how would you say that a web application differs from a mobile application from a security perspective? What are the different areas of concern that I’m going to have for a mobile app versus a web app?

Jeroen Willemsen 00:07:17 So, just a few things — I mean, compared to a web application test that you would do from an attacker’s point of view. One of the major things that is really different is the amount of data that is actually stored in a mobile app. When you look at web apps, of course, there’s also a lot of data stored, but in mobile apps it’s just getting more and more. I mean, we have heaps of gigabytes nowadays available, and mobile apps make extensive use of that, meaning a lot of data is actually there. And it’s of course also very easy to store this information securely with iOS and Android if you make a native app — but still, there is a lot more information actually stored in mobile apps. The other thing, as was already mentioned, is the reverse engineering factor.

Jeroen Willemsen 00:08:01 So a lot of apps — if you think about OTP apps, also about banking apps, and especially gaming apps — have a huge attack vector of reverse engineering, because a lot of the logic is within the game, and also within OTP apps. And if you find the actual algorithm that you need to break, that allows you to reverse engineer the whole process and, for example, generate your own OTPs, if we stick with the OTP example; or you might be able to cheat in games. Then you have a big attack vector, and that might actually really have an impact on the companies, especially in games. So reverse engineering is definitely one of the big impacts that you have for mobile apps compared to web apps.

Justin Beyer 00:08:48 Yeah. And I’m assuming that, unlike games on your computer, you can’t have an anti-cheat engine that’s running from boot, if we look at some of the newer anti-cheat engines. Whereas on a mobile device, we’re actually talking about having to implement something to obfuscate the code in such a way that I’m going to protect against reverse engineering. But just changing directions a little bit: local storage is definitely something that’s unique to mobile versus web. So how do you protect that in your app against someone losing their device or getting it stolen? Are you a hundred percent reliant on the actual underlying operating system implementing some kind of protection via PIN or passcode? Or is there something you can do within your app to actually protect the data that you’re storing in there? Like, let’s say you’re storing certificates, for some reason or another, in your app’s local storage. How would I protect those kinds of secrets within it?

Sven Schleier 00:09:40 In the MSTG we have a nice comparison in terms of security, which you can basically follow. What we always start off with is: if you’re not afraid of data being stolen in that sense, you can start off at least as close to the secure world as possible. So start with storing stuff in the keychain, or have keys in the Android Keystore with StrongBox-backed encryption. Start as strong as possible. But the moment you might be afraid of something going wrong, then it’s better to actually move that key material away from the mobile device and towards the server. So in the app you authenticate towards the server, you get your decryption key, and you can start to decrypt the local content. And if your device is stolen, you just block your app instance, for instance, and then you’re pretty sure nobody can access the data anymore.

Sven Schleier 00:10:29 And the more you go towards having keys or data stored in something like shared preferences, the less secured part of Android storage, or the less secured file system of iOS, the more you start opening yourself up to a lot of different attack vectors — especially if there’s no interaction with the server necessary anymore. Because that means the moment the device gets stolen, and the passcode might be easy to guess, or the device is obtained while being unlocked and kept unlocked, then you’ve got a whole different thing. But as you can already hear while I’m telling this, we’ve started introducing different threat scenarios. So the moment you think, okay, I’m only worried about a locked device being stolen, it’s fine with just moving keys into the secure world and having your passcode protect that. But that also immediately sets a limit, or a bar, for your minimum iOS and Android version.
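The “get your decryption key from the server” pattern Sven describes can be sketched in plain Kotlin. This is only an illustration of the idea, not code from the MSTG: `fetchKeyFromServer` is a hypothetical stand-in for an authenticated API call, and AES-GCM via `javax.crypto` is one reasonable cipher choice among several.

```kotlin
import javax.crypto.Cipher
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec
import java.security.SecureRandom

// Hypothetical stand-in for an authenticated backend call. Because the data
// key never lives on the device, blocking the app instance server-side makes
// the locally stored ciphertext unreadable — the property Sven describes.
fun fetchKeyFromServer(): ByteArray = ByteArray(32).also { SecureRandom().nextBytes(it) }

// Encrypt local content with AES-GCM; the 12-byte IV is prepended to the blob.
fun encrypt(plain: ByteArray, key: ByteArray): ByteArray {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, SecretKeySpec(key, "AES"), GCMParameterSpec(128, iv))
    return iv + cipher.doFinal(plain)
}

// Decrypt a blob produced by encrypt(): split off the IV, then open the ciphertext.
fun decrypt(blob: ByteArray, key: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, SecretKeySpec(key, "AES"),
        GCMParameterSpec(128, blob.copyOfRange(0, 12)))
    return cipher.doFinal(blob.copyOfRange(12, blob.size))
}
```

The design point is that the device holds only ciphertext; a real app would cache the fetched key in memory for the session and re-authenticate to get it back.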

Sven Schleier 00:11:18 Because prior to those versions we didn’t have hardware-backed keys, so you can’t serve those devices anymore. On the other end, if you’re afraid of a device being stolen or robbed while unlocked, because there’s really high-risk storage in your app, then you’d need something else — but then you’re often too late anyway. But let’s say you want more of a hybrid model, because you want to allow users to have their devices unlocked for hours — which is not really what we’d recommend in the first place, but of course, we all know people: people are people, and they want ease of access. So they want the simplest guessable passcode anyway, because “I’m always forgetting my passcode, and it’s so hard, and facial identification is such a hassle.” You know, those types of things. Then we have to start interacting with the server, at least.

Sven Schleier 00:12:04 So that at least at some point in time, when the easy-to-unlock device gets stolen — which a lot of your users will have — you’re covered. You’re not creating a complicated app for a complicated device user; you’re creating an app for the masses. So you want to make sure you can also serve that boy or girl, woman or man, who doesn’t want to go all the way into securing stuff. He just wants it to work, because he never cared about security — and why should he? His life has been secure enough till now, until his device gets stolen. If that’s your threat model, then you should start thinking: let’s make sure you get the server in there, and that you get the key from the server. And yes, then the user has to authenticate; yes, that’s a hassle — but apparently you have such high-risk data on the device right now that it’s worth it. But if you combine these things the wrong way around, it gets hairy. So it really starts with actually having a proper threat model, and then the list starts — similar with reverse engineering, you can pick what you want there.

Justin Beyer 00:13:02 We had done an episode with Adam Shostack discussing threat modeling — I don’t think it’s published yet as of the recording of this episode. But in discussing that, it almost makes me think that in some cases, in an app for a general user, you may not necessarily be worried about the data that’s stored on there — let’s say an email application — but once I go to a corporate environment, now I’m starting to worry about what kind of data is sitting in it. And maybe this is less of, from a vendor perspective, saying “yes, our app is a hundred percent secure all the time, no matter the user, no matter the use case; we’ll protect your data, whether it’s cat pictures or your billion-dollar intellectual property” — and instead saying: look, we’ll do our best, but if you want to enforce these things, you need to look at something like a mobile device management solution, where you can enforce these kinds of controls, like Face ID, or having 15-digit pin codes that can’t be one, two, three, four, five, six, seven, eight, nine, ten. And circling back again, the enforcement doesn’t necessarily live within the app, but instead on the actual operating system side — almost like the discussion.

Justin Beyer 00:14:13 And I guess this would be an interesting comparison here: would you say that it’s almost closer to how we used to worry about application security in that traditional client-server architecture? When we used to stick a thick client on someone’s desktop and say, run this application and it’ll connect to our server on the backend and then pop up on your machine — versus the current model of: just open your web browser and go to the webpage, it’s available everywhere, here you go.

Sven Schleier 00:14:39 I think there are quite some similarities to it, indeed, because in the end you end up with, hopefully, that client being secure enough to do something, whereas the web makes it easier. But on the other end, if you go back to the old client, that client had many more capabilities, which we don’t have with the web in its current shape — of course, the web is changing all the time, so we might end up with no need for native mobile apps eventually, but we don’t see that yet. So in that sense, there are indeed some of those things going on there. But the difference is, when we look at thin clients and doing web first — going from client-server technology to more of a software-as-a-service model — you can see that we already tried moving away from native apps to SaaS a bunch of times.

Sven Schleier 00:15:28 And every time we start talking back, so that’s a bit of a different journey. Whereas with the, you know, Sam’s journey, we go, well, let’s say up, I don’t know, left or right, but you know, more towards the server again. And at the native, we had Cordova, we particularly over exist for quite awhile. We got quite a lot of different hybrid technologies as well, but for every one of those, you can see that they’re used by some, but eventually we ended up with again, more native apps. And now we have a lot more different frameworks that are being used, which have create results. I mean, if you look at some of those hybrids, Epic analogies nowadays, they’re pretty fast and it’s looking beautiful. But again, we see also new native apps popping up next to them. So it’s not like it’s a journey that went off towards SAS.

Sven Schleier 00:16:14 Like we’re used to. On the other end, to be fairly honest, if you look at managed workspaces, you see that the thin client is coming back again, but then purely in the cloud. So I don’t think we really went away from the thin client model in the amount that we would like to believe. But from a consumer perspective, yes, we did, and we love it — I mean, Chrome, Firefox, Safari will bring you anywhere. But overall, in many different company environments, we end up using that same Chrome to connect to a remote desktop web interface, and we start over with a thin client, just with a web interface on top of it. We did the same thing again; it’s a journey.

Justin Beyer 00:16:55 Yeah, I always like to think of Microsoft RemoteApp: I stick a RemoteApp proxy in front of my on-prem app, but now it’s a SaaS app — sorta, kinda, ish, maybe. Yes. So, changing directions a little bit here, and kind of a silly question — we’ve mentioned it and kind of covered it, but I just want to make it really clear: when my app is interfacing with an API, I still need to worry about all those API security issues — the authentication issues and session management, or cross-site scripting issues, or local file inclusions. That’s all still a problem, even if my mobile app is a hundred percent secure, right?

Sven Schleier 00:17:35 I mean, of the different things that you were just mentioning, cross-site scripting, for example, is only an issue if you have a web view. So for the things that were already mentioned — like, for example, Xamarin, or Cordova, whatever is web-view based — things like cross-site scripting are definitely something you need to consider and take care of in your mobile app development. It’s different with authentication and authorization and all of those things: that is something that we do not really cover under mobile security testing. I mean, we touch on it, definitely, and we explain it also, but it’s something that is more on the server side, and on the client side it’s very similar to web apps, I would say. On top of that, you have the benefit of using things like local authentication.

Sven Schleier 00:18:20 We were already talking about that — where you have this additional step, like biometric authentication. But in that sense, it’s shifting a little bit, especially with things like cross-site scripting. There’s always an added nastiness to it: the moment that you start uploading photos to an API from your native app, who’s guaranteeing that a native app is posting to your API? The moment you start believing as a developer that, oh, it’s only the mobile app and nobody knows about this — apparently you didn’t do enough anti-reverse engineering to guard it — and all of a sudden the JPEGs start to have very funny binary code in there, we’re getting a bit into trouble, aren’t we? So in that sense, remote file inclusion still lives. I mean, you can’t trust what a mobile app is uploading, because you don’t know what the “mobile app” really is. Indeed, as was already mentioned, that’s not really what the MSTG is designed for — of course we have a few controls in the MASVS, and we describe them quickly in the MSTG, to remind you of the fact that you have to implement your security controls at the server too, which automatically means the kind of filtering you have to apply over there as well. But we’ve already had many funny cases at different customers where people were so beautifully focused on the mobile app that they forgot that other agents can access their API as well, unfortunately.
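The upload point above is concrete: the server cannot assume a well-behaved mobile client, so it has to validate what arrives. A minimal, illustrative Kotlin check (not from the MSTG itself) is verifying that a claimed photo actually begins with the JPEG magic bytes before accepting it:

```kotlin
// Hypothetical server-side check: reject "photos" that are not actually JPEGs.
// A real service would also enforce size limits, re-encode the image, and
// authenticate the caller; this only shows the don't-trust-the-client idea.
val JPEG_MAGIC = byteArrayOf(0xFF.toByte(), 0xD8.toByte(), 0xFF.toByte())

fun looksLikeJpeg(upload: ByteArray): Boolean =
    upload.size >= JPEG_MAGIC.size &&
        upload.copyOfRange(0, JPEG_MAGIC.size).contentEquals(JPEG_MAGIC)
```

Because any agent can hit the endpoint, this check belongs on the server, never only in the app.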

Justin Beyer 00:19:34 So having my slash mobile endpoint, isn’t a, a safe security mechanism to prevent, you know, these security issues. And as you’re mentioning, you know, this isn’t something necessarily in scope for the mobile application security testing guide and you know, that kind of stuff. And maybe this would be a more appropriate, you know, looking at an, the OSS Brown, the web security testing guide to actually review those APIs and, you know, verify our authentication, all those kinds of security things, and then be able to say, okay, our API secure now let’s make it work with our mobile app. Yep, exactly. Awesome. So now just diving in a little bit here, what would you say would be common areas that developers, when they’re building their, you know, mobile apps should be focusing on from a security perspective?

Sven Schleier 00:20:20 What you should really focus on is always starting out with the threat model, because the moment you start enumerating controls, it’s endless. In the Mobile Application Security Verification Standard we have level one and level two, and you always start with level one, with the hygiene controls you require. A great example: if you look at network and storage, we start with the basics. At the storage level, before we start talking about encrypting your storage and such, let’s start with: okay, only store stuff that you really need to store, and leave other stuff at the server. If we look at network, don’t start pinning immediately — let’s first try to have a decent certificate chain in place, let’s first try to ensure you’re using decent protocols, and then we move ahead. So start with the basic hygiene, and the MASVS can clearly guide you, by means of levels, on what you should do first.

Sven Schleier 00:21:12 And of course, if you have a proper track model, you can always say no to certain level one controls because you believe that the risk it’s covering is so little that you’re okay with that. The funny, but sad and consultancy answered is it depends. Unfortunately it really, it starts with threat model first, then take the hygiene controls that you’re really sure off. And then it won’t, you debate it in your model. And if you still have a lot of risks left that you really need to cover, start jumping into the higher end controls. So really starts with basic impotent at the station don’t store stuff. You don’t need to store get your network security controls in place, make sure that you did your authentication properly, wherever necessary. And if you thought about the authorization, what altogether, uh, that’s make sure you have something like architecture documentation.

Sven Schleier 00:21:57 And that sounds a bit weird. And I know as I did a lot of development work, we don’t really like writing documentation that much, to be honest, I mean, many of us don’t, but if you don’t have a common understanding of what you’re doing, it doesn’t matter what you do because in a year time when the next development team will have the problem, not me anymore and day start, you know, breaking controls because they had a different understanding of things they get worse. So let’s make sure you first gets common understanding. So it’s not only about the mobile control she implemented, but it’s also about the process around development. Let’s make sure we first do our proper sulfur engineers and take a worksheet. Arius take collaborating serious, take documenting serious. So we know what we’re doing. Nobody asks you to write a Bible, not at all. Just make sure that we understand why this code is there. And not that the code speaks for itself magically. You can always hear my folks when they read the code. No, that’s not how that works. Let’s make sure we first understand why that code had to be written in the first place. And once you got to that level, you’re, it will, it will vary massively in terms of what you have to do next. It really depends on what you’re building and why

Justin Beyer 00:23:46 And just to dive in — I know you had mentioned certificate pinning, and I know Sven mentioned web views earlier. I just want to dive into a couple of these things to make it clear to our listeners what they are. So would you be able to give me a quick summary of what a web view really is, and how you would actually go about securing that web view?

Jeroen Willemsen 00:24:03 Let me answer that one. A web view is just a very simplified browser embedded in a mobile app. So, for example, you will not have your URL bar and other things like that — it’s very simplistic. You see this a lot in those hybrid apps we were talking about earlier. In order to actually secure them properly, there are a few different flags that you can set when you’re defining your web view in your code. One of the first things is, of course, disabling JavaScript — which many times will not be possible, because many times you do want to use JavaScript — but there are a few other things you can do in order to harden a web view. For example, the different kinds of URL handlers that you want to block, like access to the file system, which is part of the sandbox.

Jeroen Willemsen 00:24:50 So there’s still different kinds of attacks where you have, um, file inclusion also within the mobile app. So this was demonstrated also while back. So you need to, um, harden your web view as much as possible, which is, as I was saying, JavaScript, DOL schema that you should be blocking that you do not want. And, um, also for example, on Android, there’s something called add JavaScript interface. And this is where it can become a bit dangerous because this is where you’re bridging more or less your code, your Java, or Kotlin code an Android with the interface to your web view. And if no, somebody is actually men in the middle or has maybe some stored, XSS already placed on the service side, then this JavaScript piece will actually be able to execute a code that is available via the edge JavaScript interface. And if this is something sensitive, then it can become quite dangerous. So this is where you would need to understand what you’re actually exposing the JavaScript interface in your web view.

Justin Beyer 00:25:47 Fantastic, thank you for explaining web views a little bit. So essentially it’s just a trimmed-down browser sandbox that has a lot of its own issues on top of the built-in browser. But are those the same kinds of issues? So, for example, if I had a vulnerability in Safari on iOS, would the web views still be impacted by that same kind of vulnerability?

Jeroen Willemsen 00:26:10 Oh, for web views it’s a bit of a complicated thing, because the web view you’re working with might not be the one you think you’re working with. In general, the engine in the browser you’re using might be a different brand than the engine in your platform’s web view: on iOS the main browser might be Chrome, but the web view embedded into the application is tied to the platform’s version of Safari; and the web view you were offered on an earlier version of Android was, again, quite a different thing. So they came up with a beautiful idea — and I think it’s actually quite brilliant — to have

Sven Schleier 00:26:44 A installable with fuel, like to grow with units. You can upgrade through the interplay through the Google play store, which helps a lot with having a more secure web view. But then again, you should wonder, and maybe okay, if you have to wonder disk, let me first make that note, do you might go into the wrong direction at this point in time, but the moment that you’re starting running on that web view, you don’t know which version is dare she, we don’t know what was filmed. There would be the teacher working with in general, of course you can check for diversion, but it will be kind of sad that if you open up your app, you’re asking your customer to first upgrade your Chrome. If you via Google play store before you can continue. I mean, I don’t think that’s really the way to do this, but overall you just are not sure what you’re going to have there.

Sven Schleier 00:27:23 And of course, many of those browser engines have a lot of, uh, commonalities in terms of how stuff is implemented, which will indeed affect both the web view as well as the browser, so you end up with both things at the same time. There are differences, of course. If you open up the browser and it's vulnerable at the browser level to certain code execution, all you get for free is what you would normally also have in your normal browser; that's how you would handle a mobile website, and there aren't a lot of added complications there. But the moment you're having that mobile app, and you figure, okay, we found ways to secure our data, so let's open up that web view — now, all of a sudden, that sandbox with the secure data is opened up to whatever is in that same web view. So now we've increased the risk. So those things basically come together.

Justin Beyer 00:28:06 Yeah, okay. So essentially, you know, web views are going to increase your risk, but it's not exactly the same as the browser that's running on the device, and it can be all kinds of different depending on which operating system you're running on the mobile device.

Sven Schleier 00:28:18 Just trim it down as much as you can, and that will really help.

Justin Beyer 00:28:21 Exactly — only use what you need and enable what you need, rather than enabling everything and then turning things off selectively later. And then you had mentioned certificate pinning, and not necessarily worrying about that right at the start. When would you want to start to worry about that, and what exactly is that going to mitigate?

Sven Schleier 00:28:37 So I could give a very lengthy answer that would tire your listeners a lot. I gave a presentation about it, "To Pin or Not to Pin", which was focused on iOS, and another focused on both platforms — I think it would be great to include the links. But there are three things that you always come back to. The first thing when it comes to pinning is: the first question you should wonder about is, okay, is my user in a network environment where this makes sense? So where can we be eavesdropped on — public wifi, networks that are really not trustworthy, or any of the cell networks in the country that my user is in. For instance, you can easily imagine that some of the more government-controlled countries might have a different network layout than those that are a bit more open, basically, and then you might still wonder what the government can or cannot do.

Sven Schleier 00:29:26 But the moment you start worrying about those, you're developing quite an interesting app, I would say. The thing you should be more worried about is: what is my user doing? Say I have some sort of financial app and I want to secure the communications, and I know my users are greedy and will install those beautiful free apps. Did you consider what happens as long as the user clicks okay on the next device profile or certificate that they supposedly need to install with this app? So, for the developers not knowing what's happening right now: basically, what many people try to do is scam users into installing apps for free — like clones of popular apps, or not even functional clones. But you get the idea: it's for free, and you have to install a certificate with it, and that certificate can often be used to bypass any network protections, because it's similar to a certificate offered by the server. It's eavesdropping, basically — or yes, a man in the middle.

Sven Schleier 00:30:17 Um, at the other end, in the Netherlands we had this beautiful story of DigiNotar — just Google it, because otherwise I can rant on about TLS insecurities in general for ages, and I don't think that's the appropriate thing to do right now. But the nice thing is that it basically proved that the moment a CA gets compromised, attackers can hand out certificates for any domain. So the moment you're really worried about your network, in terms of what you're communicating with your server — if you're really worried about the protocols you're using to authenticate, and that people might reuse that material for lesser good, let's say — that's the moment you start thinking about pinning. But then there are two other issues to think about. If you want to protect the keys that you're using for your pinning, and use those keys for a longer period of time: are you actually able to secure the private key of that key pair

Sven Schleier 00:31:06 well enough at your server? And are you, as a team, willing to put in all the extra effort of reusing the key for a very long time? At the other end, some say: we just rotate keys more often and we have a different strategy, so we forget about pinning in general, because it's hard to do key management — or secrets management as well; you might have different fish to fry. So it starts with: do I distrust the network, or is the risk too high? Then yes, I can start pinning on my public key. But then still ask yourself: am I really able to protect the private key well enough that it makes sense to pin to the public key?
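To make the public-key pinning being described here concrete, the following is an illustrative sketch (not code from the MSTG itself): the app ships the SHA-256 hashes of one or more expected public keys, and at connection time compares the hash of the key the server actually presents against that pin set. The key bytes below are placeholders standing in for real DER-encoded key material.

```python
import base64
import hashlib

def spki_pin(public_key_der: bytes) -> str:
    """Base64(SHA-256) over the DER-encoded public key info — the same
    shape of pin used by HPKP-style and Android pin sets."""
    return base64.b64encode(hashlib.sha256(public_key_der).digest()).decode()

def key_is_pinned(presented_key_der: bytes, pin_set: set) -> bool:
    """True if the key the server presented matches any pinned hash."""
    return spki_pin(presented_key_der) in pin_set

# Placeholder key material; a real app would pin the DER-encoded
# SubjectPublicKeyInfo extracted from its server's certificate.
current_key = b"server-public-key-der-bytes"
backup_key = b"backup-public-key-der-bytes"   # kept offline, separate from the live key
attacker_key = b"mitm-public-key-der-bytes"

# Ship the current pin plus at least one backup pin, so rotating the
# certificate does not lock every installed app out (a self-inflicted DoS).
pins = {spki_pin(current_key), spki_pin(backup_key)}

assert key_is_pinned(current_key, pins)       # normal connection succeeds
assert key_is_pinned(backup_key, pins)        # rotation to the backup key still works
assert not key_is_pinned(attacker_key, pins)  # a man-in-the-middle key is rejected
```

Pinning the key hash rather than the whole certificate is what lets the certificate itself be reissued without breaking the app, as long as the same key pair is reused.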

Justin Beyer 00:31:39 Okay. So essentially it's that risk-versus-reward benefit: how much do I really trust the network? And as with everything in security, it eventually ties back to your threat model. Where are your trust boundaries? And, you know, when you talk about zero trust, we always say we don't trust the network until we establish trust in the network, and those other newer security approaches. But maybe from that sense of: do I really need to worry about certificate pinning and add all this additional workload to the development team? It really becomes that risk versus reward — or can I just rotate certificates every year or every six months, instead of worrying about this very long-term public key hash pinned to verify that they're using the correct certificate when connecting to my application?

Sven Schleier 00:32:20 Exactly. Of course, you can ease the pain by making sure you've got the public key set up and a bunch of backup keys as well — just make sure you don't store the backup private key in the same location as the other private key that might have been compromised. And yes, we've seen customers do that. So that's where you can start. So you as a mobile app developer don't have to worry as much, but just make sure you communicate to your server guy that he shouldn't be deploying something other than the things you agreed on, because then you basically kill the communication again. And more importantly, instead of thinking about "is my network secure?", think about "is my protocol secure?". So how long would a session last? Do I authenticate with something like the Secure Remote Password protocol, where we don't send the password at all, but send something else over the wire that the server and the client have to calculate, and so on? Or do we have some sort of harder-to-replay mechanism where we sign off messages, so you can see them but you can't change them?
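The "sign off messages" idea mentioned here can be sketched with a keyed MAC over the message plus a monotonically increasing counter. This is purely illustrative, not a real protocol: an eavesdropper can still read the message, but tampering changes the tag, and replaying an old counter is rejected.

```python
import hashlib
import hmac

SECRET = b"shared-session-key"  # illustrative; a real app would derive this per session

def sign(counter: int, body: bytes) -> bytes:
    """MAC over counter || body: readable on the wire, but not forgeable."""
    return hmac.new(SECRET, counter.to_bytes(8, "big") + body, hashlib.sha256).digest()

class Server:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, counter: int, body: bytes, tag: bytes) -> bool:
        if counter <= self.last_counter:  # replayed or stale message
            return False
        if not hmac.compare_digest(tag, sign(counter, body)):  # tampered message
            return False
        self.last_counter = counter
        return True

server = Server()
msg = b"transfer 10 EUR to alice"
tag = sign(0, msg)

assert server.accept(0, msg, tag)        # fresh, authentic message: accepted
assert not server.accept(0, msg, tag)    # straight replay of the same message: rejected
assert not server.accept(1, b"transfer 10 EUR to mallory", tag)  # tampered body: rejected
```

Using `hmac.compare_digest` rather than `==` avoids leaking information through timing when comparing tags.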

Sven Schleier 00:33:24 That could already help a lot. But of course, if you have your standard app with a standard way of authenticating, and an attacker could easily replay those things, and you're worried about that, then you should think about pinning — if you know your app is used a lot in environments where it's easy to eavesdrop, basically. Which then combines with: what version of the operating system are you running on? For instance, iOS has had ATS in there for quite a long time, so a lot of hygiene is being done by the operating system already. And with newer versions of Android, it became much harder to use custom-supplied certificates other than the ones the operating system wants. So if you only have users with very modern devices and you base your development on that, then again, there's less to worry about there.

Sven Schleier 00:34:12 But the moment you want to support everybody, including those people with very old devices where fewer controls are in there, then pinning might not be such a bad idea in general, as long as you take care of the things we just talked about. Maybe just one thing to add to what Jeroen said: there's also the operational aspect. I mean, you have all the technical components that you need to implement, but you still need to be sure that the operations team that is maybe changing the certificate on your servers is actually talking to the development team. Because I've seen, in the past, a lot of clients that were quite happy to finally implement SSL pinning, and one year later they actually had a denial of service against their user base, because the certificate was updated without updating the pin in the app. So this is also something that you really need to take care of, and it's quite crucial when you're implementing SSL pinning.
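For Android specifically, this kind of pin set can be expressed declaratively in the Network Security Configuration (available since Android 7.0) rather than in code; the `expiration` attribute makes the pins lapse gracefully if a rotation is ever missed, instead of permanently locking users out. The domain and pin values below are placeholders.

```xml
<!-- res/xml/network_security_config.xml (placeholder domain and pin values) -->
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2022-01-01">
            <!-- base64(SHA-256) of the current key's SubjectPublicKeyInfo -->
            <pin digest="SHA-256">7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=</pin>
            <!-- backup pin for a key kept offline, so rotation cannot lock users out -->
            <pin digest="SHA-256">fwza0LRMXouZHRC8Ei+4PyuldPDcf3UKgO/04cDM1oE=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

The file is referenced from the manifest via `android:networkSecurityConfig="@xml/network_security_config"` on the `<application>` element.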

Justin Beyer 00:34:56 Okay. So it's not just "can I develop it and put it in the app", but "can I actually operate this architecture and system to support this?" Do I have the appropriate automation in place to ensure that if a certificate gets changed on our infrastructure, I can change it in all these mobile applications and not cause a huge denial of service? So, by implementing these secure network communications, does that actually help with some of the conversations around spoofing cell towers on LTE? I know law enforcement uses the Stingray device, but it's been shown that you can actually do that just from your computer with some network cards. Is there any actual security benefit from that perspective by doing these types of certificate pinning, or is there something else you would have to do to protect against that?

Sven Schleier 00:35:43 So it starts with certificate pinning — or actually public key pinning, because, like you already beautifully mentioned, you'd rather pin against the hash of that key than the full certificate, so we have less work to do. And it can really help, because the moment somebody starts, uh, jumping into the network — and now I'm unsure: has there been a previous episode where TCP/IP was covered a bit, before we start talking about spanning trees and all that jazz?

Justin Beyer 00:36:07 I think we’ve done a little bit of network coverage. Um, but I don’t think we’ve delved too deep into the concepts of spanning trees and, you know, remote spanning tree and the different protocols in that sense.

Sven Schleier 00:36:17 Okay, so let's keep it simple. The moment you're on a wifi network, you're sharing it with others who might be using the app that you're developing. Or, you know, as an attacker, you're in the same space as the guy with the banking app. Keeping it simple: that attacker has a computer in the same wifi network and starts saying to all the computers, hey, the shortest path to the internet is at this address, come along. And all the computers and all the mobile devices go, hey, okay, that's indeed a shorter number of hops and that's very fast, let's go there. So the moment an attacker in that same space starts doing stuff like that, they can start seeing the lower level of communications, which is awesome or awful, depending on which side you're on.

Sven Schleier 00:36:57 Of course, I'm simplifying, but in the end what it boils down to is: what did you do at your level? So at the, um, TLS level, the moment you have certificate or key pinning in place, and now this computer starts offering certificates for the domain of your banking app, but they don't match what you've got in your app — then you're secure. The only thing is, of course, that this banking app no longer works; the user has been trying everything and it doesn't work, and it's just weird. So the worst-case scenario is that you, as a developer or a banking provider, get a nasty review on Google Play, like, hey, come on, man, my app is not even working here. But that's maybe not as bad as having to explain to the bank why the money got lost.

Sven Schleier 00:37:41 So in that sense — and it's an ongoing theme during this conversation — it's about your threat model. And next to that, it's about: okay, how far do you want to go? In that sense, it does secure against these types of attacks quite a lot. Of course, the moment the attacker has access to the device, pinning doesn't make sense; it doesn't do anything. So the moment I can, during this network conversation, install some sort of malware or alter something else on your device, that basically goes a level deeper than the app's communication, and pinning doesn't help at all. But that's quite hard. So in that sense, pinning can help a lot.

Justin Beyer 00:38:19 So it's like that conversation we have on the endpoint side: once you have administrative access or root access, it really doesn't matter what the app does to secure itself — it's game over once they have the operating system.

Sven Schleier 00:38:29 Exactly.

Justin Beyer 00:38:36 So, just changing directions a little bit, how do you see security differing between the two major operating systems on mobile devices, iOS versus Android? You know, are there huge differences in areas like how data is stored, how you would implement local authentication, how you would implement anti-reverse-engineering, or, you know, secure network communications? I know you had mentioned a little bit about how, once you start getting into these more security-focused things, you're bumping your minimum version levels up and you're not supporting everyone. But just focusing on, I guess we'll say, the latest versions of these things: what do you see as the big security internals in these devices, and how are they beneficial to you as a developer?

Jeroen Willemsen 00:39:14 I would say that in many areas they are actually quite the same — meaning, for example, on iOS you have local authentication with different kinds of biometrics, and you have the same thing on Android, with local authentication through your iris or through your fingerprint, whatever. As we were already saying, when it comes to data storage, on Android you have the Keystore, and on iOS you have the Keychain. So there are many things that nowadays are very, very similar, you could say. The security details change all the time, but I would say the major difference — and this is where it really becomes inconsistent on the Android side — is that on iOS, as a developer, I can just assume that there is a Secure Enclave; I can use the Keychain, and all the different hardware features are actually available. And this is very, very different on Android, because when you develop an Android app, you obviously want to have a huge customer base, meaning you would actually need to go down quite a bit into the older Android versions, which maybe cannot support all the different features, or maybe don't even have all the hardware features.

Jeroen Willemsen 00:40:15 There are a lot of cheap Android phones, for example, that do not even have a TEE or a secure element or something like this, so there's not really a hardware-backed Keystore. And in these areas, I would say, lies the big difference in terms of security, because some of the devices already have a lot of limitations in the hardware, simply because you have such a zoo of different vendors and hardware vendors.

Justin Beyer 00:40:38 Yeah. So essentially that single source of hardware for iOS does give it that ease of development: you kind of know what to expect if you're on a given iOS version, because you know exactly which devices stopped being supported at that version.

Jeroen Willemsen 00:40:50 Not exactly that, but on iOS it's just very much easier to maintain, because you know that these hardware features are there on all devices.

Justin Beyer 00:41:02 So between iOS and Android, would you say that one of them has better process isolation? So, for example, say I downloaded a malicious app and it's running in the background, and now I download your app and I start using it for, you know, making my million-dollar banking transfer to my other account — does one of them provide better security in that sense?

Jeroen Willemsen 00:41:22 So to answer that question, we have to take a bit of a trip down memory lane, because it has differed quite a lot. Although I first have to make a big disclaimer: I'm more of an application security specialist. If we look at the AOSP project

Sven Schleier 00:41:34 behind Android, and if we look at the amount of code that has been published until now — and how much hasn't been, in terms of how iOS has been built up — there are so many things to say about that, and there are awesome presentations from specialists in that field who really dive into that area. But if we look at the application-facing parts — so what we can do as developers, because I think that's something we can answer — then there's been quite a journey for both Android and iOS. So, for instance, Android intents: we started out with having everything open — if you just called an intent, you ended up somewhere — and then we ended up with extra security controls saying, for instance, okay, you can only use this inter-process communication if you are signed by the same key, the moment you get released into the app store.

Sven Schleier 00:42:21 So you can make sure that only certain app components can talk to each other, if they're from the same vendor, for instance. At the other end, you've got iOS, where they started using things like entitlements to further secure the claims required to do certain inter-process communication, which helped a lot in securing that communication. But overall, of course, these are controls that are implemented at the application level, and how well the boundary beyond that — the sandbox — has been secured has been quite different, and that's harder to assess, of course, in terms of what's better and what's not. From what we've seen so far: if you look at attacks like the overlay attack, for instance, on Android — where a malicious app pulls an overlay in and just makes you think you're communicating with your app, but in the meantime it's clicking through to the other application and doing the actions it wants to — that's been given a lot of attention on Android, for instance, and has indeed been quite a problem for a while.

Sven Schleier 00:43:15 At the other end, if you look at keyboard usage, for instance, we saw custom keyboards on Android quite early, and everybody went, oh yeah, custom keyboards are a bad idea. But then at some point in time Apple opened that up as well, because, hey, custom keyboards are awesome — you might want a different way to communicate. And then people realized we're now in the same pile of trouble as we are on Android in that sense. And when it comes to shared clipboards, it gets a bit harder, because how do you, for instance, let your password application communicate with the application that wants to authenticate? You need a way to let them communicate, and if all you have left is the clipboard, yeah, then both platforms have the same problems. So in that sense, when it comes to those shared facilities that are, you know, designed to be open, we get in trouble on basically both platforms. But when it comes to the more detailed things beyond that, luckily there are beautiful talks by different people who are specialized in that, and they're a great thing to watch. As a developer, you'll learn lovely things about low-level programming, and I would really advise you to do it in general, so you can appreciate the ease of the work we get to do to develop our apps, basically.

Justin Beyer 00:44:22 So do you see a benefit — and you mentioned the signed-apps concept — is there a benefit to the iOS, you know, almost walled-garden approach to the App Store? Whereas on Android, we're seeing, you know, more of a wild west, where signed apps are kind of an optional additional feature if you'd like to use it, sort of, kind of. Whereas on the Apple side, it's: you will go through our App Store and you will be signed — excluding the concept of a jailbroken device.

Sven Schleier 00:44:51 Yeah, so that's, um, that's an interesting thing. It starts with — actually, let's go a little bit into the walled garden, because remember we started talking about bypassing the TLS hygiene by installing that sideloaded app. So there are still a bunch of websites that offer installing a sideloaded app on iOS in general, as long as you allow a developer certificate, or some enterprise certificate that signed the app, to be installed, and then you can still use those apps. Of course, that doesn't mean that that app all of a sudden gets control over the complete device or whatever, because you still need to assess its rights — or, yes, it might have some other bypass that we don't know about, which will get you into trouble. But if you look at what Android is currently dealing with, there's a much bigger amount of malware, statistically, on Android than on iOS, for instance.

Sven Schleier 00:45:36 So in that sense, Android has a lot more security issues. So having a walled-garden type of approach, where you have this review and you can't just download anything, indeed helps quite a lot. At the other end, we also see, because of that, this faith of developers that "Apple will fix this". So why should I encrypt using the Keychain? I'll just use shared preferences — I mean, seriously, who's jailbreaking his iPhone anyway? And then all of a sudden, we got things like Pokémon Go: people being too lazy to walk around with GPS, or going to Japan for the final Pokémon and still wanting to obtain those, and therefore jailbreaking. And then, whoa, there we are: we've got the full open space, and that simple sideloaded app will give you all the problems you had before. So you'd need to be sure that your users are not among those — and that means you need to know your users. Good luck with a multi-million-user app.

Sven Schleier 00:46:30 Yeah, sure, then indeed you get certain guarantees, because Apple looked at everything that got installed on the device. But if you don't know your users — which is exactly the case with a multi-million-user app — then the pitfall you have is a security approach where you think Apple took care of everything for everybody and nothing will ever happen. "So why should I do this additional control? Why should I encrypt something? Why should I use the Keychain specifically?" That becomes a serious pitfall the moment you start jumping into that. Of course, if you still take security as seriously as you would on an Android device, then a walled-garden approach gives you a lot of benefits in terms of the actual security of the runtime that you're running on — given that the users indeed update, do their thing, play nice, don't want to sit on their beautiful bottoms when they play Pokémon Go, but actually do stuff the way they're supposed to.

Justin Beyer 00:47:21 Okay. So there’s benefits to the walled garden, you know, especially from a security, you know, application developer perspective, you’re doing checks for a jailbroken device. You’re doing checks for, you know, sideloaded applications and, you know, detecting that these things are occurring around your app to be able to say, hold on, wait, I don’t want to run on a jailbroken device because of our threat model that, you know, we don’t like our financial app running on jailbroken devices.

Jeroen Willemsen 00:47:45 Then, of course, there's a problem with that, unfortunately. So the walled garden helps in the sense that if everybody plays nice, we're okay. But the moment people don't play nice, they can also try to hide the jailbreak from detection. So they hide the jailbreak, they make it harder to detect, and do other things — because they still want to play Pokémon Go without moving around; they still want to be able to use their devices the way they want to while having the jailbreak. So if you can assume that people are playing by the rules, then having a walled garden helps. But the moment you have a bigger user base that can use this consumer app for virtually anything, then you know that this doesn't help you for your specific app.

Jeroen Willemsen 00:48:29 It only helps for those who really play by the rules. So, for instance, if you have a corporate device that's locked down, and your corporate users only use it for business, sign a code of conduct, and so on, and really play by the rules — and I haven't seen anybody who did, but let's assume they do — then this helps. But until then, you have to do the same thing as you would in an environment without a walled garden: take security seriously and make sure you implement those controls. Apple will review your app for some of the mistakes, but don't assume the user might not have tricked the device itself into doing certain things — things that make some of the controls you thought you didn't need to implement very key to implement, to prevent that data leakage.

Justin Beyer 00:49:09 Okay, awesome. So again: if you're developing on the iOS side, just don't assume that Apple is fixing everything for you — actually do the threat model for your application and decide what needs to be done. So now, moving a little more into what you both do as your, you know, side project: what products have you created out of the OWASP mobile app sec project?

Jeroen Willemsen 00:49:32 So for this one, um, we need to go back maybe three or four years. Back then I was still working as a penetration tester, and one of my colleagues was Bernhard Mueller. We were doing a lot of mobile application penetration tests, and we could see that there was actually a lot of inconsistency in mobile testing — not only on the pentester side, but also on the developer side. For a lot of the things that we were already discussing today, there was not a really clear direction: is missing SSL pinning now a vulnerability? Should you really disable all the things in your web view? There were a lot of things scattered around the internet in different great blog articles, on GitHub, and in great books, but there was not really an industry standard in that sense.

Jeroen Willemsen 00:50:13 So we have the web application testing guide from OWASP that covers everything on the API side, but there was nothing in terms of the actual endpoint — the mobile client. So there was a lot of inconsistency, and we wanted to drive that change back then, to become an industry standard — which I can say we also achieved in recent years. And while we were doing this project, back then we had the mobile security testing guide, which was just a very verbose technical explanation of how you can test certain aspects of security in your mobile app. But while we were writing this, we could see that we were merging test cases, we were splitting them again, and we had a lot of inconsistency. And so we actually created the MASVS, which is the Mobile Application Security Verification Standard — a very clunky, very long name.

Jeroen Willemsen 00:51:07 So with the MASVS, we focused first — back then, it was like three and a half years back — on getting all the different requirements together. I mean, we've already described quite a lot of them: about data storage, about network communication, SSL pinning, web views and so on. And we grouped all of these requirements together as part of the MASVS. So these are requirements that are OS-agnostic, and this became our baseline, you could say — roughly 80 requirements at the moment. And once we had these requirements, we could actually start with the mobile security testing guide. So these are the main two projects that the two of us, and also others, are working on. We have the MASVS, which summarizes all the different security requirements specifically for mobile apps.

Jeroen Willemsen 00:51:56 We have the mobile security testing guide, which turns these different requirements into technical test cases. And then we have the Excel-sheet checklist, which bridges these things, so that we have a link from the MASVS to the MSTG. And on top of that, we created a few other things. We created the OWASP Crackmes: these are different mobile apps for iOS and for Android where people can really get their hands dirty and try to hack those apps — the Crackmes cover some of the reverse engineering and bypassing of different client-side security controls. Then we have the hacking playground, which is again something where you can get your hands dirty with the different test cases. And I think last year Jeroen also started a side project to map Xamarin, Cordova, Flutter and all of these different things from the MASVS onto these hybrid frameworks, because they are a bit different. I guess we can also put this into the show notes so that people can get an idea of it. This is still a work in progress, but this is where we also want to map not only to native apps, but also to these hybrid frameworks.

Justin Beyer 00:53:06 So essentially the verification standard is kind of the overall piece — that's where, as you mentioned, the leveling comes in, the different levels that you would pick based on your threat model. And then you take the mobile security testing guide to actually test and verify that you're meeting those. And then I think there's also a checklist that you can use to ensure that you're meeting the MASVS requirements.

Jeroen Willemsen 00:53:33 Exactly. Maybe to quickly elaborate, because I hadn't mentioned the different levels of the MASVS. So in the MASVS we have Level 1, and these are baseline security requirements; this means every app should have these security requirements built in. On top of that — and this is now again where we come to the threat model — if you have specific threats, then there's Level 2; this might be, say, pinning. And then we have another category, which is resiliency against reverse engineering. So these are your root detection, jailbreak detection, all of these things.
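As a rough illustration of how those levels map onto a threat model, consider the following hypothetical helper — it is not part of the MASVS, and real scoping needs a full threat model rather than two booleans:

```python
def masvs_profile(handles_sensitive_data: bool, tamper_resistance_needed: bool) -> str:
    """Illustrative only: map two threat-model questions onto the MASVS
    verification profiles discussed above."""
    # L1 is the baseline every app should meet; L2 adds defense in depth
    # for apps with specific threats (e.g. financial apps).
    level = "L2" if handles_sensitive_data else "L1"
    # R is the resiliency category: root/jailbreak detection and other
    # anti-reverse-engineering controls, layered on top of L1 or L2.
    if tamper_resistance_needed:
        level += "+R"
    return level

assert masvs_profile(False, False) == "L1"    # any app: baseline requirements
assert masvs_profile(True, False) == "L2"     # e.g. banking app: extra controls
assert masvs_profile(True, True) == "L2+R"    # plus anti-reverse-engineering
```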

Justin Beyer 00:54:05 I'll have links to all of those things in the show notes. Um, Jeroen, can you actually talk a little bit more about the project that you're working on, mapping the standard over to those hybrid apps?

Jeroen Willemsen 00:54:16 So over the last one and a half years, we got many requests from people like, hey, how do I do this in Xamarin? How does it work in Cordova? And then, funny enough, we actually see growth in the Flutter base as well, where the same questions are being asked. So given that we are not hybrid specialists — because that means you have to dive into that stack very deeply and understand what happens, how it gets cross-compiled and so on — I basically gave a shout-out to the community: hey, what we're going to do is the following. If you think this is important, spend some time with us. There are already Google Drive sheets, and some of them are still quite empty, which shows that there hasn't been enough interest from the security community, at least, to give it traction. So, well, let me just take five seconds of your time, dear listener: if you are a security expert, please go to the show notes and check those Google sheets. And if you really want to help your hybrid-developing colleagues, start filling them in — we would love for you to help us out. Thank you very much.

Justin Beyer 00:55:17 I will definitely include a link to those in the show notes. I also wanted to dive a little bit into the hacking playground. Is there any way that a developer could leverage that to help them understand the verification standard, or what a pen tester is going to be looking for based on the testing guide? To understand: this kind of code is bad, this kind of code is good, this kind of code could go either way, and this is how I would test for that.

Jeroen Willemsen 00:55:43 Yeah, this is exactly the intention. I mean, when we were starting with this whole project, the idea of the hacking playground was to demonstrate the output of our research, in order to show what bad code actually looks like. Pen testers, of course, know how it should or shouldn’t look. And in the MSTG we are bridging that, usually in the static analysis section. So in the MSTG we always have the same structure: we have an overview, where we just explain what the test case is actually all about. Then we have the static analysis part, where we make a deep dive into the different keywords that you should be looking for as a pen tester, but also give a best practice, usually for the developers, on what you should do in order to mitigate the findings.

Jeroen Willemsen 00:56:28 And then there is the dynamic analysis part, meaning what you as a pen tester can do to analyze an app while the app is running. So the hacking playground is, more or less, the thing that you can use as a developer, or maybe also for security trainings, to really illustrate how you can easily hack something, and to demonstrate all the bad practices that you shouldn’t follow. A developer can just get the IPA or the APK, install it, and play around with it. The source code is available on GitHub, obviously, so they can change it in whatever way they want, maybe even fix the issues and see if the fix actually works, or if there’s still some way to bypass it. So it is just a kickstart to get people started and playing around with it.

Justin Beyer 00:57:12 Awesome. So it’s similar to, on the web side, WebGoat or the OWASP Juice Shop hacking examples. That way you can say: this is what bad code looks like, this is what good code looks like, this is how you’d fix it. And from a security training side, you can even take that and sit with the developer and say, hey, let’s go through this code and see what’s wrong with it, why it’s wrong, what would be broken. And then in that developer’s head, they can draw that line and go: I used that function last week when I was writing some code. Maybe I shouldn’t have done it that way.

Jeroen Willemsen 00:57:43 Exactly, exactly. And on top of that, we also have the crackmes, which are purely focused on reverse engineering. For the crackmes, which are part of the MSTG project, we also have very good write-ups, because a lot of great people were solving them and turning those write-ups into blog posts. So even though reverse engineering is usually not a skill set of developers, if you want to dive deeper into this, you can also just go to our crackme page and go through the different blog posts, because they explain in a lot of detail how they were using different kinds of tools, like IDA or Ghidra or whatever, in order to break those apps. And of course Frida, which is one of the favorite tools for reverse engineers nowadays.

Justin Beyer 00:58:25 So essentially, that may not necessarily be something that a developer is going to do out of the box, but at least they can go through these walkthroughs and say: wow, it really is that easy to reverse engineer these applications. Maybe I shouldn’t stick the secret in my shared config space, or in a plain-text config file sitting in the application folder.

Jeroen Willemsen 00:58:44 Yeah, exactly. There should be a lot of aha moments.

Justin Beyer 00:58:48 So just changing directions a little bit. We have the testing guide, we have the verification standard. Theoretically, a developer, or a development manager or project manager, is going to sit with the security team, and they’re going to say: all right, based on the threat model that we’ve created as a group, we’ve agreed that this project is going to have this verification level. It’s going to be a level two, and we need to implement anti-reverse-engineering. We’re going to take all these standards and say: let’s verify against this. How would I actually integrate that into my project? Is that something that I’m going to stick in my CI/CD pipeline? Is that something where it’s going to be manual verification with a checklist, just going through and saying: yes, we’ve done this, yes, we’ve done this, yes, we’ve done this?

Sven Schleier 00:59:40 It really depends on the controls that you’re implementing. So let’s go a little bit back and take the TLS configuration as a starting point. It’s relatively easy to take your application at runtime, or the client with which you’re connecting, and create an integration test where you basically offer something other than what it should have been connecting to, and see if it breaks. For those kinds of things, it’s easier to develop your own tests. It actually goes back to: why should I do that? The moment you have your own tests that you can run on every change, to see if you didn’t break the security controls, that’s the best thing you can do, because the faster the feedback, the better, and the less time you spend on other things. And that comes together with the fact that the moment you are writing tests for it as a developer, you start to really understand what’s happening, because it might well be possible you’ve been making assumptions that aren’t true.
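The integration-test idea just described, offering the client something it should not trust, reduces at its core to a pin comparison. A minimal Python sketch of that comparison follows; the pin value and key bytes are invented for illustration, and a real app would perform this inside the TLS handshake via its HTTP stack (for example, OkHttp’s CertificatePinner on Android):

```python
import base64
import hashlib

# Pins shipped with the app: base64(SHA-256(SPKI)), the format OkHttp
# and HPKP use. The value here is made up for this sketch.
PINNED = {"r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E="}

def spki_pin(spki_der: bytes) -> str:
    """Compute the pin for a server's SubjectPublicKeyInfo bytes."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def connection_allowed(spki_der: bytes) -> bool:
    """The core of a pinning check: reject any key we did not pin."""
    return spki_pin(spki_der) in PINNED

# The integration-test idea from the discussion: present a key the app
# should NOT trust (e.g. from a test proxy) and assert the check fails.
rogue_key = b"\x30\x82\x01\x22fake-spki-from-a-test-proxy"
print(connection_allowed(rogue_key))  # should be False
```

Running this check in a unit or integration test on every change is exactly the fast feedback loop being described.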

Sven Schleier 01:00:29 As we’ve touched on in this podcast, you may have been assuming that TLS works in a certain way, and that you did a great job on the configuration, or that you did something completely wrong. While writing the tests, you might find out something completely different, because now you have to ask yourself: what should happen? What should the outcome be if I set this up this way? By writing your own tests (that is, if you have the time and vision for it, of course), it A) becomes much easier to understand what you’re doing, and B) becomes much stronger in making sure that you don’t get into a messy situation at a later point in time. If you go one level further away, you get things like the BDD mobile security testing framework that Davide Cioccia created, which is an open-source testing framework based on the MSTG. That is beautiful, because you let an external testing suite check whether you implemented certain controls correctly, which means you get a little bit less understanding, or at least you need to spend less time on that.

Sven Schleier 01:01:24 Or you can really focus on getting the business value out, instead of the security work. And then one level further away is just using more of a commercial suite that you don’t need to tailor, but that does some of the control testing. Then you have to make sure that it is actually effective, because you’re not really sure: for some controls it works quite well, for other controls it would be quite hard. If you want to be sure, for instance, that you didn’t store something with the wrong permissions in your application, it’s very easy to have a scanner check whether you wrote something to the SD card or to a public document directory, in terms of secure storage validation. But if you want to validate whether you did your obfuscation correctly, that’s going to be quite tricky, because every time you obfuscate the app, I hope you get a different output; if not, you’re not obfuscating correctly. Which means that now, all of a sudden, you have to start with

Jeroen Willemsen 01:02:20 a very specialized testing suite, or have more of a manual verification to see what’s happening, because testing these types of things will be way harder, of course. So first of all, it depends on how much you want to learn, and how much you want to make sure you get feedback ASAP that you broke a control with some of your changes. And the latter, of course, is the type of control you’re verifying: whether it’s a level-one or level-two requirement, or an anti-reverse-engineering requirement.
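The “easy to scan” side of that split can be sketched as a trivial source scanner. The Android identifiers below are real API names, but the rule list and sample snippet are invented as an illustrative starting point; a real pipeline would use a proper tool (for example MobSF or Android Lint) rather than this toy:

```python
# Source-level patterns a simple secure-storage scanner can flag.
RULES = {
    "MODE_WORLD_READABLE": "file readable by every app on the device",
    "MODE_WORLD_WRITEABLE": "file writable by every app on the device",
    "getExternalStorageDirectory": "write to shared/SD-card storage",
}

def scan(source: str):
    """Return (line_number, pattern, reason) for each hit in source code."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RULES.items():
            if pattern in line:
                hits.append((lineno, pattern, reason))
    return hits

# Hypothetical Java snippet with two classic secure-storage mistakes.
sample = """\
File out = new File(Environment.getExternalStorageDirectory(), "tokens.txt");
prefs = getSharedPreferences("session", MODE_WORLD_READABLE);
"""
for hit in scan(sample):
    print(hit)
```

Checks like these automate cheaply; validating obfuscation or other resiliency controls does not, which is exactly the distinction being drawn.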

Justin Beyer 01:02:45 Kind of going back to the start of your discussion: by building tests, I have to actually understand the underlying protocols of what’s happening, not just benefit from the abstraction and walk away saying, well, it was Apple’s problem to implement TLS correctly, not mine; I don’t care about the config, not my problem. All right, just one last thing before we start to wrap the show. And I kind of mentioned this earlier: you have your project management team looking at something like the MASVS and deciding what level we really sit at, and sitting with the security team and deciding what our threat model is. Is that something you would want to leverage in your early architecture phase? How could I leverage these things in an early architecture phase, whether we look at more formal approaches like SABSA or TOGAF, or a less formal model, like: I draw my UML diagrams and that’s our app architecture, have a nice day. How could I leverage these tools to improve security from an architecture perspective, moving into development?

Jeroen Willemsen 01:03:46 I mean, usually it makes the most sense to use whatever the developers are already using. So when you’re creating your different use cases, then from a security point of view you of course just want to think malicious and make them abuse cases. Meaning you start very, very early and just ask what can actually go wrong: what can go wrong with the confidentiality, with the integrity, with the availability of all the data, of the network communication? What goes wrong if this goes down? You really want to break these use cases into abuse cases, and this is where the MASVS can actually help you. And this is where we come back to threat modeling: what you actually do is make different kinds of threat models depending on your different use cases, to see what is actually working or not working.

Justin Beyer 01:04:34 Awesome. So you’re leveraging the standards to help guide you from a security perspective, to say: here are the abuse cases. Or even as a developer, if you want to get ahead of the curve, to say: okay, these are the things that security is going to look for, so let’s think about how we’re going to implement them and fit them into our project management cycle, be it agile or waterfall, creating those use cases or abuse cases and integrating them into the actual project design and architecture.

Jeroen Willemsen 01:05:04 Exactly. I mean, the thing is that as a developer, you most likely will not be aware of all these different attacks and requirements, but this is exactly where the MASVS comes into the picture. I’m not saying you should read everything end to end, but if you have these kinds of use cases and you’re trying to put your hacker hat on, let’s phrase it like that, then the MASVS can definitely help guide you through. And this is where the levels also come into place: first focus on level one, then on level two, and if you really have everything done and you’re paranoid enough, then go for the resiliency and reverse-engineering controls.

Justin Beyer 01:05:39 Fantastic. Just to wrap up the show, I’ll ask both of you this question, starting with Jeroen: is there anything that I didn’t cover, that I missed, that you think a developer or software engineer should know?

Jeroen Willemsen 01:05:49 Thinking about it a little bit, I guess it’s been a wonderful show so far, thank you very much. I think you covered the most important topics: WebViews, storage, network security, how to use the tooling. Those are all very important. One thing that we always talked around a little bit, of course, is authentication and authorization. It’s not related strictly to mobile, but I think it’s a well-known pitfall to always remind an app developer about: the moment you start talking to an API, there’s no difference between web and mobile in that you have to make sure that nobody can read the messages of somebody else. That doesn’t change when you do mobile. I think that’s the only thing we missed; other than that, I think we’ve covered everything that’s important.
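The point that nobody can read the messages of somebody else is enforced server-side, not in the app. A minimal Python sketch of that ownership check follows; the data model and handler are hypothetical, and a real API would derive the requesting user from the authenticated token rather than a parameter:

```python
from dataclasses import dataclass

@dataclass
class Message:
    id: int
    owner_id: int
    body: str

# Toy in-memory store standing in for the backend database.
DB = {
    1: Message(id=1, owner_id=42, body="hello"),
    2: Message(id=2, owner_id=7, body="someone else's secret"),
}

def get_message(requesting_user_id: int, message_id: int) -> Message:
    """Server-side handler: look the object up, THEN check ownership.
    Skipping the second step is the classic IDOR bug, where any valid
    user can fetch any message by guessing IDs."""
    msg = DB.get(message_id)
    if msg is None or msg.owner_id != requesting_user_id:
        raise PermissionError("not found")  # don't leak existence
    return msg

print(get_message(42, 1).body)  # owner reading their own message
```

The same check must run on every object access, whether the client is a browser or a mobile app, which is exactly why this does not change when you go mobile.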

Justin Beyer 01:06:37 Yeah, and I’ll refer developers back: I know a lot of things are moving towards the OAuth standard for authentication on the API side, and we did an episode with a much more detailed discussion of it, Episode 376. We’ve also done an episode previously, I believe it is 383, with Neil Madden on securing your API, where we discussed a lot more of those web security topics around APIs, what kinds of vulnerabilities exist, and how you’d mitigate them. Sven, same question for you: is there anything that I missed, or that Jeroen didn’t mention, that you would want a software engineer to come away with after listening to this episode?

Sven Schleier 01:07:16 I think we really touched almost all the different areas, so it was quite a packed show. I guess the only thing I would like to add is about security tooling that can actually help you: I would say it’s quite limited. What I mean by that is that the landscape is just very, very diverse, as we heard at the beginning of the show. We have progressive web apps, we have native apps, we have hybrid apps; there are just so many different kinds of apps. And on top of that, we have so many different frameworks: we have React Native, we have Cordova, we have Xamarin. Having security testing tools for all of these is of course quite hard, especially in a big enterprise with a lot of diverse frameworks; the tooling might be quite limited when it comes, for example, to source code review. So I think the most effective way to really build security in is to start early, start with threat modeling, and really identify what you want to focus on. And this is where you can really leverage these projects a lot. Fantastic, thank you.

Justin Beyer 01:08:16 I just want to thank both of you for coming on the show and discussing mobile application security, all the different kinds of issues, and how we can leverage different testing tooling and integrate it into the development lifecycle. This is Justin Beyer for Software Engineering Radio. Thank you for listening.

[End of Audio]

This transcript was automatically generated. To suggest improvements in the text, please contact content@computer.org.


