
SE Radio 573: Varun Singh on Evolution of Internet Protocols

In this episode, Varun Singh, Chief Products and Technology Officer at Daily.co, speaks with host Nikhil Krishna about the 30-year evolution of web protocols. In particular, they explore the impact of protocol ossification, which has supported the Internet's success but also limits flexibility by constraining how protocol suites such as TCP/IP can evolve. Varun points out that although the end-to-end principle emphasizes full flexibility for end hosts, the TCP implementation in the OS kernel, as well as "middle boxes" such as ISPs that block certain types of traffic, contributes to ossification. Further, the development of new protocols is challenging due to the need for backward compatibility with existing protocols. They discuss Google's efforts – and the challenges it has faced – in working to move the HTTP protocol forward. The role of standards bodies such as the IETF and collaboration between industry stakeholders is crucial for the evolution of internet protocols, requiring a balance between maintaining backward compatibility and introducing new protocols such as QUIC and HTTP/3 to address existing constraints and improve internet performance and security. Indeed, QUIC includes features that actively seek to avoid ossification and encourage evolution.

This episode is sponsored by ClickSend.
SE Radio listeners can get a $50 credit by following the link below.



Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Nikhil 00:00:49 Hello and welcome to Software Engineering Radio. Today we are going to talk about the protocols of the internet. I've got the great pleasure of introducing Varun Singh. Varun is the Chief Products and Technology Officer at Daily.co, bringing over two decades of expertise in video communications. He previously co-founded and served as the CEO of callstats.io, a leading performance monitoring service for WebRTC. Varun has made significant contributions to the field, co-authoring 12 IETF and W3C specifications, earning several patents in real-time communications, and receiving special recognition from ACM Multimedia for his PhD thesis, which improved web browser call quality. He earned his PhD in multimedia networks and telecom from Aalto University in Helsinki, Finland. Welcome to the show, Varun. Is there anything that I missed in your introduction that you would like to add?

Varun Singh 00:01:48 No, thank you for having me. It’s a pleasure talking to you and to reflect on the last 30 years of the internet so, I’m really happy to be here.

Nikhil 00:01:56 Yeah, it is an interesting topic. So let’s just dive right into it. I think one of the first topics that we wanted to talk about was this concept of protocol ossification. So what is protocol ossification?

Varun Singh 00:02:09 So protocol ossification means the loss of flexibility in evolving a protocol, and in this case we're talking about a suite of protocols like TCP/IP and HTTP. Although I would say that without protocol ossification we would not have achieved the success that we did in the intervening years. So I can talk a little bit about history and why in retrospect we call it ossification; while it was happening, we didn't consider it ossification.

Nikhil 00:02:41 Sure. But before we get into that, maybe some examples of protocols that have experienced ossification and why this is such a concern.

Varun Singh 00:02:50 It's a good question. So when we talk about networking or the internet, we often talk about the OSI model, which is the seven layers, right? And that's existed for almost 50 years now, since the sixties and seventies, since ARPANET. And then we had TCP/IP; well, first we had IP, then we had TCP and such. And at the beginning of the 90s the web came around, in '94, which makes this year, or actually this month, the 30th anniversary. So when the web came around, there were so many methods of communicating on the internet, and for the web to win we needed ossification to happen; in retrospect, ossification. So what happened was we had IP, IPv4 specifically, and IPv4 addresses had to win.

Varun Singh 00:03:43 So once IP won, then the thing on top of it was TCP. TCP won the game against the other protocols, and then HTTP, which was on top of TCP, proliferated across the internet, right? So today, basically everything is a web service, even things that did not start out as web services. Email, for example: we don't use SMTP or any of those protocols directly today. We just use the web; you go to Gmail or whatever provider you have, open a web browser, and the server side actually still communicates in SMTP, but we as users have stopped using it. HTTP won in some sense. So ossification is that the TCP/IP and HTTP trio got ossified and everything is built on top of that.

Varun Singh 00:04:36 So everything that is built on that HTTP, TCP/IP foundation has to follow all the constraints of those three protocols. Now if we start to build something new which needs to change the way HTTP or TCP or IP behaves, then you're constrained. You can't actually do that. And one good example of that was around 2010, or before 2010, when the Web 2.0 movement started. Google was really concerned about page load times; they were very, very crazy about it. And that was one of the metrics that drove home the idea that we had ossified, because until 2008 people kind of knew that we had ossified but no one actually cared. It's just that when we started to focus on page load time, specifically Google, they realized that they couldn't actually push it beyond a certain point because they were constrained by HTTP in its current format and then TCP below that. And if you wanted to reduce page load times, then we had to make some changes to those protocols.

Nikhil 00:05:48 Right. Now, just to explore that idea a little bit further, let's take the Google example, right? So if you're thinking as a Google engineer, you have the Google web servers and the Google infrastructure that was already state of the art, and then around 2010 they came up with the idea, okay, let's build our own browser, right? So now you have the other end of the thing, which is basically Chrome, and theoretically you could build whatever protocol you wanted into Chrome, but they still were having this fundamental problem. And that's mainly because of the pieces in between. So maybe you could talk a little bit about why that is, I think, the secret reason why this ossification happens.

Varun Singh 00:06:30 That's actually a good observation. So one thing that you learn in networking 101 is the end-to-end principle, which basically says that the two end hosts should decide what they want to do, they should have full flexibility, and once they agree on something, they should be able to do it. And as you said, Google got Chrome, and Chrome gave them the big advantage of owning one side, the client side of the infrastructure, and they owned the server side of the infrastructure themselves. So one big thing is the middle boxes, but another thing to consider is that the TCP protocol itself is actually implemented in the kernel of the end host. So you have the browser, which has HTTP and all these new things.

Varun Singh 00:07:18 It has the TCP stack, which is implemented in the kernel; and TCP is also a little bit quirky. It kind of depends on which side you are, the sender or the receiver. When we talk about the web, most of the data is actually coming from the server side. So there's a little bit more flexibility on the server side to actually say, oh you know, I want to send a lot of data, can I send you a lot of data? And the client can say, yeah, send me more data. So there's some complexity there that they had to work around. But more importantly, the middle boxes. One of the biggest constraints, why we have only TCP and UDP even though there are other parallel protocols, SCTP and such, is that we are all connected to an ISP, the ISPs are connected to something, and then they're connected to the Cloud, right?

Varun Singh 00:08:08 Typically, Amazon's not directly connected to you; there are all these layers of players in between you and, say, Netflix or Google, and they all have infrastructure. And initially, because of security reasons and malicious traffic, a lot of ISPs actually blocked a lot of the traffic, right? They call themselves the bastions of security because they're the first thing that traffic can either ingress or egress through. They can stop what goes outside, and whatever's coming from outside, they can stop from coming inside. And they decided in their infinite wisdom that they would only allow TCP out, because most of the traffic was on HTTP anyway, right? So basically you stop everything else. And if you remember, in the early 2000s there were attacks on SMTP, because a lot of Linux devices and other devices ran SMTP servers locally on your device. You didn't even realize that.

Nikhil 00:09:06 And then we had the spam problem and you could run your own SMTP server and spoof anybody’s email and all that.

Varun Singh 00:09:13 Right, and people were trying to get access through port 53, or whatever ports were open. And if there was a zero-day exploit on some software that no one was using, it was still running because it came with whatever operating system you got. So basically the ISPs have blocked almost everything. So you could not start adding a new protocol on top of IP, which totally allows it; there is an infinite space of protocols that you can add on top, and there are SCTP and DCCP and so many protocols there, but they can never traverse. They may run in your home network, but they would never go outside of your home network onto the web, for example. So the middle boxes were of course a bit of a problem because they were basically filtering the traffic, and Google could not talk to all the ISPs in the world. So there was no economic lever; you'd say, oh, capitalism would win in this case, but no. The people who want the service and the people who are providing the service could not make the middle boxes change

Nikhil 00:10:30 Layer.

Varun Singh 00:10:30 Quickly enough.

Nikhil 00:10:31 So that was one of the main challenges. And also the opportunity: because it was a standard network, you could put anything on it, and everybody spoke HTTP and TCP, so we had this explosion that became the internet. I guess the next thing to talk about would be, okay, so we have TCP and then we have HTTP on top of it. HTTP did evolve, right? So maybe you can start from the evolution of HTTP. I think the most recent one is HTTP/3, but we had various versions of HTTP before that. Can you talk about how that happened and what were some of the features that each of the new versions came up with?

Varun Singh 00:11:13 It's important to reflect on how the evolution happened. So when we had HTTP 1.0, it was just the first attempt at getting text across the internet.

Nikhil 00:11:26 Yeah, this was 1996 or something.

Varun Singh 00:11:28 '94 was the public release, so that's why 30 years.

Nikhil 00:11:32 Yeah, yeah. Oh right. '94.

Varun Singh 00:11:33 Yeah, '93, '94. It was already in CERN since 1991 I think, or even before. So it had evolved as part of CERN's need for sharing data for the physics experiments that they were doing across universities and their network of servers. And then it proliferated outside of that, and when it proliferated, I think the simplest thing to say is that it was very simple. It was a text-based protocol, right? And it was like

Nikhil 00:12:04 Yeah, it was mainly for sharing papers between scientists. It was never meant to be a protocol for business.

Varun Singh 00:12:09 Yeah, you didn't want to send a PDF across the net. So basically you have text, you have some images, which is what makes up most scientific papers, and some level of interactivity. So you could have a table of contents, you could jump between sections, so you had hyperlinking; and because you didn't need a flat file, you could organize data in more meaningful ways: you could have an abridged version, and then you could delve deeper. So yeah, HTTP 1.0. And I think we didn't come up with this H1, H2, H3 nomenclature until HTTP/2, because everyone before that just called it HTTP. And then 1.1 came, and 1.1 was a big change, because before that HTTP was just considered one protocol, and then HTTP became two parts.

Varun Singh 00:12:56 One was the syntax and the other was the semantics. The syntax was basically the network and the format and what goes over the wire and all that stuff. And the semantics were basically the headers, the response codes, what's actually a method; so GET, PUT, all that got separated out. So you could change methods, you could add more methods, you could add more response codes. In HTTP 1.1 the biggest thing that I remember was keep-alive; that came through. And then, your timeline when you said '96, '97, that was the Netscape-era browser wars. That was the timeline when HTTP 1.1 came; basically all the Amazons and the Googles, they all emerged in that period of time.
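
[Editor's note: as an illustration of the text-based "syntax" and the keep-alive feature mentioned above, here is a minimal sketch of a raw HTTP/1.1 request written straight onto a TCP socket with Node.js. The host and path are placeholders, not something from the episode.]

```typescript
// Sketch: HTTP/1.1 is plain text over TCP. The framing below is the
// "syntax"; the method, headers, and status codes are the "semantics".
import * as net from "net";

const socket = net.createConnection({ host: "example.com", port: 80 }, () => {
  // "Connection: keep-alive" is the HTTP/1.1 feature mentioned above:
  // the TCP connection stays open so further requests can reuse it.
  socket.write(
    "GET / HTTP/1.1\r\n" +
      "Host: example.com\r\n" +
      "Connection: keep-alive\r\n" +
      "\r\n" // a blank line terminates the header block
  );
});

socket.on("data", (chunk: Buffer) => process.stdout.write(chunk.toString("utf8")));
```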

Varun Singh 00:13:49 Yeah, super successful. And then you go 10 years forward to the 2005 to 2009 kind of period when Web 2.0 came, and with Web 2.0, especially Google, as we talked about, they were very concerned about page load time. So they started to push, and if you think about page load time, you're just sending a piece of a page, and it could be a blank page, right? Just send a blank page and look at the performance of HTTP 1.1 versus HTTP/2, see how quickly it gets across, and then you can add more semantics on top of it. So between HTTP 1.1 and HTTP/2, the big thing was that before HTTP/2 there were experiments by Google called SPDY. How SPDY worked was that it tried to improve on 1.1, and like I said before, and you also remarked, they had Chrome the browser, so they could change a bunch of things inside their browser to be much faster; they had their own V8 engine and all that stuff to basically accelerate anything at the ends. But also the servers were sending bulk data.

Varun Singh 00:15:02 So if they could just send more data, and somehow change the congestion control algorithm on their side to be more aggressive, they could. So they started to tweak a lot of things; even if the client was still using an archaic kernel TCP, they could still try to optimize from the server side. But they realized that one thing they needed to do was increase the initial window of TCP. When you don't know anything about anything, you send an initial window of two packets. So basically if you have to send something, you start with one or two packets and see

Nikhil 00:15:43 Increase as you go forward.

Varun Singh 00:15:44 Yeah. And so the simplest algorithm is AIMD, additive increase multiplicative decrease, which means that every time you get an ACK you basically add one more; but the initial window was two, so you start with two segments. After a lot of cajoling we increased it to four; they wanted it to be seven, right? So basically you could send four packets right at the front. So they did that. But that would require updates to a lot of middle boxes and the endpoints, and if you make a new kernel today, it would still take two or three years before it becomes available. So SPDY started with the idea that there would be some changes that would happen there. But one of the things that was a big problem with HTTP 1.1 was that you could not do pipelining or multiplexing or parallel transmissions. So one of the things that we got in HTTP/2, or H2 over H1, was that you could parallelize communication. So you could get chunks of multiple things simultaneously in one connection. Yeah.
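
[Editor's note: a toy model of the AIMD behavior described here, additive increase on each acknowledged round trip and multiplicative decrease on loss. The numbers are illustrative; real TCP stacks add slow start, timeouts, and much more.]

```typescript
// Sketch: additive-increase / multiplicative-decrease (AIMD).
// cwnd is the congestion window, counted in segments.
let cwnd = 2; // the small initial window discussed above (later raised)

function onRoundTripAcked(): void {
  cwnd += 1; // additive increase: one more segment per acked window
}

function onLoss(): void {
  cwnd = Math.max(1, Math.floor(cwnd / 2)); // multiplicative decrease
}

// Simulate a few round trips with one loss event in the middle.
[onRoundTripAcked, onRoundTripAcked, onRoundTripAcked, onLoss, onRoundTripAcked]
  .forEach((event) => {
    event();
    console.log(`cwnd = ${cwnd} segments`);
  });
```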

Nikhil 00:16:50 So I think before that, browsers were opening up multiple connections to one website just so that they could load the page faster. I mean the application.

Varun Singh 00:16:59 Yeah, and browsers were also told not to open more than two or three, so there was also a limit on how many

Nikhil 00:17:06 Yeah, I think it was six; you couldn't go beyond that, zero to five, and after that basically you get throttled.

Varun Singh 00:17:13 Exactly.

Nikhil 00:17:14 So HTTP/2 brought in this concept of parallelization, adding streams into one connection. So that was in 2015, right? I think,

Varun Singh 00:17:23 Yeah. So basically they presented the idea in 2010, and I think by 2012 the community sufficiently agreed that we needed to do this, and then two or three more years and everything got standardized. But already in 2014, 2015, YouTube was probably the first one that moved; that was the biggest experiment. And I think already in 2012-13 they had been on SPDY, a variant of H2, for several years at Google. And then when it was declared done in 2015, it was already Facebook, Google, Netflix, a bunch of them. Even my startup callstats moved to H2 in 2016, 2017, so soon after. As a startup it takes a little bit more time, because you first need validation from the market, and you need a toolchain available, before you can actually be ahead of the curve.

Nikhil 00:18:21 So obviously this was an innovation over HTTP, and HTTP/2 came around in 2015. And if you look at the difference between HTTP, which came out like you said in '93, and HTTP/2, which came out in 2015, that's almost 20 years, right? And now, almost immediately after, it seems very fast that HTTP/3 came out. So what was the main problem or challenge with HTTP/2 that pushed them to say, hey, we need to solve this even better, or this is not enough?

Varun Singh 00:18:55 So the SPDY proposal made, or required, a lot of changes, and not everything that SPDY wanted was done, mainly because we were still stuck on TCP, right? So the changes they wanted to make to TCP were not really granted. And what we realized was that we can't make changes to TCP, because even if we did, you would still have to go to BSD, to Microsoft, to so many people who own the operating systems, right? You need a lot of people to agree to make those changes happen. And one of the things in H2 which we kind of solved was head-of-line blocking in HTTP. So you talked about streams, right? But one of the things that people noticed with streams was that you couldn't actually make the servers go faster, because there was still head-of-line blocking happening at the TCP layer below.

Nikhil 00:19:50 Could you explain a little bit about that? Because as I understand it, head-of-line blocking is not entirely an HTTP problem, it's actually a TCP problem, right? It's because TCP is kind of sequenced. So maybe can we go into a little bit of why this is

Varun Singh 00:20:05 Right. So head-of-line blocking means that if you have packet number three waiting to be sent and four is more important, you can't do anything about that, because three is still in the queue, right? And in the HTTP world, it was resources. So the main thing was that you could not download resource two or three until resource one was done.

Nikhil 00:20:28 In parallel

Varun Singh 00:20:28 Until one was done, right. And so you had these connections where you'd say, oh, use connection one to get resource one, resource two. But let's say you had four connections and all of them were stuck with resources 1, 2, 3, 4; you're now blocked, right? At some point a user clicks something and says, actually I want to cancel everything, I don't really care about these.

Nikhil 00:20:49 But what was this thing about blocking, right? Why was it so easy to block a TCP connection? Because I remember back in the day, TCP essentially was one of those things that used to guarantee delivery; it was a guaranteed form of communication, right? If you send a piece of data, it's guaranteed to reach the other side, and it reached there in a particular sequence. So if you send three packets, packet one, packet two, and packet three, it will make sure they are presented to you in that sequence. And I think that's kind of one of the weaknesses also, right? That's one of the reasons why there is this head-of-line blocking thing.

Varun Singh 00:21:29 Yeah. So think of it like this: if you were just sending a text file, basically it would start reading from the front of the file, and all the lines would appear in order. Now if something gets lost; let's say one line is one packet, for simplicity. You have 1500 lines and you'll send 1500 packets, one after the other. The simplest approach, forget TCP for a moment, there are many ways to do this: you would send all 1500 packets as fast as possible, the other side would tell you what got lost, then you would do a second pass and say, oh, lines 4, 15, 16, 17, 18, 21, 22 got lost, I'll resend them. You'll send them right at the end. But until you got to the end, you could not present anything. Right?

Nikhil 00:22:16 Exactly. So that is head-of-line blocking. So even if you have 90% of the file with you from the server, if for whatever reason one line is missing, you actually would not see it.

Varun Singh 00:22:30 What I explained, the simplest example I gave, is how FTP servers worked initially. FTP servers just sent as fast as possible, the receiver reported what was missing, you re-sent everything, and once they had everything, then you could see the file. TCP does it in a slightly different way, because if you have the first three lines of the file, why can't you show three lines of the file, right? But say you got three lines, then you got line 5 and line 6, but you don't know what's in line 4; line 4 is missing. It'll actually stop at that point. Even though you have lines 5, 6, 7, it won't render until it gets line 4, right? And that's the initial window that I was talking about before; the window will grow, and as the window grows, there are more things in the queue. Now if there are more things in the queue, you're kind of blocked. You have 17, 18, and as a server you know that you're sending 19, and you might say, I'm still pending line 4, but it's not needed now; for whatever reason it has some context. Yeah.
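
[Editor's note: the in-order delivery Varun describes can be sketched as a receive buffer that only releases a contiguous prefix. This is a simplification of what a TCP receiver does, not real kernel code.]

```typescript
// Sketch: TCP-style in-order delivery. Lines 5, 6, 7 may have arrived,
// but nothing past the gap at line 4 is released to the application.
class InOrderReceiver {
  private next = 1; // next sequence number the application may see
  private buffer = new Map<number, string>();

  receive(seq: number, line: string): string[] {
    this.buffer.set(seq, line);
    const delivered: string[] = [];
    // Release only the contiguous prefix; one missing packet blocks the rest.
    while (this.buffer.has(this.next)) {
      delivered.push(this.buffer.get(this.next)!);
      this.buffer.delete(this.next);
      this.next += 1;
    }
    return delivered;
  }
}

const rx = new InOrderReceiver();
console.log(rx.receive(1, "line 1")); // ["line 1"]
console.log(rx.receive(2, "line 2")); // ["line 2"]
console.log(rx.receive(5, "line 5")); // [] : blocked, lines 3 and 4 missing
console.log(rx.receive(4, "line 4")); // [] : still blocked on line 3
console.log(rx.receive(3, "line 3")); // ["line 3", "line 4", "line 5"]
```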

Nikhil 00:23:31 Maybe it's one frame in a video, or one pixel of a frame.

Varun Singh 00:23:35 Right, right. And it says, that's part of a different resource, and you've already scrolled down further, so you don't care about something there. Right? So HTTP/2 solved the resource-level head-of-line blocking in some sense; resources could come out of order. Yeah.

Nikhil 00:23:53 But even then, within a resource, if you had head-of-line blocking, that was a problem. And increasingly in the modern web, we've got really large resources, right? You have one file which is a video; it could be a podcast, for example, which is several megabytes.

Varun Singh 00:24:09 Yeah, and streaming is its own separate problem; when we come to that, there are other transports that can help. But you're right, there are resources that are big, or there are too many resources. If you think about Twitter and the feed that you're getting, each tweet could be its own resource, right? And you may have scrolled past a certain resource, so you don't really need it anymore, but it's still in flight. So it's to solve that. And in H3 they made the decision, firstly, that we don't want to use TCP; innovation on TCP is too slow to move forward. And this work had already begun; even before we started working on H3, we started working on QUIC. Immediately in 2014, 2015, people started to talk about, you know, we left some things behind in SPDY, we need to innovate.

Varun Singh 00:25:02 And a couple of working groups started inside the IETF: one was the QUIC working group, and the other was TAPS, transport services. And their idea was that we now have a problem, we have ossification, we can't actually innovate on new protocols. We can't build a new protocol. So the only protocol we have left is UDP, and we can innovate on UDP; and you know, in networking 101 we talk a lot about UDP as connectionless. So basically you move some of the responsibility that you got from TCP out of TCP and onto UDP, and you say, I'll build a layer on top of UDP that will do that same work. And that would be good.

Nikhil 00:25:39 Just in the interest of kind of making sure that you know, the audience is caught up, can you kind of quickly give us a high level overview of the TCP versus UDP? What are the differences and a little bit of history about, was UDP around when TCP was there?

Varun Singh 00:25:54 It was earlier, I think; UDP was a few years earlier. So UDP is what we call connectionless and TCP is called connection-oriented. If you're a beginner, I think that makes a lot of sense, but as you grow into the field, that separation only means that UDP comes with very few services. So if you're building something on top of UDP, you take responsibility for building all the services on top of it, right? And TCP, what it gives you is connection orientation. So what does that mean? It's got a handshake. So it talks to the other side, sets up the connection and says, hey, we have a connection now established, now you can send data over this connection, right? So you have a four-way handshake or a three-way handshake, TCP SYN-ACK, people talk about that. So you do that handshake and then you say, now we have a connection. You can do that on UDP as well; you could create your own handshake.

Nikhil 00:26:50 But you’d have to build it yourself. It’s not part of the protocol, right? Okay.

Varun Singh 00:26:52 So basically UDP is bare bones.

Nikhil 00:26:56 Even more bare bones than TCP, right?

Varun Singh 00:26:58 And it's actually quite difficult to get something built on UDP and get an RFC passed, because there are so many people who will look at it, and because you're not using any of the services that are already available in other protocols, you have to re-solve problems that are solved problems. So a lot of people did not want to build on top of UDP, for obvious reasons, because they would always get into this problem of solving solved problems, right? So when we came to do QUIC, it became very obvious that the solved problems that TCP offered were actually now constraints.

Nikhil 00:27:40 Becoming kind of handcuffs. That they were constraints.

Varun Singh 00:27:43 They were constraints, and you could not innovate beyond them. So basically you had to re-solve the solved problems, knowing what the edge cases were, right? And now the edge cases were big enough that you would say, we have to go solve this.
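
[Editor's note: the connection-oriented versus connectionless distinction discussed above shows up directly in the socket APIs. A minimal Node.js comparison; hosts and ports are placeholders.]

```typescript
import * as dgram from "dgram";
import * as net from "net";

// TCP: connection-oriented. connect() triggers the SYN handshake, and
// only once the connection is established can data flow.
const tcp = net.createConnection({ host: "example.com", port: 7000 }, () => {
  tcp.write("hello over an established connection");
  tcp.end();
});

// UDP: connectionless and bare bones. No handshake, no delivery or
// ordering guarantees; any such service must be built on top (as QUIC does).
const udp = dgram.createSocket("udp4");
udp.send(Buffer.from("hello, maybe"), 7001, "example.com", () => udp.close());
```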

Nikhil 00:27:56 So when we get into HTTP/3 and QUIC, what was the thinking around how to actually tackle this ossification issue? Is there a strategy within QUIC that will prevent this particular thing from happening with UDP?

Varun Singh 00:28:13 Yeah, actually that's also a very good question. So QUIC has versioning. If you think of version numbers, most protocols give them very few bits; if you go down into a protocol, you ask, how many bits do I have, and you have two or four bits. In QUIC you have a lot more bits. So basically there's a version of QUIC that everyone agrees on, let's say version one, and that version of QUIC is what everyone's using today. But you could come up with a new version of QUIC, and firstly you would probably need a browser or an Electron app or whatever; today you have native apps, so you could actually have your own version of QUIC which would be slightly different.

Nikhil 00:28:52 Which would be for your data.

Varun Singh 00:28:53 Yeah. And if you own the server side, if you own both sides you could have your own and you don’t have to talk to anyone, right?
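
[Editor's note: the version field sits near the front of QUIC's long header, one flags byte followed by a 32-bit version, per RFC 9000; 0x00000001 is the standardized QUIC v1. A sketch of reading it:]

```typescript
// Sketch: read the version from a QUIC long-header packet (RFC 9000).
// Byte 0 carries flags; bytes 1 through 4 are the 32-bit version field.
function quicVersion(packet: Uint8Array): number | null {
  const isLongHeader = (packet[0] & 0x80) !== 0; // top bit set: long header
  if (!isLongHeader || packet.length < 5) return null;
  return (
    ((packet[1] << 24) | (packet[2] << 16) | (packet[3] << 8) | packet[4]) >>> 0
  );
}

// 0x00000001 is QUIC v1. A version of 0 marks Version Negotiation packets,
// and reserved "grease" versions keep this field exercised so middle boxes
// cannot ossify around a single value.
const sample = new Uint8Array([0xc0, 0x00, 0x00, 0x00, 0x01]);
console.log(quicVersion(sample)); // 1
```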

Nikhil 00:28:58 But you still have the middleman problem, don't you? Wouldn't the middle boxes basically also prevent you, or they'll say okay,

Varun Singh 00:29:06 Really good question, because one thing that we skipped over, and why it took us another 10 years from SPDY to H3 and why we had to do H2, was that we had to convince everyone in the world to allow traffic over UDP. Because in a lot of networks, even some ISPs, UDP gets dropped; you can't actually send UDP across the internet. Partly it was WebRTC, which we'll talk about in passing: WebRTC emerged as web real-time communication, it already required UDP, and it fought for many years to get UDP across a lot of networks. And even today, UDP gets blocked: if you're a hospital, if you are a bank, if you're in some kind of enterprise environment, it's quite possible UDP gets blocked, and then what you do is you go over port 443. So you masquerade UDP as a TCP transport, and you basically find the nearest hop that you can reach over TCP. So there are relay servers: you connect over TCP, put your UDP packets over TCP, funnel it through that boundary, and on the other side it goes over UDP.

Nikhil 00:30:17 And then on the other side go back to UDP, right. And QUIC is built on this UDP idea, right? So fundamentally, the middle boxes that WebRTC opened up for us basically allow UDP for H3, and they don't actually realize that this is not WebRTC, this is actually…

Varun Singh 00:30:38 Yeah, it's all encrypted, so it doesn't know; basically it's a UDP connection. And now, if your YouTube stops because it's on H3, it can fall back to H2. But I think with time people will realize that the performance of H3 is so much better. Netflix and all these people who have relationships with ISPs, where they find UDP is blocked, will push really hard to get it unblocked, right? Because they would say, hey, if some geography in the world is having poor performance on YouTube or Netflix, people will want better services. And we spend so much time today on more than video calls; we spend more time on HTTP-related services, and if those are actually degraded because they're on H2 instead of H3... let's say VR, you know, there are bigger things coming that will push the boundaries. So I think the door is open; it just took us maybe five additional years because of this.

Nikhil 00:31:37 Right. So yeah, the door is opened. And I think you mentioned one thing, that QUIC is encrypted. So when you think about HTTPS, we have this in-between layer: you have TCP, and then you have the TLS layer in between for doing the securing, and then you have HTTP on top. Is it the same thing with QUIC? Is it UDP, TLS, and QUIC, or is this something that QUIC does better?

Varun Singh 00:32:03 No, the TLS is part of QUIC; basically DTLS, datagram TLS. So the main thing is that HTTP 1.1 did not mandate TLS, and I think until 2010 it was not even very common to see HTTPS on the internet. Unless you were on a credit card site or something, you wouldn't see this lock icon appear, and everyone was told to look for the lock icon. And then there was a big movement towards TLS everywhere, or HTTPS everywhere. And I think it is only this year or last year that they claimed a hundred percent of the web is now on it; you know, 99.9, never 100%, there will always be one site that doesn't do it. But we are at four or five nines now; everyone has HTTPS certificates. And actually it's a very good question, because in 2010, if you asked how I would get an HTTPS certificate, it was very expensive. It was $600. And most people who wanted to run a website, even if they cared a lot about security, if they were not doing credit cards, there was no way they would spend $600 for a certificate. So Let's Encrypt...

Nikhil 00:33:16 And then that was a limited certificate as well. It was not even a wildcard certificate.

Varun Singh 00:33:20 Yeah, you got a very specific one. And if you bought a wildcard, it was even more expensive, you know, 5,000 or 10,000 or something.

Nikhil 00:33:27 Yeah, it was crazy. Okay, cool. So we have encryption in QUIC as part of the TLS thing, and I would imagine that kind of helps also, because it prevents the middle boxes from actually seeing whether this is QUIC,

Varun Singh 00:33:42 It's a losing game. The Ciscos and the Junipers and everyone, I think they've fought really hard to protect their business. But if you think about it, it took us 10 years to navigate around them, and it took them 10 years too; and if you think about the sunset period of someone who bought infrastructure, it's 5 to 10 years, right? So basically they dragged their feet enough to make sure that they were not caught off guard, and they built stuff in parallel. And now they've built in, with AI/ML, all the security features you'd think about. They've been looking at the internet and saying, if everything's encrypted, we will start to do ML-based detection on encrypted traffic; they'll never decrypt, but they'll just say, oh, this type of traffic is X, it behaves a certain way.

Nikhil 00:34:31 Exactly. Traffic shaping and monitoring and stuff like that.

Varun Singh 00:34:33 They'll look at traffic analysis, monitoring and stuff like that, and look at destinations. And actually this is why it's very important for us to move to IPv6 as well, because with v4 there are only so many IP addresses, and if you know an IP address belongs to Amazon, in some cases you actually know what could be running on top of it. So if you get v6, you make this problem much harder, because v6 has almost infinite addresses, billions of addresses. And we are 7 billion people, so we probably need a thousand devices each. And with IoT, everyone has so many devices at home.

Nikhil 00:35:12 Yeah, IPv6 is definitely one of those things that took a long time to get in, and even now it's still slowly, slowly rolling out, right? But it is still part of the IP protocol, correct? So if you're moving to UDP, and assuming that HTTP/3 takes off, does that really matter anymore? Well, I guess IP is basically the addressing, so it doesn't really matter; you still need the addressing.

Varun Singh 00:35:43 Yeah, you need the addressing, you need some kind of addressing.

Nikhil 00:35:45 Exactly. Now we talked a lot about the middle box problem, the ossification problem and the versioning in QUIC and the encryption, which kind of helps solve that ossification a little bit. But what else does QUIC actually have? What else does HTTP/3 bring to the table which makes it more compelling than HTTP/2?

Varun Singh 00:36:06 I think we should disambiguate: HTTP/3 runs on QUIC, but QUIC is a lot more than HTTP/3. HTTP is not the only thing that can be on top of QUIC, because you could easily build an application on top of QUIC which is not HTTP, right? You would get a lot of the same benefits. So you could ask, can we run WebRTC on QUIC, for example? We talk about video streaming, and there are things like DASH and HTTP-based streaming. Do we need HTTP now that we have QUIC? Can we put media over QUIC?

Nikhil 00:36:46 Directly do streaming on QUIC itself?

Varun Singh 00:36:48 Yeah, we had this idea 25 years ago, right? If anyone's used RealPlayer.

Nikhil 00:36:52 Exactly. That was a proprietary.

Varun Singh 00:36:54 Right, a proprietary protocol. You had RTSP, the Real-Time Streaming Protocol, which ran over UDP; that was available. But because of Adobe and Flash, people actually started to use Flash video, and then when Web 2.0 came, Apple, I think it was mainly Apple, said, you know, we will not do Adobe.

Nikhil 00:37:16 Yeah, it's very famous; Steve Jobs said that we won't support Flash.

Varun Singh 00:37:19 Right. And then basically all of those things went away with Flash, and we lost it.

Nikhil 00:37:21 Right, right. Yeah, it vanished overnight.

Varun Singh 00:37:26 But I can talk a little bit about the features, if you're interested. So when we had HTTP/2, there were a lot of features that were part of HTTP/2 which we moved into QUIC, right? So basically now H3 becomes simpler. If you go back to that semantics-and-syntax split, it becomes easier because you move a bunch of things down. So for example the TLS part, which was really important: in H2 it was not mandated but people wanted it, and it moved below, into QUIC. So you don't have to worry about encryption and such; it's already guaranteed below. If you want to do end-to-end encryption in some form, you can do it on top anyway, but it's not really needed; the transport is encrypted. The other thing was zero-RTT connection establishment.

Varun Singh 00:38:12 So basically, if you look at session establishment, we talked earlier about the three-way or four-way handshake. And already in H2 we had a form of zero RTT, but in QUIC we really got it for sure. Basically, if I've talked to you before, can we share some key material? Can I tell you, hi, we talked before, and start sending you data right away? That's the zero-RTT deal. So the first time you meet an unknown host, you have to do all the four-way handshakes and everything; but the second time you meet, can you do it faster? Do you have to do the four-way handshake this time?

Nikhil 00:38:47 Right. So assuming you have a cryptographic key from your previous conversation, you can continue the conversation.

Varun Singh 00:38:53 Yeah, or you have some kind of key material which would say hi, and I can challenge you and ask, are you the same person? And because there was something that was shared before, you could answer that challenge.

Nikhil 00:39:06 Correct. Okay.
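
[Editor's note: conceptually, the resumption flow works like a cached ticket: the first, full handshake yields key material the client can present next time to start sending data immediately. A deliberately simplified sketch, not a real TLS 1.3 implementation; real stacks derive actual keys and guard against replay.]

```typescript
// Sketch: 0-RTT resumption as a ticket cache (shape only, no crypto).
const ticketCache = new Map<string, string>(); // server -> resumption ticket

function connect(server: string, request: string): void {
  const ticket = ticketCache.get(server);
  if (ticket) {
    // 0-RTT: present the old ticket and send data in the very first flight.
    console.log(`-> ${server}: ticket=${ticket}, early data "${request}"`);
  } else {
    // First contact: full handshake round trips; the server then issues
    // a ticket the client can use next time.
    console.log(`-> ${server}: full handshake, then send "${request}"`);
    ticketCache.set(server, "ticket-issued-by-server");
  }
}

connect("video.example", "GET /intro"); // full handshake
connect("video.example", "GET /next");  // 0-RTT on reconnect
```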

Varun Singh 00:39:07 So that was one. The other thing that we got was congestion control. We talked about the initial window, and we talked about AIMD, which is the very simple algorithm no one uses; people use CUBIC and better algorithms. There's a new one that came out from Google called BBR, bottleneck bandwidth and round-trip propagation time. So BBR basically allowed faster shaping of traffic. You got that in QUIC: you got customized congestion control, so if you don't like this one, you can build your own. Again, you don't have to wait for a new version of the kernel. Head-of-line blocking we talked about. Another one was migration: if I'm on my phone on WiFi and then move to 4G and back, I can do connection migration without renegotiating the session.

Nikhil 00:39:56 Oh that’s interesting.

Varun Singh 00:39:57 Because it’s the same zero RTT benefit idea.

Nikhil 00:39:59 You still have the same connection ID, and since that is abstracted out of the whole UDP stuff, it doesn't break. That's interesting, because I think in TCP we have the header over there, right, with the address, so it's kind of tied to your IP address.

Varun Singh 00:40:16 Exactly. So yeah, you can just basically say hi again, right? And the server says, oh, you are that connection; okay, you're just coming from a new address.
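
[Editor's note: the difference can be sketched as the key a server uses to look up a connection. TCP demultiplexes on the address/port tuple, so a new client address means a new connection; QUIC looks up a connection ID carried in the packet itself. Simplified; real QUIC rotates connection IDs and validates the new path.]

```typescript
// Sketch: why a QUIC session survives an address change.
type Session = { user: string };

const tcpSessions = new Map<string, Session>();  // key: ip:port 4-tuple
const quicSessions = new Map<string, Session>(); // key: connection ID

tcpSessions.set("203.0.113.5:51000->server:443", { user: "varun" });
quicSessions.set("cid-7f3a", { user: "varun" });

// The phone moves from WiFi to 4G: the source address changes.
console.log(tcpSessions.get("198.51.100.9:49000->server:443")); // undefined
console.log(quicSessions.get("cid-7f3a")); // session found: migration works
```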

Nikhil 00:40:24 So wouldn't that kind of constitute, from a certain perspective, people being able to track you as you move across different networks?

Varun Singh 00:40:33 I mean that server would anyway be able to track you nonetheless, right?

Nikhil 00:40:37 That is true. Yeah. Yeah. Cool. Yeah. Even when you eventually reconnect, you’d probably be sharing some cookie or session or something, right?

Varun Singh 00:40:45 Actually, you moved it lower, so it becomes harder, because you get this for free in some sense. A lot of the tracking that you were doing inside the application, there is no need for it, right? So if you were doing any kind of tracking for re-establishment or connection, anything in service of providing a better service, not for ads and such (in the case of ads you definitely need it), if you push that down, then as an application you don't care about it. So you're not actually tracking at all, right? The thing that is tracking is not the application.

Nikhil 00:41:16 Yeah, you can still provide a good service even without having to put that ID into your application. It's part of the protocol.

Varun Singh 00:41:23 And because of how head-of-line blocking was solved, what that means is that you have partial reliability. So you can just say, disconnect, I don't want that resource; you can send a stop or a reset on any of these resources. And earlier, reset was only on a connection, so basically if you said reset, you were saying goodbye, but you weren't saying, I don't want this one anymore. Now you can just say, I can move on to the next resource, I can skip.

Nikhil 00:41:52 Oh okay. So let's put it in video terms: suppose I'm streaming a video and I want to pause or stop this video and then move on to another one. It doesn't mean dropping the connection; you can just stop that video stream.

Varun Singh 00:42:07 Yeah. So within a stream it's in order, but because you have multiple streams, you can basically say stop this stream, get the other stream, you can reprioritize. So you get a bunch of these things, and you're right, that's why QUIC is so interesting for a lot of us. Of course, if you're building YouTubes and such and you're already on HTTP/2, you want to use the same kind of semantics and syntax, but now you basically change the underpinnings without changing the application, right? So for that reason H3 is really compelling. But it's also compelling for other things, like WebTransport, for example, which is its own new protocol that basically says: I want to use QUIC, but I don't want to use the HTTP semantics; I want the server to be able to initiate, I want to just send data, stream new data. If you're streaming data, not necessarily video, then in the old world you needed something like a push. But with this, you can just open a connection like a WebSocket and say, boom, I'm going to send you stream data, and I'll open streams and close streams; then you can just use WebTransport for it.
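
[Editor's note: WebTransport is shipping in browsers today; here is a minimal sketch of the socket-like usage described above. The URL is a placeholder for an HTTP/3 server that accepts WebTransport sessions.]

```typescript
// Sketch: opening streams over QUIC from the browser via WebTransport.
async function demo(): Promise<void> {
  const wt = new WebTransport("https://example.com:4433/chat");
  await wt.ready; // QUIC and TLS handshake completed

  // An independent bidirectional stream; other streams on the same
  // connection are not blocked if this one stalls (no cross-stream
  // head-of-line blocking).
  const stream = await wt.createBidirectionalStream();
  const writer = stream.writable.getWriter();
  await writer.write(new TextEncoder().encode("hello over QUIC"));
  await writer.close(); // closes this stream, not the connection

  wt.close(); // say goodbye on the whole connection
}

demo().catch(console.error);
```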

Nikhil 00:43:13 Yeah, cool. That sounds very compelling. So how do you actually see this shift happening? You probably are in a unique position, with your company and your background, to see it. In your rough estimate, how much of my daily life is probably being run over H3 today and I don't even know it? A good bit of it, I imagine.

Varun Singh 00:43:40 Facebook, YouTube; I'm not on Facebook so I can't say anymore, but lots of the big services are, mostly.

Nikhil 00:43:47 Also the Google properties. Yeah. Probably already do H3. Correct.

Varun Singh 00:43:50 But the ones that save money, if you just think of it from a commercial or capitalist perspective: it's going to be YouTube, for example.

Nikhil 00:43:56 Yeah. So on the video side, is it not that compelling? Would I expect Netflix... yeah, that would also be,

Varun Singh 00:44:04 Not yet. But they have plans; I think almost every video provider has plans. But they're also thinking of media over QUIC, because it opens up a little more opportunity. If you think about the ossification on HTTP, and now you have QUIC, do you really want to still be on top of…

Nikhil 00:44:23 On HTTP or just create your own streaming kind of thing.

Varun Singh 00:44:27 On top of QUIC. And you can certainly use QUIC version one, what everyone's using, or you could have your own QUIC. In the case of Netflix it makes sense: you have a Netflix app; more or less all of us use some kind of app, either on an Android device, which is a Fire TV or Chromecast or whatever, or the television itself, which is an Android box.

Nikhil 00:44:47 Yeah, you can't get around the Netflix app because of DRM, right? It's an encrypted stream anyway. And like you said, they have the client and the server on these things.

Varun Singh 00:44:55 I mean, you can do it on the web with EME, the Encrypted Media Extensions; they can load the DRM and make sure that you don't steal it. But yeah, that is true, they own both ends, so it's fairly easy for them to start innovating. And there's a new activity called Media over QUIC that started only recently, last year. And one part of that problem, if you think of RTMP, and I don't know how much our listeners know about it, but it's basically the Real-Time Messaging Protocol: if you have a webcam, if you have a drone, they all have these RTMP endpoints which you can stream into, to that console or wherever.

Varun Singh 00:45:35 If you have a security camera, all of them do RTMP, and all the live services, right? YouTube Live and Twitch, a lot of them allow for RTMP endpoints. So if you don't want to use YouTube directly to create your media, you want to use OBS or something like that, then you can RTMP into YouTube Live or into Twitch. And for example, at Daily we offer this: you are on a WebRTC call, and if you want people to just listen to you in that WebRTC call, let's say an all-hands where you have 10 people and you want hundreds or thousands of people to watch the thing, you don't want them all on the call; you can move them to RTMP. Now RTMP, for example, is an ingest protocol, as they say. So if you're the BBC, or the NFL playing a game, and you want to broadcast that, the stadiums will have these RTMP cameras, and the cameras

Nikhil 00:46:26 Will stream into, then you just subscribe to them.

Varun Singh 00:46:28 Yeah, they'll go to the sports center, and then from the sports center it would go out. So the idea right now is, can we solve the ingest problem with QUIC? And then you basically get new services on top of QUIC: media over QUIC with this, and then you have WebRTC on the other side. So basically you could scale out to hundreds of people or millions of people with WebRTC.

Nikhil 00:46:51 Actually, since we've been talking about WebRTC, I think a useful way to round this out would be to talk about WebRTC and its rise. I remember coming across WebRTC in the late 2000s; it was 2010, 2011, 2012. I read this article and I thought, wow, this is a really cool thing, because at that time doing video calls and video chats was not common, right? It was a tough thing to do, and you had a lot of hacky ways to get streams together, and it was unreliable; things used to crash, browsers used to crash and stuff like that. And WebRTC came at that time. It actually initially came out as a peer-to-peer protocol, and I think it still is underneath; you can technically do a peer-to-peer connection between my browser and somebody else's browser directly.

Varun Singh 00:47:46 I know. It's going peer-to-peer; this call we are on is.

Nikhil 00:47:49 Yeah. So at that time that was a very novel concept, and so I was very interested. Given your background, I was wondering if you can give us a quick overview of how it started, where it's reached, and where you see it fitting into the whole internet story.

Varun Singh 00:48:08 You're correct that video calls have been around since the late eighties, early nineties. There was a program called WIC. The main thing to think about is that the first 20 years of video calls was all about specialized people, specialized hardware, people who solved a problem, and there were specialized companies. The peer-to-peer aspect of WebRTC was actually not that novel, because you had Skype and some other similar technologies which were already peer-to-peer. And this kind of comes down to how the networks evolved. So Napster came out in what, 1999, 2000, 2001, that period? And that was peer-to-peer music sharing, right?

Nikhil 00:48:47 Yeah, but that was also kind of a proprietary protocol. It was not HTTP; I think that's what WebRTC brought.

Varun Singh 00:48:54 Right. So basically there was this peer-to-peer stuff happening on one side, there was all the video stuff happening on another side, and Web 2.0 had started. And Google basically brought Chrome and said, now we have the Chrome browser, which will become Chrome OS; that idea was already seeded. And then they said, what are the services that are missing in the Chrome browser that are available in an operating system, right? Device access is one, and the other one was real-time communications. Those two things. You did not have access to a Bluetooth device or anything; you didn't have mouse pointers. So inside the W3C they spun up a devices group, which basically said, I will find ways to get access to the peripheral devices that you would need from the browser, because the browser was never allowed to access anything local, right?

Nikhil 00:49:48 Yeah, true. It has the sandbox.

Varun Singh 00:49:49 The sandbox. And it could never read; it could only write, right? It could only write a file: you download a file, it could just download it, but nothing else. It could not read anything from the device. So if you think about camera and mic, that's a change in behavior; you're now reading something, right? And you could always render; you had the video and audio tags available. So you could always render, but you could not read. So that was one change. And the other thing was that as soon as you could read something, you would say, why can't I send my video across the internet? And WebRTC was basically not new; people talk about it as a new protocol, but my opinion is that it was a snapshot in time of the protocols that existed at that point.

Varun Singh 00:50:30 And they basically said, here's a set of profiles across the whole stack that we are mandating. So, what codecs: H.264, VP8. Next thing, RTP: what flavors of RTP, what flavors of encryption over RTP? RTP being the Real-time Transport Protocol that carries the audio and video. So if we think of that as a pipeline: we now have standard interfaces, in terms of APIs, to access the device; we have a standard protocol to send the media across the internet. And we talked about NATs and firewalls and middle boxes; if you're really peer-to-peer, you have to be able to open the ports, and there were already protocols for that. So it basically said, we will use this flavor of X, this flavor of Y, this flavor of Z, and this became WebRTC. And it was really pushed by Google.
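
[Editor's note: those standardized interfaces are what browsers expose today. A minimal sketch of the two pieces named here, device access and the peer connection; signaling, which carries the offer to the other side, is left out, as WebRTC deliberately leaves it unspecified.]

```typescript
// Sketch: the two capabilities WebRTC standardized in the browser:
// reading from local devices, and real-time transport between peers.
async function startCall(): Promise<void> {
  // Device access: the "read" permission browsers never had before.
  const media = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  // Peer connection: negotiates codecs (e.g. VP8 or H.264), encryption,
  // and NAT traversal according to the mandated profiles.
  const pc = new RTCPeerConnection();
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // The offer SDP would now travel to the remote peer over some
  // signaling channel of the application's choosing.
  console.log(offer.sdp?.slice(0, 80));
}

startCall().catch(console.error);
```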

Varun Singh 00:51:20 But then when Google put the word out, everyone was excited, right? Ericsson was excited, Microsoft was excited, because they were like, yeah, we'll do this. And we went out and did that. And I think the first spec, the first draft as we call them, came out in 2012-13. And personally I thought we would have a big boom, just like with the web. So very clearly, the web started in 1993, let's say, and by 2000 it was big; in 2012-13 WebRTC started. And the idea, in my opinion, I'm still ___(?), I think it was proven eventually, was that WebRTC would become the dominant protocol for video. And it did; it of course took a pandemic for all of us. Like a hundred percent of our traffic became WebRTC at one point, because everyone was on calls for eight hours, right? So it grew dramatically during the pandemic, but it was already really big before 2020.

Nikhil 00:52:20 Right. So how do you see this going, the road ahead? Do you see WebRTC now moving onto QUIC and getting more and more integrated into that side of things?

Varun Singh 00:52:33 Yeah, so it's a suite, right? WebRTC is a suite, so as long as you don't change the API, we could just set a property in WebRTC which says use QUIC, and then we can go down the pipeline and change a bunch of things. And to the developer it would not matter if it's QUIC or if it's pure RTP over UDP. I think the main change that we are seeing now is what we would call unbundling of the API, because the WebRTC API is very, very opinionated. It was built for a certain set of use cases; as you said, it was peer-to-peer. But then we realized that for larger calls, the other peer can be the server, and then the server can basically... so it's back-to-back WebRTC connections in some form.

Varun Singh 00:53:18 And then you can create whatever topology you want: you can make a star, you can make a handlebar, you know, two big servers and multiple fan-outs on each side, or some kind of mesh. At Daily we have a global mesh network which connects everything, and you will connect to your nearest server, I'll connect to my nearest server, and if there are hundreds of people, they'll all connect and we'll just transfer media across. So QUIC can play an important role here for sure. I think the biggest reason for the unbundling is machine learning; now we need to unbundle in the middle. For example, if I wanted to do voice recognition, or remove the voice, or background noise removal, or blur my background, unless the camera provides these services, I have to run some kind of custom machine learning right at the ends to be able to do that. So there are things like the WebCodecs spec now. So basically you have the device capture and codecs which are not from the camera or from the operating system; inside the browser you can load your own codec, you know, Varun's codec for example, and then you encode with it and decode it on the other side with Varun's decoder, right?

Nikhil 00:54:33 Right. And the codec will basically be built with the blurring, for example, right inside it. So you don't have to depend on a device or an endpoint; as long as you're using that codec, you know that it's going to be ___(?). That's pretty cool. Cool. So we've spent a good hour on this, and I just wanted to look back and ask you: is there something that you feel we have not covered that we should cover? Or do you think that this is a good overview?

Varun Singh 00:54:59 I think we covered quite a lot.

Nikhil 00:55:00 You know, you said 30 years.

Varun Singh 00:55:01 Yeah, we covered quite a lot. We covered three protocol families, you know: HTTP, UDP and TCP, and WebRTC on top of that. So we covered a lot of ground. I hope the listeners find some value in this whole conversation.

Nikhil 00:55:18 I'm sure they will. I at least learned a lot, so you have one person who has learned a lot; thank you for that, Varun. And is there anything that you would want to say about your company?

Varun Singh 00:55:30 I can give you a couple of lines about Daily. Daily is a video API provider. So if you want to build a compelling user experience on top of video, whether that's peer-to-peer, or one person or a few people talking to thousands of people, you can build that with a few lines of code. We give you all levels of abstraction: no code, low code, mid code, full code, and the infrastructure that goes with it. We want the API to be the most flexible API for most use cases. So if there's a video use case that you have in mind and our API cannot solve it, we would love to talk to you.

Nikhil 00:56:15 Cool. Obviously I will be putting your LinkedIn profile and Twitter in the show notes. Is there any other way that you'd like our listeners to contact you?

Varun Singh 00:56:27 I think Twitter or LinkedIn is also a good place these days.

Nikhil 00:56:29 Right, okay.

Varun Singh 00:56:30 Email. I can give you my email and you can share that.

Nikhil 00:56:35 Yeah, sure. I'll make sure that it's in the show notes. Well, in that case, all that remains is for me to thank you, Varun. It has been a great hour; I didn't actually even notice that an hour has passed. So thank you for the time and the insights, and have a great day.

Varun Singh 00:56:50 Yeah, you too.

[End of Audio]
