HTTP 3.0 Completely Abandons TCP: What Went Wrong?

Author | Hollis
Source | Hollis (ID: hollischuang)

From HTTP/1.0 all the way to HTTP/2, no matter how the application-layer protocol improved, TCP remained the foundation of HTTP, chiefly because it provides a reliable connection.

However, starting from HTTP 3.0, this situation has changed.

In the newly released HTTP 3.0, the TCP protocol has been abandoned entirely.

TCP Head-of-Line Blocking

We know that during TCP transmission, data is split into **ordered** packets, which are transmitted over the network to the receiving end, where they are reassembled **in order** into the original data, completing the data transmission.

However, if one of these packets fails to arrive in order, the receiving end keeps the connection open and waits for that packet to arrive (or be retransmitted), which blocks all the data behind it. This is known as **TCP Head-of-Line Blocking.**
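To make this concrete, here is a minimal Python sketch, an illustration only and not real kernel code, of how an in-order receiver behaves: bytes that arrive after a missing segment sit in a buffer and cannot be handed to the application until the gap is filled.

```python
# Toy illustration of in-order TCP reassembly (not real kernel code).
# Segments carry a byte offset; the receiver may only deliver contiguous
# bytes, so a missing segment blocks everything behind it.

class InOrderReceiver:
    def __init__(self):
        self.next_seq = 0      # next byte offset we are allowed to deliver
        self.buffer = {}       # out-of-order segments, keyed by offset
        self.delivered = b""   # what the application has actually received

    def on_segment(self, seq, data):
        self.buffer[seq] = data
        # Deliver as many contiguous segments as possible.
        while self.next_seq in self.buffer:
            chunk = self.buffer.pop(self.next_seq)
            self.delivered += chunk
            self.next_seq += len(chunk)

rx = InOrderReceiver()
rx.on_segment(0, b"AAAA")   # delivered immediately
rx.on_segment(8, b"CCCC")   # segment at offset 4 was lost -> buffered, blocked
print(rx.delivered)         # b'AAAA'  (head-of-line blocking)
rx.on_segment(4, b"BBBB")   # retransmission fills the gap
print(rx.delivered)         # b'AAAABBBBCCCC'
```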

HTTP/1.1’s pipelined persistent connections allow multiple HTTP requests to reuse the same TCP connection, and browsers typically open up to about six TCP connections per domain. In HTTP/2, only one TCP connection is used per domain.

Therefore, in HTTP/2, the impact of TCP head-of-line blocking is greater because HTTP/2’s multiplexing technology means that multiple requests are actually based on the same TCP connection. If one request causes TCP head-of-line blocking, then multiple requests will be affected.
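A rough way to picture this (a toy sketch, not real HTTP/2 code): frames belonging to different streams all ride on the same TCP byte stream, so the loss of one segment stalls frames for every stream queued behind it.

```python
# Sketch: HTTP/2 frames for different streams share ONE TCP byte stream.
# If the segment carrying stream 1's frame is lost, the frames for
# streams 3 and 5 behind it are also stuck, even though they belong
# to unrelated requests.

frames_on_the_wire = [
    ("stream-1", "HEADERS"),   # lost in transit
    ("stream-3", "DATA"),      # arrives, but TCP cannot hand it up yet
    ("stream-5", "DATA"),      # same: blocked behind the gap
]

lost = {0}  # index of the segment that was dropped
deliverable = []
for i, frame in enumerate(frames_on_the_wire):
    if i in lost:
        break                  # TCP stops delivering at the first gap
    deliverable.append(frame)

print(deliverable)             # []  -> every stream waits for the retransmission
```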

TCP Handshake Duration

We all know that TCP’s reliable connection is established and released through the three-way handshake and four-way teardown. The problem is that the three-way handshake takes time.

The TCP three-way handshake requires three messages between the client and server, which adds roughly 1.5 RTT of overhead before data can flow.

> RTT: Round Trip Time. It refers to the time it takes for a request to be sent from a client browser to the server and then back from the server as a response. RTT is an important indicator of network performance.

In cases where the client and server are far apart, if one RTT reaches 300-400ms, then the handshake process will seem very “slow”.
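You can get a rough feel for this cost yourself: `socket.connect()` only returns once the handshake completes, so timing it approximates one round trip. A quick sketch, assuming `example.com:443` is reachable from your network (the hostname is just an example, and the measurement also includes DNS lookup time):

```python
import socket
import time

# Rough measurement of TCP handshake cost: connect() returns only after
# SYN -> SYN-ACK -> ACK, so its duration is roughly one RTT.
# Note: create_connection() also resolves the hostname first, so DNS
# lookup time is included in this measurement.
host, port = "example.com", 443   # example endpoint, replace with your own

start = time.perf_counter()
with socket.create_connection((host, port), timeout=5):
    elapsed = time.perf_counter() - start

print(f"TCP handshake to {host}:{port} took {elapsed * 1000:.1f} ms")
```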

Upgrading TCP

Given the two issues above, some people ask: since we know TCP has these problems, and the fixes are not hard to imagine, why not simply upgrade the protocol itself?

In fact, this runs into the problem of “protocol rigidity” (often called protocol ossification).

To put it simply, when we browse data on the Internet, the data transmission process is extremely complex.

We know that getting online at home requires a few things: first, we have to sign up for service with an ISP, and we use a router, which is one of the intermediary devices along the data transmission path.

An intermediary device is an auxiliary device inserted between the data terminal equipment and the signal conversion equipment, performing additional functions before modulation or after demodulation. Hubs, switches, wireless access points, routers, modems, communication servers, and so on are all intermediary devices.

In places we cannot see, there are many such intermediary devices, and a piece of data must pass through countless of them before it reaches the end user.

If the TCP protocol is to be upgraded, all of these intermediary devices must support the new features. We can replace our own router, but what about the other intermediary devices, especially the large ones? The cost of replacing them is enormous.

Moreover, beyond intermediary devices, the operating system is another important factor, because TCP is implemented in the operating system kernel, and kernel updates are rolled out slowly.

This issue is referred to as “intermediary device rigidity,” an important contributor to “protocol rigidity” and a major reason why the TCP protocol is so hard to update.

Therefore, in recent years, many new features of TCP standardized by the IETF have not been widely deployed or used due to a lack of broad support!

QUIC

So, the only way forward for HTTP/3.0 is to abandon TCP.

Thus, HTTP/3.0 is built on the QUIC protocol (Quick UDP Internet Connections), which runs over UDP and uses encryption whose key exchange is based on the Diffie-Hellman algorithm.

The QUIC protocol has the following features:

Transport-layer protocol based on UDP: it uses UDP port numbers to identify a specific service on a given machine.

Reliability: Although UDP is an unreliable transport protocol, QUIC has made some modifications based on UDP to provide reliability similar to TCP. It offers packet retransmission, congestion control, pacing, and other features present in TCP.

Multiple ordered, concurrent byte streams: a single QUIC stream guarantees ordered delivery, but different streams are independent of one another, so the receiver may see data on different streams in a different order than the sender issued it (see the sketch after this list).

Fast handshake: QUIC provides 0-RTT and 1-RTT connection establishment.

Uses TLS 1.3 transport layer security protocol: Compared to earlier versions of TLS, TLS 1.3 has many advantages, but the main reason for using it is that it requires fewer round trips during the handshake, thereby reducing protocol latency.
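The stream handling is the key to how QUIC sidesteps TCP’s head-of-line blocking. Here is a minimal sketch, an illustration only and not the real QUIC state machine: each stream keeps its own reassembly buffer, so a gap on one stream does not hold back data on the others.

```python
# Illustration only: per-stream reassembly in the spirit of QUIC
# (not a real implementation). Each stream has its own "next offset"
# and buffer, so loss on stream 1 does not block delivery on stream 3.

from collections import defaultdict

class PerStreamReceiver:
    def __init__(self):
        self.next_off = defaultdict(int)      # per-stream next deliverable offset
        self.pending = defaultdict(dict)      # per-stream out-of-order data
        self.delivered = defaultdict(bytes)   # what each stream's reader has received

    def on_frame(self, stream_id, offset, data):
        self.pending[stream_id][offset] = data
        # Deliver contiguous data for this stream only.
        while self.next_off[stream_id] in self.pending[stream_id]:
            chunk = self.pending[stream_id].pop(self.next_off[stream_id])
            self.delivered[stream_id] += chunk
            self.next_off[stream_id] += len(chunk)

rx = PerStreamReceiver()
rx.on_frame(1, 4, b"tail")   # stream 1 is missing its first 4 bytes -> blocked
rx.on_frame(3, 0, b"hello")  # stream 3 is unaffected and delivered right away
print(rx.delivered[3])       # b'hello'
print(rx.delivered[1])       # b''  (only stream 1 waits for its retransmission)
```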

Obstacles

Above, we introduced many advantages of QUIC compared to TCP, and it can be said that this protocol is indeed superior to TCP.

Because QUIC is built on top of UDP without changing UDP itself, only adding enhancements above it, it sidesteps the problem of intermediary device rigidity. Even so, it still faces obstacles to adoption.

First, many companies, operators, and organizations intercept or limit UDP traffic outside of port 53 (DNS), as this traffic has recently been abused for attacks.

In particular, some existing UDP protocols and implementations are vulnerable to amplification attacks, where attackers can control innocent hosts to send large amounts of traffic to victims.

Therefore, the transmission of the UDP-based QUIC protocol may be blocked.

Additionally, because UDP has always been positioned as an unreliable transport, many intermediary devices offer little support or optimization for it, so there is still a higher chance of UDP packets being dropped along the path.

Regardless, the era of HTTP/3.0 will certainly come, and so will the era of the QUIC protocol replacing TCP. Let us wait and see.
