What Did TCP Do Wrong for HTTP/3.0 to Abandon It?

From HTTP/1.0 all the way through HTTP/2, TCP has always been the foundation of the HTTP protocol, mainly because it provides reliable connections.

However, starting with HTTP/3.0, this situation has changed.

That is because the newly introduced HTTP/3.0 abandons the TCP protocol entirely.

TCP Head-of-Line Blocking

We know that during TCP transmission, data is split into **ordered** packets, which are transmitted over the network to the receiver, and the receiver then reassembles these packets in **order** to complete the data transmission.

However, if one of these packets is lost or arrives out of order, the receiver has to hold the data it has already received and wait for the missing packet to be retransmitted, which blocks everything behind it. This is known as **TCP head-of-line blocking.**

The pipelined persistent connections of HTTP/1.1 allow multiple HTTP requests to reuse the same TCP connection, and browsers typically open up to about six TCP connections per domain under HTTP/1.1. In HTTP/2, only one TCP connection is used per domain.

Therefore, TCP head-of-line blocking hurts HTTP/2 more: HTTP/2's multiplexing places multiple requests on the same TCP connection, so if one lost packet causes head-of-line blocking, every request on that connection is affected.
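
To make this concrete, here is a minimal sketch in Go (not from the original article): several concurrent requests share one http.Client, and against an HTTP/2-capable server Go multiplexes them over a single TCP connection, which is exactly the situation where one lost TCP segment stalls every in-flight request. The URL is only a placeholder for any HTTP/2-capable endpoint.

```go
// Several concurrent requests through one shared http.Client. Against an
// HTTP/2 server, Go multiplexes them over a single TCP connection.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	client := &http.Client{} // one client -> one shared connection pool
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// Placeholder endpoint; any HTTP/2-capable server will do.
			resp, err := client.Get("https://www.example.com/")
			if err != nil {
				fmt.Println("request", n, "failed:", err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reports "HTTP/2.0" when the requests were
			// multiplexed over the same TCP connection.
			fmt.Println("request", n, "served over", resp.Proto)
		}(i)
	}
	wg.Wait()
}
```
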

TCP Handshake Duration

We all know that TCP's reliable connections are established with a three-way handshake and closed with a four-way handshake. The problem is that the three-way handshake takes time.

The TCP three-way handshake requires three messages between the client and the server, which adds roughly an extra 1.5 RTT before application data can flow.

> RTT (Round-Trip Time): network latency, i.e., the time from when the client sends a data packet to the server until it receives the server's response packet. RTT is an important indicator of network performance.

In cases where the client and server are relatively far apart, if one RTT reaches 300-400ms, then the handshake process will seem very “slow”.
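
As a rough illustration, the sketch below (Go, using example.com:443 purely as a placeholder host) shows how the TCP handshake and then the TLS handshake each add round trips before a single byte of HTTP data can be sent:

```go
// Measure how handshake round trips show up as latency: net.Dial returns
// after the TCP three-way handshake completes (roughly one round trip as
// seen by the client), and the TLS handshake adds further round trips on top.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	start := time.Now()

	// TCP three-way handshake: Dial returns once SYN / SYN-ACK / ACK is done.
	conn, err := net.Dial("tcp", "example.com:443")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("TCP connect:        ", time.Since(start))

	// TLS handshake on top of the established TCP connection adds more
	// round trips before any HTTP request can be sent.
	tlsConn := tls.Client(conn, &tls.Config{ServerName: "example.com"})
	if err := tlsConn.Handshake(); err != nil {
		panic(err)
	}
	fmt.Println("TCP + TLS handshake:", time.Since(start))
}
```
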

Upgrading TCP

Given the two issues above, some people have asked: since we know these problems exist, and the solutions are not hard to imagine, why not simply upgrade the TCP protocol itself to solve them?

Actually, this runs into the problem of “protocol rigidity.”

To put it simply, when we browse data on the Internet, the data transmission process is extremely complex.

As we know, using the internet at home has several prerequisites: we need to sign up with an ISP and use a router, and the router is one of the intermediate devices on the data transmission path.

Intermediate devices are auxiliary devices inserted between data terminals and signal-conversion equipment to perform additional functions before modulation or after demodulation. Hubs, switches, wireless access points, routers, modems, communication servers, and so on are all intermediate devices.

In places we cannot see, there are many such intermediate devices, and data has to pass through countless of them before reaching the end user.

If the TCP protocol is to be upgraded, all of these intermediate devices must support the new features. We can replace our own router, but what about the other intermediate devices, especially the larger ones? The cost of replacing them is enormous.

Moreover, aside from intermediate devices, the operating system is also an important factor, because TCP is implemented in the operating system kernel, and operating systems are updated very slowly.

Thus, this issue is referred to as “intermediate device rigidity,” which is also an important reason for “protocol rigidity.” This is a significant limitation on the updating of the TCP protocol.

Therefore, in recent years many new TCP features standardized by the IETF have never been widely deployed or used, simply because they lack widespread support!

QUIC

Thus, HTTP/3.0 has only one path ahead: to abandon TCP.

As a result, HTTP/3.0 is built on the QUIC protocol (Quick UDP Internet Connections), which runs over UDP and uses Diffie-Hellman key exchange in its cryptographic handshake.

The QUIC protocol has the following characteristics:

- **Transport-layer protocol based on UDP:** it uses UDP port numbers to identify specific services on a given host (a minimal client sketch follows this list).

- **Reliability:** although UDP itself is an unreliable transport, QUIC adds mechanisms on top of UDP that provide reliability comparable to TCP, including packet retransmission, congestion control, and pacing.

- **Concurrent, independently ordered byte streams:** a single QUIC stream delivers its data in order, but different streams are independent of one another, so the receiver may see data from multiple streams in a different order than the sender produced it; a lost packet stalls only the stream it belongs to.

- **Fast handshake:** QUIC provides 0-RTT and 1-RTT connection establishment.

- **TLS 1.3:** compared with earlier versions of TLS, TLS 1.3 has many advantages, but the main reason for adopting it is that its handshake requires fewer round trips, which reduces protocol latency.
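
For a feel of what this looks like in practice, here is a minimal HTTP/3 client sketch in Go. It assumes the third-party quic-go library (github.com/quic-go/quic-go/http3); exact type names can vary between library versions, and the URL is just a placeholder for an HTTP/3-capable endpoint.

```go
// A minimal HTTP/3 GET over QUIC (UDP) instead of TCP, using quic-go.
package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/quic-go/quic-go/http3"
)

func main() {
	// http3.RoundTripper speaks HTTP/3 over QUIC rather than TCP.
	rt := &http3.RoundTripper{}
	defer rt.Close()

	client := &http.Client{Transport: rt}
	resp, err := client.Get("https://cloudflare-quic.com/") // placeholder HTTP/3 endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("protocol:", resp.Proto) // expected to report HTTP/3.0
	fmt.Println("bytes received:", len(body))
}
```
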

Obstacles

Above, we have introduced many advantages of QUIC compared to TCP, and it can be said that this protocol is indeed superior to TCP in some respects.

QUIC is built on top of UDP and does not change UDP itself; it only adds to what runs above it. This lets it avoid the problem of intermediate device rigidity, but there are still obstacles to its adoption.

First, many enterprises, carriers, and organizations block or throttle UDP traffic on ports other than 53 (DNS), because such traffic has been abused for attacks in recent years.

In particular, some existing UDP protocols and implementations are vulnerable to amplification attacks, in which an attacker can trick innocent hosts into sending large volumes of traffic at a victim.

Therefore, the transmission of the UDP-based QUIC protocol may be blocked.

Moreover, because UDP has always been positioned as an unreliable transport, many intermediate devices neither support nor optimize it well, so there is still a real possibility of packet loss.

However, regardless of the challenges, the era of HTTP/3.0 will definitely come, and the era of QUIC protocol fully replacing TCP will also arrive. Let us wait and see.
