What Went Wrong with TCP in HTTP 3.0?

From HTTP/1.0 through HTTP/2, no matter how the application-layer protocol has evolved, TCP has always been the foundation of HTTP, mainly because it provides a reliable connection.

However, starting with HTTP/3.0, this situation has changed.

In the newly released HTTP/3.0, the TCP protocol has been abandoned entirely.

TCP Head-of-Line Blocking

We know that during TCP transmission, data is split into **ordered** packets, which are transmitted over the network to the receiving end, where they are then reassembled in **order** into the original data, thus completing the data transmission.

However, if any one of these packets is lost or delayed, the receiving end must hold back every later packet while it waits for the missing one to arrive (or be retransmitted), which blocks all subsequent data. This is known as **TCP head-of-line blocking.**
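The reassembly rule behind this can be sketched in a few lines of Python. This is an illustrative model, not real TCP internals: segments that arrive ahead of a gap sit in a buffer, and nothing behind the gap reaches the application until the gap is filled.

```python
# Illustrative model of a TCP receiver's in-order delivery (not real TCP code).
# Segments ahead of a gap are buffered; one missing segment blocks everything
# behind it -- this is head-of-line blocking.

def deliver_in_order(segments):
    """segments: list of (seq, data) in arrival order; seq is a byte offset."""
    buffer = {}          # out-of-order segments waiting for the gap to fill
    next_seq = 0         # next byte offset the application may read
    delivered = []
    for seq, data in segments:
        buffer[seq] = data
        # Drain every contiguous segment starting at next_seq.
        while next_seq in buffer:
            chunk = buffer.pop(next_seq)
            delivered.append(chunk)
            next_seq += len(chunk)
    return delivered

# The segment at offset 3 ("bbb") arrives first, but nothing is delivered
# until the gap at offset 0 is filled by "aaa".
print(deliver_in_order([(3, "bbb"), (0, "aaa"), (6, "ccc")]))
```

If the segment at offset 0 never arrives, `deliver_in_order` returns an empty list no matter how much later data is waiting, which is exactly the blocking behavior described above.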

HTTP/1.1’s pipelined persistent connections allow multiple HTTP requests to reuse the same TCP connection, and browsers typically open up to six TCP connections per domain. In HTTP/2, however, only one TCP connection is used for the same domain.

Thus, in HTTP/2 the impact of TCP head-of-line blocking is more significant, because HTTP/2’s multiplexing means that all requests share the same TCP connection. If one lost packet stalls that connection, every in-flight request is affected.

TCP Handshake Duration

We all know that TCP’s reliable connection is achieved through a three-way handshake to open and a four-way handshake to close. The problem is that the three-way handshake takes time.

The TCP three-way handshake requires three messages between the client and server, consuming an extra 1.5 RTT before the connection is fully established (in practice the client can start sending data after 1 RTT, since the request can ride along with the final ACK).

> RTT: Round Trip Time. It refers to the time taken for a request to send a data packet from the client browser to the server and then receive a response packet from the server. RTT is an important indicator of network performance.

In cases where the client and server are far apart, a single RTT can reach 300–400 ms, and the handshake will feel very “slow”.
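A back-of-the-envelope comparison makes the cost concrete. The numbers below are illustrative assumptions (a 300 ms RTT, a TLS 1.2 handshake taking 2 RTT), not measurements:

```python
# Rough latency arithmetic for fetching one resource (illustrative only).
RTT = 300  # ms, assumed round-trip time between client and server

# Classic stack: TCP handshake (client may send data after 1 RTT, riding
# on the final ACK) + TLS 1.2 handshake (2 RTT) + request/response (1 RTT).
tcp_tls12 = 1 * RTT + 2 * RTT + 1 * RTT

# QUIC 1-RTT: transport setup and the TLS 1.3 handshake are combined
# into a single round trip, then 1 RTT for the request/response.
quic_1rtt = 1 * RTT + 1 * RTT

# QUIC 0-RTT: on resumption, the request rides in the very first flight.
quic_0rtt = 1 * RTT

print(tcp_tls12, quic_1rtt, quic_0rtt)  # 1200 600 300 (ms)
```

Even under these generous assumptions, the classic stack spends four round trips where QUIC spends one or two, and the gap widens as the RTT grows.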

Upgrading TCP

Given the two issues above, some have asked: since we know TCP has these problems, and the solutions are not hard to conceive, why can’t we upgrade the protocol itself to fix them?

Actually, this runs into the problem of “protocol rigidity” (often called protocol ossification).

To explain, when we browse data on the Internet, the process of data transmission is extremely complex.

We know that to use the internet at home, a few things must be in place: we sign up with a service provider and connect through a router, which is one of the intermediary devices along the data’s path.

Intermediary devices are auxiliary devices inserted between the data terminal equipment and the signal conversion equipment, performing additional functions before modulation or after demodulation. Hubs, switches, wireless access points, routers, firewalls, communication servers, and so on are all intermediary devices (often called middleboxes).

In places we cannot see, there are many such intermediary devices, and data must pass through countless of them before reaching the end user.

If the TCP protocol is to be upgraded, all of these intermediary devices must support the new features. We can replace our own router, but what about all the other intermediary devices, especially the large carrier-grade ones? The cost of replacing them is enormous.

Moreover, beyond intermediary devices, the operating system is also a factor: TCP is implemented in the OS kernel, and OS updates are often very slow to roll out.

This problem is referred to as “intermediary device rigidity,” which is a major contributor to “protocol rigidity” and a key reason the TCP protocol is so hard to update.

As a result, many new TCP features standardized by the IETF in recent years have never been widely deployed, simply for lack of broad support.

QUIC

Thus, the only option facing HTTP/3.0 is to abandon TCP.

Consequently, HTTP/3.0 adopts the QUIC protocol (Quick UDP Internet Connections), which is built on UDP and carries its own cryptographic handshake (TLS 1.3, whose key exchange is based on Diffie-Hellman).

The QUIC protocol has the following characteristics:

**A transport-layer protocol built on UDP:** it uses UDP port numbers to identify specific services on a designated machine.

**Reliability:** although UDP is an unreliable transport protocol, QUIC layers modifications on top of it to provide TCP-like reliability. It offers packet retransmission, congestion control, pacing of transmission, and other features found in TCP.
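As a toy model of that idea (plain Python, not the actual QUIC loss-recovery machinery), reliability over a lossy channel boils down to numbering packets and retransmitting each one until it is acknowledged:

```python
# Toy sketch (not real QUIC) of layering reliability on an unreliable channel:
# number each packet and resend until it gets through.

def make_lossy_channel(drop_pattern):
    """Pretend network that drops a packet wherever drop_pattern says True."""
    pattern = iter(drop_pattern)
    def send(packet):
        return None if next(pattern, False) else packet
    return send

def send_reliably(data_packets, channel, max_tries=5):
    received = []
    for seq, payload in enumerate(data_packets):
        for _ in range(max_tries):        # retransmit until "ACKed"
            delivered = channel((seq, payload))
            if delivered is not None:     # receiver got it -> ACK the sender
                received.append(delivered[1])
                break
    return received

# The channel drops the first two transmissions of "a"; retransmission
# still gets every payload through, in order.
channel = make_lossy_channel([True, True, False])
print(send_reliably(["a", "b", "c"], channel))
```

Real QUIC is far more sophisticated (cumulative ACK ranges, loss detection timers, congestion control), but the core contract is the same: the unreliable datagram layer below, and delivery guarantees rebuilt above it.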

**Concurrent, independently ordered byte streams:** a single QUIC data stream guarantees ordered delivery, but multiple streams are independent of one another. The receiver may finish streams in a different order than the sender started them, and a lost packet on one stream does not block the others.
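This per-stream ordering can be sketched by giving each stream its own reassembly buffer (again an illustrative model, not real QUIC): a gap in one stream leaves the other streams fully deliverable.

```python
# Illustrative model of QUIC-style independent streams (not real QUIC code).
# Each stream keeps its own reassembly buffer, so a gap in stream 2
# does not block delivery on stream 1.

def deliver_streams(packets):
    """packets: list of (stream_id, seq, data) in arrival order."""
    buffers = {}   # stream_id -> {seq: data} waiting for reassembly
    next_seq = {}  # stream_id -> next deliverable byte offset
    out = []       # (stream_id, data) in actual delivery order
    for sid, seq, data in packets:
        buffers.setdefault(sid, {})[seq] = data
        next_seq.setdefault(sid, 0)
        # Drain contiguous data for this stream only.
        while next_seq[sid] in buffers[sid]:
            chunk = buffers[sid].pop(next_seq[sid])
            out.append((sid, chunk))
            next_seq[sid] += len(chunk)
    return out

# Stream 2 is missing its first packet, yet stream 1 is delivered in full.
print(deliver_streams([(2, 3, "yyy"), (1, 0, "aa"), (1, 2, "bb")]))
```

Compare this with the single-buffer TCP model earlier: there, one gap stalled everything; here, the stall is confined to the stream that actually lost data.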

**Fast handshake:** QUIC provides 0-RTT and 1-RTT connection establishment.

**Uses the TLS 1.3 transport security protocol:** compared to earlier versions of TLS, TLS 1.3 has many advantages, but the main reason for using it is that its handshake requires fewer round trips, thereby reducing protocol latency.

Obstacles

Above, we introduced many advantages of QUIC compared to TCP, and it can be said that this protocol is indeed superior to TCP in some respects.

Because QUIC is layered on UDP without changing the UDP protocol itself, only enhancing what runs on top of it, it avoids the problem of intermediary device rigidity. Even so, there are still obstacles to its adoption.

First, many enterprises, carriers, and organizations block or throttle UDP traffic outside of port 53 (DNS), because such traffic has often been abused for attacks.

In particular, some existing UDP protocols and implementations are vulnerable to amplification attacks, in which an attacker spoofs the victim’s address in small requests to innocent servers, which then flood the victim with much larger responses.

Therefore, the transmission of the UDP-based QUIC protocol may be subject to blocking.

Additionally, since UDP has always been positioned as an unreliable transport, many intermediary devices provide little support or optimization for it, so packet loss remains more likely.

However, no matter what, the era of HTTP/3.0 will definitely come, and the era of QUIC protocol fully replacing TCP will also arrive. Let us wait and see.
