Better understand how each version of HTTP works.
In the early 1990s, Tim Berners-Lee and his team at CERN defined four fundamental components of the World Wide Web:
- Hypertext document format (HTML)
- Data transfer protocol (HTTP)
- A web browser for viewing hypertext (the first browser, WorldWideWeb)
- A server for transmitting data (an early version of httpd)
HTTP transmits data over the existing TCP/IP protocol stack, with HTTP message bytes residing in the application layer (shown in light blue in the image below).

HTTP/0.9
This was the first draft of HTTP. The only method available was GET; there were no headers or status codes; the only data format available was HTML. Just like in HTTP/1.0 and HTTP/1.1, HTTP messages were structured in ASCII text.
Example of an HTTP/0.9 request:
GET /mypage.html
Example of a response:
<html> A very simple HTML page</html>
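Since an HTTP/0.9 exchange is just one ASCII line sent over a TCP connection, it can be sketched with a raw socket. This is illustrative only: the host below is a placeholder, and most modern servers no longer accept HTTP/0.9 requests.

```python
# Minimal sketch of an HTTP/0.9-style exchange over a raw TCP socket.
# Illustrative only: "example.com" is a placeholder and modern servers
# generally reject HTTP/0.9.
import socket

with socket.create_connection(("example.com", 80)) as sock:
    # The entire request is a single line: method and path, nothing else.
    sock.sendall(b"GET /mypage.html\r\n")
    # The response is the raw HTML document, with no status line or headers.
    print(sock.recv(4096).decode("ascii", errors="replace"))
```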
HTTP/1.0
This version gave HTTP its current message structure, similar to a memorandum (header lines followed by a body), and introduced new methods (HEAD and POST), MIME types, status codes, and protocol versioning.
Example of an HTTP/1.0 request:
GET /mypage.html HTTP/1.0
User-Agent: NCSA_Mosaic/2.0 (Windows 3.1)
Example of a response:
200 OK
Date: Tue, 15 Nov 1994 08:12:31 GMT
Server: CERN/3.0 libwww/2.17
Content-Type: text/html

<HTML>A page with an image <IMG SRC="/myimage.gif"></HTML>

HTTP/1.1
This version was proposed in early 1997, only a few months after its predecessor. The main changes included:
- Persistent TCP connections (keep-alive), saving machine and network resources. In the previous version, a new TCP connection was opened for each request and closed after the response.
- Host header, allowing multiple servers under the same IP.
- Header conventions for encoding, caching, language, and MIME types.
Example of an HTTP/1.1 request:
GET /api/fruit/orange HTTP/1.1
Host: www.fruityvice.com
Accept-Encoding: gzip, deflate, br
Example of a response:
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Sun, 10 Mar 2024 20:44:25 GMT
Transfer-Encoding: chunked
Connection: keep-alive
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-store, must-revalidate, no-cache, max-age=0
Pragma: no-cache
X-Frame-Options: DENY
Content-Type: application/json
Expires: 0
{"name":"橙子","id":2,"family":"芸香科","order":"无患子目","genus":"柑橘属","nutritions":{"calories":43,"fat":0.2,"sugar":8.2,"carbohydrates":8.3,"protein":1.0}}

HTTP/2
In 2015, after years of observation and research on internet performance, HTTP/2 was standardized, based on Google’s SPDY protocol.
Its main differences are: multiplexing of many messages over a single TCP connection; a binary format for messages; and HPACK compression for headers.
In HTTP/1.1, two requests could not be in flight at the same time on one TCP connection: a subsequent request had to wait until the previous response was complete. This is known as head-of-line blocking. In the image below, considering a single TCP connection, request 2 cannot be sent until response 1 arrives.
With HTTP/2, this issue is resolved through streams, where each stream corresponds to a message. Many streams can interleave within a single TCP packet. If a stream cannot send its data for some reason, other streams can take its place in the TCP packet.
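A sketch of this multiplexing from the client side, assuming the third-party httpx library with its optional HTTP/2 support (`pip install "httpx[http2]"`); the URLs are placeholders for any HTTP/2-capable server:

```python
# Several requests issued concurrently; over HTTP/2 they become separate
# streams multiplexed on a single TCP connection.
import asyncio
import httpx

async def main() -> None:
    async with httpx.AsyncClient(http2=True) as client:
        urls = [
            "https://www.example.com/styles.css",  # placeholder URLs
            "https://www.example.com/app.js",
            "https://www.example.com/logo.png",
        ]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code, r.url)

asyncio.run(main())
```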

HTTP/2 streams are divided into frames; each frame carries its frame type, the stream it belongs to, and its length in bytes. In the image below, a colored rectangle represents a TCP packet, and ✉ represents an HTTP/2 frame. The first and third TCP packets carry frames from different streams.

The image below shows how frames exist within TCP packets. Stream 1 carries an HTTP response for a JavaScript file, while stream 2 carries an HTTP response for a CSS file.

HTTP/3
HTTP/3 is built on a new transport protocol called QUIC, created by Google in 2012. QUIC is encapsulated in UDP and proposes:
- Fewer round trips for connection establishment and TLS authentication;
- Greater resilience in handling packet loss;
- Resolution of the head-of-line blocking issues present in TCP and TLS.
HTTP/2 resolved the head-of-line blocking issue at the HTTP level, but the problem also exists in TCP and TLS. TCP treats the data it sends as a continuous, ordered sequence of packets; if any packet is lost, it must be retransmitted to preserve the integrity of the information, and data from subsequent packets cannot be delivered to the application until the lost packet is successfully retransmitted.
The image below visually explains how this occurs in HTTP/2. The second packet only contains a frame for response 1, but its loss delays both response 1 and response 2 – meaning there is no parallelism in this case.

To address head-of-line blocking in TCP, QUIC uses UDP as its transport protocol, because UDP itself offers no delivery guarantees. In QUIC, the responsibility for data integrity moves up to the application layer (to QUIC itself), which allows frames to arrive out of order without blocking unrelated streams.
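A toy model (not real QUIC) of this per-stream independence: each packet carries a stream id and a sequence number within its stream, so every stream is reassembled on its own and a loss on one stream does not hold back another.

```python
# Toy illustration of per-stream reassembly: stream 1 is missing a packet,
# but stream 2 can still be delivered immediately.
from collections import defaultdict

# (stream_id, sequence_within_stream, payload) for the packets that arrived
arrived = [
    (2, 0, "res"), (2, 1, "ponse 2"),  # stream 2 arrived complete
    (1, 1, "ponse 1"),                 # stream 1 lost its first packet
]

streams = defaultdict(dict)
for stream_id, seq, payload in arrived:
    streams[stream_id][seq] = payload

for stream_id, chunks in sorted(streams.items()):
    # A stream is deliverable as soon as *its own* packets are contiguous.
    if all(seq in chunks for seq in range(max(chunks) + 1)):
        print(f"stream {stream_id}: deliver {''.join(chunks[i] for i in sorted(chunks))!r}")
    else:
        print(f"stream {stream_id}: waiting for a retransmission (others are not blocked)")
```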


Head-of-line blocking related to TLS occurs on TCP because encryption is typically applied to the entire message content, meaning all the data (all packets) must be received before decryption can happen. With QUIC, encryption is applied to each QUIC packet individually, so a packet can be decrypted as soon as it arrives, without waiting for all the others.
Using TCP with TLS:
- Input data: `A+B+C`
- Encrypted data: `crypt(A+B+C) = D+E+F`
- Packets: `D, E, F`
- Received: `decrypt(D+E+F) = A+B+C`

Using QUIC with TLS:
- Input data: `A+B+C`
- Encrypted data: `crypt(A) = X, crypt(B) = Y, crypt(C) = Z`
- Packets: `X, Y, Z`
- Received: `decrypt(X) + decrypt(Y) + decrypt(Z) = A+B+C`
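A simplified sketch of this difference in encryption granularity, using the third-party cryptography package (AES-GCM). This is not the actual TLS or QUIC record format, only an illustration of whole-message versus per-packet encryption:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
chunks = [b"A", b"B", b"C"]

# TCP + TLS style: the payload is encrypted as one block, so every byte
# must arrive before any of it can be decrypted.
nonce = os.urandom(12)
record = aead.encrypt(nonce, b"".join(chunks), None)
print(aead.decrypt(nonce, record, None))  # b'ABC', only once fully received

# QUIC style: each packet is encrypted independently and can be decrypted
# as soon as it arrives, in any order.
packets = []
for chunk in chunks:
    n = os.urandom(12)
    packets.append((n, aead.encrypt(n, chunk, None)))

for n, ciphertext in reversed(packets):  # arrival order does not matter
    print(aead.decrypt(n, ciphertext, None))
```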


* TLS 1.2 requires 2 round trips for the encryption handshake, while TLS 1.3 requires only 1 round trip and can optionally use 0-RTT (zero round trip time resumption), which sends data on a resumed connection without waiting for a new handshake. However, 0-RTT can enable replay attacks, making it less secure.
** According to a study, QUIC’s connection ID can be used for fingerprinting, posing a risk to user privacy.
Which version is better?
Currently, the best two versions are HTTP/2 and HTTP/3.
HTTP/3 is designed for unstable connections, such as mobile and satellite networks. To cope with network instability, QUIC's data streams have a high degree of independence from one another, and the protocol is resilient to packet loss. However, HTTP/3 carries performance penalties, mainly because: 1) given the low usage of UDP over the past decades, routers and operating systems are not as optimized for UDP as they are for TCP, making it relatively slower; and 2) QUIC's per-packet encryption requires more cryptographic operations than encrypting the message as a whole over TCP, making it less efficient. Additionally, UDP traffic is restricted on some networks to prevent attacks such as UDP floods and DNS amplification.
On reliable and stable connections, HTTP/2 often provides better performance than HTTP/3.
In general, it is recommended to run compatibility and performance tests to determine which version is most suitable; additionally, a server can accept both HTTP/2 and HTTP/3 connections and let the client decide which version to use.
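One common way this negotiation happens in practice is the Alt-Svc response header: the server answers over HTTP/1.1 or HTTP/2 and advertises HTTP/3 availability, and the client may then switch to QUIC. An illustrative response (values are hypothetical):
HTTP/1.1 200 OK
Content-Type: text/html
Alt-Svc: h3=":443"; ma=86400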
Author: Alexandre Source: alexandrehtrb.github.io
References
- MDN – Evolution of HTTP
- MDN – Connection management in HTTP/1.x
- David Wills – OSI reference model
- Web Performance Calendar – Head-of-Line Blocking in QUIC and HTTP/3: The Details (WebArchive) (recommended reading)
- Wikipedia – QUIC
- Cloudflare – Introducing Zero Round Trip Time Resumption (0-RTT)
- HTTP/3 explained – QUIC connections
- Erik Sy, Christian Burkert, Hannes Federrath, and Mathias Fischer – A QUIC Look at Web Tracking