A Comprehensive Summary of TCP

/ Network Layered Structure /
Consider the simplest case: communication between two hosts. Here, a single network cable is enough to connect them, as long as both ends agree on the hardware interface specifications, such as using the same connector, signal voltage, and operating frequency. This layer is the physical layer, and these specifications are the physical layer protocol.
Of course, we are not satisfied with connecting just two computers, so we can use a switch to connect many of them into one network.
This configuration is called a local area network (LAN); Ethernet is the most common kind of LAN. In this network, we need to identify each machine so that we can specify which one to communicate with. This identifier is the hardware address, or MAC address. The hardware address is assigned when the network interface is manufactured, making it globally unique. Within a LAN, when we need to communicate with another machine, we only need to know its hardware address, and the switch will deliver our frames to the corresponding machine.
Here we can abstract away the underlying network cable interface and create a new layer above the physical layer, which is the data link layer.
We are still not satisfied with the scale of a single local area network and want to connect all local area networks together. For this, we use routers to connect one local area network to another.
However, if we continue to use hardware addresses as the unique identifiers for communication objects, when the network scale becomes larger, it becomes impractical to remember all the machines’ hardware addresses. At the same time, a network object may frequently change devices, making the maintenance of the hardware address table even more complex. Here, a new address is used to mark a network object: IP address.
To understand IP addresses, let’s use a simple example of sending a letter.
I live in Beijing, and my friend A lives in Shanghai. I want to write a letter to my friend A:
  1. After writing the letter, I will write my friend A’s address on it and drop it off at the Beijing post office (attach the target IP address to the information and send it to the router).
  2. The post office will help me transport the letter to the local post office in Shanghai (the information will be routed to the router of the target IP local area network).
  3. The local router in Shanghai will help me deliver the letter to my friend A (communication within the local area network).
Therefore, an IP address is a network delivery address (like friend A's street address): we only need to know the destination IP address, and the routers can deliver the message for us. Within a local area network, a mapping between IP addresses and MAC addresses is maintained dynamically (this is what ARP does), so the destination IP address can be resolved to the target machine's MAC address for delivery.
Thus, we do not need to manage how to select machines at the lower level; we only need to know the IP address to communicate with our target. This layer is the network layer. The core function of the network layer is to provide logical communication between hosts. In this way, all hosts in the network are logically connected, and the upper layer only needs to provide the target IP address and data, and the network layer can send the message to the corresponding host.
A host can have multiple processes, and different processes can carry on different network conversations, such as playing a game with friends while chatting with a girlfriend on WeChat. My phone communicates with two different machines simultaneously. So when my phone receives data, how does it distinguish WeChat data from game data? For this, we must add another layer above the network layer: the transport layer.
The transport layer uses sockets to further split network traffic, allowing different application processes to make network requests independently without interfering with each other. This is the essence of the transport layer: providing logical communication between processes. The two processes may be on different hosts or on the same host; this is why, in Android, socket communication is also a form of inter-process communication.
Now that application processes on different machines can communicate independently, we can develop various types of applications on computer networks, such as web pages’ HTTP, file transfer FTP, etc. This layer is called the application layer.
The application layer can be further divided into a presentation layer and a session layer, but their essential role is unchanged: fulfilling specific business requirements. Unlike the four layers below, they are not mandatory and are often folded into the application layer.
Finally, a brief summary of the layered structure:
  1. The lowest level is the physical layer, responsible for direct communication between two machines through hardware;
  2. The data link layer uses hardware addresses for addressing within the local area network, achieving local area network communication;
  3. The network layer achieves logical communication between hosts through abstracting IP addresses;
  4. The transport layer, based on the network layer, splits data to achieve independent network communication for application processes;
  5. The application layer, based on the transport layer, develops various functions based on specific requirements.
It is important to note that layering is not a physical separation but a logical one. By encapsulating the underlying logic, upper-layer development can directly rely on lower-layer functions without concerning themselves with specific implementations, simplifying development.
This layered approach is also a design pattern: the chain of responsibility, which encapsulates different responsibilities independently, making development and maintenance easier. The interceptor mechanism in OkHttp is one application of this chain-of-responsibility pattern.
/ Transport Layer /
This article mainly explains TCP, and here we need to add some knowledge about the transport layer.

Essence: Provide Process Communication

The network layer does not know which process a data packet belongs to; it only handles the reception and transmission of packets. The transport layer gathers data from different processes and hands it down to the network layer, and splits the data coming up from the network layer among the different processes. Gathering data downward to the network layer is called multiplexing; splitting it upward is called demultiplexing.
The performance of the transport layer is limited by the network layer. This is easy to understand, as the network layer is the transport layer’s underlying support. Therefore, the transport layer cannot decide its bandwidth, delay, or other upper limits. However, more features can be developed based on the network layer, such as reliable transmission. The network layer is only responsible for trying to send data packets from one end to the other without guaranteeing that the data will arrive intact.

Underlying Implementation: Socket

Earlier, we said that the essence of the transport layer is to provide independent communication between processes, but the underlying implementation is independent communication between sockets. At the network layer, an IP address is the logical address of a host; at the transport layer, a socket is the logical address of a process, and a process can of course own multiple sockets. An application process listens on a socket to receive the messages sent to that socket.
Sockets are not tangible objects but rather an abstraction created by the transport layer. The transport layer introduces the concept of ports to differentiate between different sockets. A port can be understood as a network communication port on a host; each port has a port number, and the number of ports is determined by the transport layer protocol.
Different transport layer protocols define sockets in different ways. In the UDP protocol, a socket is defined using the target IP and target port number; in TCP, it is defined using the target IP, target port number, source IP, and source port number. We only need to attach this information to the header of the transport layer message, and the target host will know which socket we want to send to, and the corresponding listening process for that socket will receive the information.
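The difference between the two socket definitions can be sketched as a lookup table. This is a toy illustration in Python, not a real network stack; the function names, the dictionaries, and all addresses are made up:

```python
# Toy demultiplexing sketch: how a host could map incoming segments to
# sockets. UDP keys on the destination only; TCP keys on the full 4-tuple.

def udp_demux(table, segment):
    # UDP sockets are identified by destination IP and port only.
    return table.get((segment["dst_ip"], segment["dst_port"]))

def tcp_demux(table, segment):
    # TCP sockets are identified by the full 4-tuple, so two senders
    # talking to the same server port map to different connections.
    key = (segment["src_ip"], segment["src_port"],
           segment["dst_ip"], segment["dst_port"])
    return table.get(key)

udp_table = {("10.0.0.2", 53): "dns-process"}
tcp_table = {
    ("10.0.0.7", 51000, "10.0.0.2", 80): "conn-A",
    ("10.0.0.9", 51000, "10.0.0.2", 80): "conn-B",
}

seg1 = {"src_ip": "10.0.0.7", "src_port": 51000,
        "dst_ip": "10.0.0.2", "dst_port": 80}
seg2 = {"src_ip": "10.0.0.9", "src_port": 51000,
        "dst_ip": "10.0.0.2", "dst_port": 80}
print(tcp_demux(tcp_table, seg1))  # conn-A
print(tcp_demux(tcp_table, seg2))  # conn-B
```

Note that under the UDP scheme, seg1 and seg2 would land on the same socket; only the 4-tuple keeps the two conversations apart.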

Transport Layer Protocols

The transport layer protocols are the well-known TCP and UDP. UDP is the most streamlined transport layer protocol, implementing little beyond inter-process communication; TCP additionally implements reliable transmission, flow control, congestion control, and connection orientation on top of basic process-to-process delivery, and is accordingly much more complex.
Of course, in addition to this, there are many other excellent transport layer protocols, but the most widely used ones are TCP and UDP. UDP will also be summarized later.
/ TCP Protocol Header /
When TCP builds a segment, it attaches a TCP header to the data handed down from the application layer; this header carries TCP's control information. (The header-structure diagram I learned from comes from my university teacher's courseware; in it, the TCP data portion is the data passed down from the application layer.)
The TCP header has a fixed 20-byte portion, plus optional fields of up to 40 additional bytes (padded to multiples of 4). There is a lot in it, but the most familiar items are the source port and destination port. Wait, don't we need the IP address to identify the socket? The IP addresses are added at the network layer. The other fields will be explained gradually as we go.
The options field can carry further options, such as the maximum segment size (MSS), window scaling, SACK, and timestamps. As we work through the content below, these fields will become familiar.
/ Byte Stream Feature of TCP /
TCP does not simply attach the header to the data handed down from the application layer and send it off as one unit; instead, it treats the data as a numbered byte stream and sends it in segments. This is the byte-stream-oriented nature of TCP:
  • TCP reads data from the application layer in a stream format and stores it in its sending buffer while numbering these bytes.
  • TCP selects an appropriate amount of bytes from the sending buffer to form a TCP message, sending it through the network layer to the target.
  • The target reads the bytes and stores them in its receiving buffer, delivering them to the application layer at the appropriate time.
The benefit of being byte stream-oriented is that it does not require storing excessively large data at once, which would occupy too much memory. The downside is that it cannot discern the meaning of these bytes. For example, if the application layer sends an audio file and a text file, to TCP, they are just a stream of bytes with no meaning, which can lead to packet sticking and unpacking issues, which will be discussed later.
/ Principles of Reliable Transmission /
As mentioned earlier, TCP is a reliable transport protocol, meaning that once data is handed to it, the data will reach the target address accurately and completely, unless the network itself fails.
For the application layer, it serves as a reliable transmission support service; while the transport layer relies on the unreliable transmission of the network layer. Although protocols can be used at the network layer and even the data link layer to ensure data transmission reliability, designing the network this way would complicate matters and decrease efficiency. Placing the reliability guarantee for data transmission at the transport layer is more appropriate.
Key points of the reliable transmission principle include: sliding window, timeout retransmission, cumulative acknowledgment, selective acknowledgment, continuous ARQ.

Stop-and-Wait Protocol

The simplest way to achieve reliable transmission is: I send you a data packet, you reply that you received it, and only then do I send the next one.
This one-request-one-reply way of ensuring reliable transmission is called the stop-and-wait protocol. Remember the ACK flag in the TCP header? When it is set to 1, the segment is an acknowledgment.
Now consider a situation: packet loss. The unreliable network environment causes each data packet sent to potentially be lost; if machine A sends a data packet that gets lost, then machine B will never receive it, and machine A will be left waiting indefinitely. The solution to this problem is: timeout retransmission. When machine A sends a data packet, it starts a timer; if the timer expires without receiving a confirmation reply, it can be assumed that a packet loss has occurred, and it will resend the packet.
However, retransmission can lead to another problem: if the original data packet was not lost but simply took longer to arrive over the network, machine B will receive two data packets. How does machine B distinguish whether these two data packets belong to the same data or different data? This requires the previously discussed method: numbering the data bytes. This way, the receiver can determine whether the data is new or a retransmission based on the byte numbering.
In the TCP header, there are two fields: sequence number and acknowledgment number, which represent the number of the first byte of data sent by the sender and the number of the first byte of data expected by the receiver. Although TCP is byte stream-oriented, it does not send one byte at a time; instead, it extracts a whole segment. The length of the extracted segment is influenced by various factors, such as the size of the buffer and the frame size limitations of the data link layer.
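The role of byte numbering can be sketched with a toy receiver. This is a hypothetical Python class, not real TCP: the acknowledgment it returns is the number of the next byte it expects, and a duplicate sequence number reveals a retransmission.

```python
# Toy stop-and-wait receiver: byte numbering lets it tell a
# retransmission apart from new data.

class StopAndWaitReceiver:
    def __init__(self):
        self.expected_seq = 0        # number of the next byte we expect
        self.received = bytearray()

    def on_segment(self, seq, data):
        if seq == self.expected_seq:           # new data: accept it
            self.received += data
            self.expected_seq += len(data)
        # else: duplicate/retransmission -> drop payload, just re-ack
        return self.expected_seq               # ack = next expected byte

rx = StopAndWaitReceiver()
print(rx.on_segment(0, b"hello"))   # 5  (bytes 0-4 received)
print(rx.on_segment(0, b"hello"))   # 5  (retransmission, ignored)
print(rx.on_segment(5, b"world"))   # 10
print(bytes(rx.received))           # b'helloworld'
```

The retransmitted segment changes nothing on the receiver side; it simply triggers the same acknowledgment again.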

Continuous ARQ Protocol

The stop-and-wait protocol meets the requirement of reliable transmission, but it has a fatal flaw: low efficiency. After sending a data packet, the sender sits idle waiting for the acknowledgment, wasting resources. The solution is to send data packets continuously.
The main difference from the stop-and-wait protocol is that it continuously sends data, while the receiver continuously receives data and acknowledges each one. This significantly improves efficiency. However, it also introduces some additional problems:
Can the sender keep sending until all data in the buffer is gone? No, because it must consider the receiver's buffer and its ability to read data. If the sender transmits faster than the receiver can accept, packets are dropped and retransmitted frequently, wasting network resources. Therefore, the range of data the sender may transmit must take the receiver's buffer into account. This is TCP's flow control, and the mechanism is the sliding window:
  • The sender must set its own send window size based on the receiver’s buffer size; data within the window can be sent, while data outside cannot.
  • When the data within the window receives an acknowledgment reply, the entire window moves forward until all data is sent.
In the TCP header, there is a window size field that indicates the remaining buffer size of the receiver, allowing the sender to adjust its sending window size. Through the sliding window, TCP’s flow control can be achieved, preventing sending too quickly and causing excessive data loss.
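A minimal sketch of the sender side, assuming a fixed advertised window and cumulative acknowledgments. The class and its methods are illustrative, not real TCP:

```python
# Sketch of a sender-side sliding window with cumulative acknowledgments.

class SlidingWindowSender:
    def __init__(self, data, window):
        self.data = data
        self.window = window    # receiver-advertised window, in bytes
        self.base = 0           # oldest unacknowledged byte
        self.next_seq = 0       # next byte to send

    def sendable(self):
        # We may send bytes inside [base, base + window) only.
        end = min(self.base + self.window, len(self.data))
        chunk = self.data[self.next_seq:end]
        self.next_seq = end
        return chunk

    def on_ack(self, ack):
        # Cumulative ack: every byte before `ack` is confirmed,
        # so the window slides forward.
        self.base = max(self.base, ack)

tx = SlidingWindowSender(b"abcdefghij", window=4)
print(tx.sendable())   # b'abcd'
print(tx.sendable())   # b''    (window full, nothing more allowed)
tx.on_ack(2)           # bytes 0-1 confirmed
print(tx.sendable())   # b'ef'  (window slid forward by 2)
```

The acknowledgment moves `base`, and with it the right edge of what may be sent; that movement is the "sliding" in sliding window.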
Continuous ARQ introduces a second problem: the network fills up with as many acknowledgment messages as data packets, because every packet sent needs its own acknowledgment. One way to improve efficiency is cumulative acknowledgment: the receiver does not reply to every packet individually; instead, it can wait until several packets have arrived and then confirm the point up to which everything has been received. For example, if the receiver gets packets 1, 2, and 3, it only needs to acknowledge 3, and the sender knows that 1 and 2 have also arrived.
The third problem is: how to handle packet loss. In the stop-and-wait protocol this is simple: a timeout triggers a retransmission. In continuous ARQ it is not so straightforward. Suppose the receiver gets packets 1, 2, 3, and 5, with packet 4 lost. Under cumulative acknowledgment, it can only acknowledge up to 3, and packet 5 (and anything after it) must be discarded, because the sender will retransmit everything from packet 4 onward. This is the Go-Back-N (GBN) approach.
However, we find that only packet 4 needs to be retransmitted, which is a waste of resources. Therefore, we have the selective acknowledgment (SACK). In the options field of the TCP message, it can specify the segments that have already been received; each segment needs two boundaries for identification. This way, the sender can retransmit only the lost data.
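The idea of SACK boundaries can be sketched as follows: given the cumulative acknowledgment and the ranges already received (each identified by two boundaries), the sender derives the holes it must retransmit. The function name and the [start, end) boundary convention are our own assumptions for illustration:

```python
# Sketch: derive retransmission ranges from a cumulative ack plus
# SACK blocks. Each block is a (start, end) pair of byte boundaries,
# meaning bytes start..end-1 have been received out of order.

def ranges_to_retransmit(cum_ack, sack_blocks, send_limit):
    holes = []
    pos = cum_ack                        # everything before this arrived
    for start, end in sorted(sack_blocks):
        if pos < start:
            holes.append((pos, start))   # bytes pos..start-1 are missing
        pos = max(pos, end)
    if pos < send_limit:
        holes.append((pos, send_limit))  # tail not yet acknowledged
    return holes

# Receiver has bytes 0-999 (cumulative ack 1000) plus 2000-2999 via a
# SACK block; the sender has transmitted up to byte 4000.
print(ranges_to_retransmit(1000, [(2000, 3000)], 4000))
# [(1000, 2000), (3000, 4000)]
```

Instead of going back to byte 1000 and resending everything, the sender resends only the two holes.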

Summary of Reliable Transmission

At this point, the principles of TCP’s reliable transmission have been introduced adequately. Let’s summarize:
  • Through the continuous ARQ protocol and the send-acknowledgment reply model, ensure that each data packet reaches the receiver.
  • By numbering bytes, mark whether each data packet is new or a retransmission.
  • Utilize timeout retransmission to address packet loss in the network.
  • Implement flow control through the sliding window.
  • Enhance acknowledgment and retransmission efficiency through cumulative acknowledgment and selective acknowledgment.
Of course, this is just the tip of the iceberg regarding reliable transmission. If interested, further research can be conducted (almost like chatting with an interviewer [dog head]).
/ Congestion Control /
Congestion control addresses another issue: avoiding excessive network congestion leading to severe packet loss and reduced network efficiency.
Taking real-world traffic as an example:
The number of cars that can pass on a highway at the same time is limited; during holidays, severe traffic jams can occur. In TCP, if data packets time out, they are retransmitted, which leads to more cars entering the road, resulting in more congestion, ultimately leading to: packet loss – retransmission – packet loss – retransmission. Eventually, the entire network collapses.
It is important to note that congestion control and flow control are not the same; flow control is a means of congestion control: to avoid congestion, flow must be controlled. The purpose of congestion control is to limit the amount of data sent by each host to prevent network congestion and efficiency decline. It is similar to restricting vehicle movement based on license plate numbers in cities like Guangzhou; otherwise, everyone will be stuck in traffic.
Key points of congestion control include: slow start, congestion avoidance, fast retransmission, and fast recovery. The classic congestion-window curve (shown in a PPT from my university teacher) is described below.
The Y-axis represents the sender’s window size, while the X-axis represents the number of rounds of sending (not byte numbers).
  • Initially, the window is set to a small value, then doubles with each round. This is the slow start.
  • When the window value reaches the ssthresh value, which is a window limit that needs to be set based on real-time network conditions, it enters congestion avoidance, increasing the window value by 1 each round to gradually probe the network’s limits.
  • If data times out, it indicates a high likelihood of congestion, and the process returns to slow start, repeating the previous steps.
  • If three duplicate acknowledgment replies are received, a segment was most likely lost while the network is still largely functional; the lost segment is retransmitted immediately without waiting for the timeout. This is fast retransmission.
  • At the same time, ssthresh is set to half of the current window and the algorithm continues in congestion avoidance instead of dropping back to slow start. This is fast recovery.
  • Of course, the upper limit of the window cannot increase indefinitely; it cannot exceed the size of the receiver’s buffer.
Through this algorithm, we can greatly avoid network congestion.
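The curve described above can be reproduced with a toy simulation. Units are abstract "segments per round," and the rules are a simplified sketch of the algorithm, not a faithful TCP implementation:

```python
# Toy simulation of the congestion window: slow start (doubling),
# congestion avoidance (+1 per round), and the reaction to a timeout
# (halve ssthresh, restart from a window of 1).

def next_cwnd(cwnd, ssthresh, event):
    if event == "timeout":
        # Severe congestion: halve the threshold, restart slow start.
        return 1, max(cwnd // 2, 2)
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh      # slow start: double each round
    return cwnd + 1, ssthresh          # congestion avoidance: +1

cwnd, ssthresh = 1, 8
history = []
events = ["ok"] * 6 + ["timeout"] + ["ok"] * 3
for ev in events:
    history.append(cwnd)
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, ev)
print(history)  # [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```

The printed list shows exactly the sawtooth shape of the curve: exponential growth to ssthresh, linear probing beyond it, and a collapse back to 1 on timeout.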
Additionally, routers can inform the sender when they are about to be full rather than waiting for a timeout to handle the situation; this is called Active Queue Management (AQM). There are many other methods, but the above algorithms are key.
/ Connection-Oriented /
This section discusses the well-known TCP three-way handshake and four-way handshake. Given the previous content, this section should be easy to understand.
TCP is connection-oriented, but what is a connection? The connection here is not a physical connection but a record of communication between both parties. TCP is a full-duplex communication protocol, meaning that both parties can send data to each other, so both need to record each other’s information. According to the principles of reliable transmission discussed earlier, both parties in TCP communication need to prepare a receiving buffer to accept each other’s data, remember each other’s sockets to know how to send data, and remember each other’s buffer sizes to adjust their own window sizes, etc. These records constitute a connection.
In the transport layer section, it was mentioned that the communication address between transport layer parties is defined using sockets, and TCP is no exception. Each TCP connection can only have two objects, meaning two sockets, and cannot have three. Therefore, the definition of a socket needs four key factors: source IP, source port number, destination IP, and destination port number, to avoid confusion.
If TCP were to define sockets using only the target IP and target port number like UDP, multiple senders could simultaneously send to the same target socket. In this case, TCP would not be able to distinguish whether the data came from different senders, leading to errors.
Since it is a connection, there are two critical points: establishing a connection and disconnecting a connection.

Establishing a Connection

The purpose of establishing a connection is to exchange and record each other's information. Therefore, both parties need to send their details to the other.
However, the principles of reliable transmission tell us that transmission over the network is unreliable; each side needs a confirmation reply from the other to be sure its message was accurately received.
Machine B's acknowledgment and the information machine B sends to machine A can be merged into a single message to reduce the number of exchanges; moreover, the message machine B sends to machine A itself shows that machine B received A's message. The exchange is thus reduced to three messages.
The steps are as follows:
  1. Machine A sends a SYN packet to machine B requesting to establish a TCP connection, attaching information such as its receive buffer size; machine A enters the SYN_SENT state, indicating the request has been sent and it is waiting for a reply.
  2. Machine B receives the request, records machine A’s information, creates its own receiving buffer, and sends a combined SYN+ACK packet back to machine A, while entering the SYN_RECV state, indicating that it is ready and waiting for machine A’s reply to send data to A.
  3. Machine A receives the reply, records machine B’s information, sends an ACK message, and enters the ESTABLISHED state, indicating that it is fully prepared to send and receive.
  4. Machine B receives the ACK data and enters the ESTABLISHED state.
The three messages exchanged are called the three-way handshake.
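The state transitions in the steps above can be sketched as a tiny table-driven simulation. The conventional state names SYN_SENT and SYN_RECV are used; the table layout itself is our own illustration:

```python
# Toy state machines for the three-way handshake: each side's state
# changes only on the events listed in the steps above.

A_STATES = {("CLOSED", "send SYN"): "SYN_SENT",
            ("SYN_SENT", "recv SYN+ACK"): "ESTABLISHED"}
B_STATES = {("LISTEN", "recv SYN"): "SYN_RECV",
            ("SYN_RECV", "recv ACK"): "ESTABLISHED"}

a, b = "CLOSED", "LISTEN"
a = A_STATES[(a, "send SYN")]       # 1. A sends SYN
b = B_STATES[(b, "recv SYN")]       # 2. B records A, replies SYN+ACK
a = A_STATES[(a, "recv SYN+ACK")]   # 3. A records B, replies ACK
b = B_STATES[(b, "recv ACK")]       # 4. B receives the ACK
print(a, b)  # ESTABLISHED ESTABLISHED
```

Only after the third message do both tables reach ESTABLISHED, which is why two messages are not enough: B would never learn that its SYN+ACK arrived.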

Disconnecting a Connection

Disconnecting a connection is similar to establishing one; the steps are as follows:
1. After machine A finishes sending data, it requests machine B to disconnect, entering the FIN_WAIT_1 state, indicating that data has been sent and a FIN packet has been sent (FIN flag is set to 1);
2. Machine B receives the FIN packet and replies with an ACK packet acknowledging it; machine B may still have unsent data, so it enters the CLOSE_WAIT state, meaning the other side has finished sending and requested to close, and B can close once its own sending is done;
3. After machine B finishes sending data, it sends a FIN packet to machine A, entering the LAST_ACK state, indicating that it is waiting for an ACK packet to close the connection;
4. Machine A receives the FIN packet and knows that machine B has also finished sending, replying with an ACK packet, and enters the TIME_WAIT state.
The TIME_WAIT state is special. When machine A receives machine B’s FIN packet, ideally, it could close the connection immediately; however:
  1. We know that the network is unstable; machine B may have sent some data that has not yet arrived (slower than the FIN packet);
  2. At the same time, the replying ACK packet may be lost, and machine B will retransmit the FIN packet;
If machine A closes the connection immediately, it may lead to incomplete data and machine B being unable to release the connection. Therefore, machine A needs to wait for 2 Maximum Segment Lifetime (MSL) to ensure that no residual packets are left in the network before closing the connection.
5. Machine B closes the connection as soon as it receives the ACK packet; machine A closes after its 2 MSL wait ends. Both then enter the CLOSED state.
The process of four messages exchanged to disconnect the connection is called the four-way handshake.
Now, questions about why the handshake is three times and the disconnection is four times, whether it must be three or four, and why to wait for 2 MSLs before closing the connection, etc., are all resolved.
/ UDP Protocol /
Besides TCP, the transport layer also includes the well-known UDP. If TCP stands out for its complete and robust feature set, UDP wins by being ruthlessly minimal.
UDP implements only the minimal functions of the transport layer: inter-process communication. For the data transmitted from the application layer, UDP simply adds a header and passes it directly to the network layer. The UDP header is very simple, containing only three parts:
  • Source port, destination port: port numbers used to distinguish different processes on the host.
  • Checksum: used to verify that the data packet has not been corrupted during transmission, for example, if a 1 has become a 0.
  • Length: the length of the message.
Thus, UDP’s functions are limited to two: verifying whether the data packet has errors and distinguishing between different process communications.
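The 8-byte UDP header is simple enough to build by hand. Here is a sketch using Python's struct module; the checksum is left at 0 for brevity, whereas a real stack computes it over the data plus a pseudo-header:

```python
# Building the 8-byte UDP header: four 16-bit big-endian fields in order
# (source port, destination port, length of header + data, checksum).
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)          # the header itself is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5353, 53, b"query")
print(len(hdr))                        # 8
print(struct.unpack("!HHHH", hdr))     # (5353, 53, 13, 0)
```

Compare this with TCP's 20-byte fixed header: the entire UDP header fits in less than half of it, which is exactly the low-overhead advantage discussed below.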
However, while TCP has many features, it also comes at a corresponding cost. For example, the connection-oriented feature incurs overhead during connection establishment and disconnection; congestion control features limit transmission upper limits, etc. Below are the pros and cons of UDP:

Disadvantages of UDP

  • Cannot guarantee that messages are complete or arrive correctly; UDP is an unreliable transport protocol;
  • Lacks congestion control, which can lead to resource competition and network system crashes.

Advantages of UDP

  • Faster efficiency; no need to establish connections or control congestion.
  • Can connect to more clients; no connection state, no need to create buffers for each client, etc.
  • Smaller packet header size, leading to lower overhead; the fixed header size of TCP is 20 bytes, while UDP is only 8 bytes; a smaller header means a larger proportion of data.
  • In scenarios where high efficiency is needed and some error allowance is acceptable, it can be used. For example, in live broadcasting, it is not necessary for every data packet to arrive intact, and a certain packet loss rate is acceptable. In this case, TCP’s reliable features become a burden; the streamlined UDP is a more suitable choice for higher efficiency.
  • Can perform broadcasting; UDP is not connection-oriented, so it can send messages to multiple processes simultaneously.

Applicable Scenarios for UDP

UDP is suitable for scenarios where the transport model requires high customization at the application layer, allows packet loss, needs high efficiency, and requires broadcasting, such as:
  • Video live streaming.
  • DNS.
  • RIP routing protocol.
/ Other Supplements /

Chunked Transfer

We can see that the transport layer does not send the entire data packet with a header directly; instead, it splits it into multiple packets for separate transmission. Why does it do this?
Some readers might think: the data link layer limits the payload length, typically to about 1460 bytes of TCP data (an Ethernet frame carries at most a 1500-byte MTU, minus the IP and TCP headers). Why does the data link layer impose such a restriction? The fundamental reason is that networks are unstable. If a packet is too long, transmission is very likely to be interrupted partway through, forcing the entire data to be retransmitted and reducing efficiency. By splitting data into multiple packets, if one packet is lost, only that packet needs to be retransmitted.
But is smaller always better? If the data field of each packet is too small, the header takes up an ever larger share of each packet, and headers become the dominant burden on the network. For example, with a total of 1000 bytes of data and 40-byte headers, splitting into 10 packets costs only 400 bytes of headers, but splitting into 1000 packets costs 40,000 bytes of headers, drastically reducing efficiency.
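The arithmetic above can be checked in a few lines; the packet and header sizes are the example's numbers, not protocol constants:

```python
# Header overhead as a function of how finely a 1000-byte payload is
# split, assuming a 40-byte header per packet (the example's figures).

def header_overhead(total_bytes, num_packets, header_size=40):
    return num_packets * header_size

for n in (1, 10, 1000):
    overhead = header_overhead(1000, n)
    share = overhead / (1000 + overhead)
    print(f"{n:5d} packets -> {overhead:6d} header bytes "
          f"({share:.0%} of all traffic)")
```

At 1000 packets, headers make up nearly all of the traffic, which is why segment sizes are chosen as a compromise between loss cost and header cost.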

Routing Conversion

Consider a network in which host A reaches host B through several routers, numbered 1 through 7:
  • Under normal circumstances, the data packet from host A can be transmitted via paths 1-3-6-7.
  • If router 3 breaks down, it can be transmitted via 1-4-6-7.
  • If router 4 also breaks down, it can only be transmitted via 2-5-6-7.
  • If router 5 breaks down, the connection will be interrupted.
As can be seen, using routing forwarding improves network fault tolerance; the fundamental reason remains that networks are unstable. Even if several routers break down, the network can still function. However, if core router 6 fails, it directly leads to a loss of communication between host A and host B, so it is important to avoid such core routers.
Using routers also has the benefit of load balancing. If one line is too congested, it can be transmitted via another route, improving efficiency.

Packet Sticking and Unpacking

In the byte-stream section, we noted that TCP does not understand the meaning of the data stream; it only takes bytes from the application layer, cuts them into segments, and sends them to the target. If the application layer hands over two separate pieces of data, the following can easily happen:
  • The application layer needs to send two pieces of data to the target process: one audio and one text.
  • TCP only knows that it has received a stream and splits it into four segments for sending.
  • The data in the second packet may contain mixed data from both files, which is called packet sticking.
  • The application layer of the target process needs to unpack this data to separate the two files correctly, which is called unpacking.
Both packet sticking and unpacking are issues that the application layer needs to address, which can be done by appending special bytes (such as newline characters) at the end of each file or controlling each packet to contain only one file’s data, padding with zeros if necessary.
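One common application-layer fix is length-prefix framing: each message is preceded by its byte length, so the receiver can split the stream correctly no matter how TCP segmented it. A minimal sketch; the 4-byte big-endian prefix is our own choice of convention:

```python
# Length-prefix framing: send each application message as a 4-byte
# big-endian length followed by the payload, then split on receipt.
import struct

def frame(message: bytes) -> bytes:
    return struct.pack("!I", len(message)) + message

def unframe(stream: bytes):
    messages, pos = [], 0
    while pos + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, pos)
        if pos + 4 + length > len(stream):
            break                        # incomplete message: wait for more
        messages.append(stream[pos + 4: pos + 4 + length])
        pos += 4 + length
    return messages

# Two "files" stuck together in one TCP byte stream:
stream = frame(b"audio-bytes") + frame(b"text-bytes")
print(unframe(stream))  # [b'audio-bytes', b'text-bytes']
```

Delimiter-based framing (such as the newline mentioned above) works too, but it requires escaping the delimiter inside the data; a length prefix avoids that problem entirely.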

Malicious Attacks

TCP’s connection-oriented features can be exploited by malicious individuals to attack servers.
As we know, when a client sends a SYN packet requesting a connection, the server allocates buffers for it and returns a SYN+ACK message. If an attacker forges source IPs and ports and sends a flood of such requests, the server accumulates a large number of half-open TCP connections, becomes unable to respond to real user requests, and is effectively paralyzed. This is the classic SYN flood attack.
Solutions can include limiting the number of connections per IP, allowing half-open TCP connections to close in a shorter time, delaying memory allocation for the receiving buffer, etc.

Long Connections

Every time we request the server, we need to create a TCP connection, which is closed after the server returns data. If there are a large number of requests in a short time, frequently creating and closing TCP connections is resource-intensive. Therefore, we can keep the TCP connection open during this period for requests, improving efficiency.
It is important to consider the maintenance time and creation conditions for long connections to avoid malicious exploitation that creates a large number of long connections, exhausting server resources.
/ Finally /
In the past, when I was learning, I felt that this knowledge seemed pointless, seemingly only useful for exams. In fact, when not applied, it is difficult to have a deeper understanding of this knowledge. For example, now when I look at the above summary, many things are only superficial knowledge, and I do not know the true meaning behind them.
However, as I learn more broadly and deeply, I will have an increasingly profound understanding of this knowledge. There are moments when I think: oh, so that is how it works, that is how it is applied, and learning is indeed useful.
Now, perhaps after learning, there is no immediate feeling, but when applied or learned in related applications, there will be a moment of enlightenment, and I will gain a lot.

Source: https://juejin.cn/user/3931509313252552/posts
