Refuting Mu Chengjin’s Claim That TCP/IP Is the Root Cause of Internet Security Threats

Recently, Mu Chengjin wrote that "the recently disclosed vulnerability in Treck's TCP/IP protocol stack proves that the Internet's TCP/IP protocol is the fundamental cause of Internet security threats." His target this time is the core protocol of the Internet architecture, TCP/IP, but the claim that "TCP/IP is the root cause of Internet security threats" is plainly untenable.

1. Nature of the Problem

First, the security issues of Internet protocols can be broadly divided into two categories: (1) design flaws in the protocol itself; (2) the protocol is secure in itself, but there are flaws in its implementation.

The TCP/IP protocol does indeed have flaws. TCP/IP stands for Transmission Control Protocol/Internet Protocol. Strictly speaking, "TCP/IP" refers not just to the TCP and IP protocols themselves but to a whole protocol suite that also includes FTP, SMTP, UDP, and others. It is the Internet's most fundamental communication protocol, and its primary role is to carry information across many different networks.


First, the TCP/IP transmission protocol specifies the standards and methods for communication between various parts of the Internet. It establishes a four-layer architecture – application layer, transport layer, network layer, and data link layer – ensuring timely and complete transmission and retrieval of network data. However, TCP/IP does not clearly distinguish between the concepts of services, interfaces, and protocols.

Secondly, the link layer of the model is not a layer in the usual sense; it is better described as an interface, positioned between the network layer and the data link layer.

Third, the TCP/IP model does not distinguish between the physical layer and the data link layer. Moreover, because TCP/IP was designed for a trusted environment, security was never a consideration in the original internetworking design.

At the same time, the openness of the Internet gives "hackers" their opportunity. The most common attacks against TCP/IP include IP source address spoofing, network eavesdropping, routing attacks, and attacks on IP-address-based authentication.
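The root of IP source address spoofing is visible in the IPv4 header format itself: the source address is simply a field the sender fills in, and nothing in the protocol authenticates it. The following Python sketch (standard library only; the helper names are illustrative, not from any real codebase) builds a syntactically valid header that claims an arbitrary source address:

```python
import struct, socket

def ipv4_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header)//2}H", header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a 20-byte IPv4 header. The source field is whatever the
    sender writes into it -- the protocol has no way to verify it."""
    ver_ihl = (4 << 4) | 5                   # version 4, header length 5 words
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, 20 + payload_len,
                         0x1234, 0,          # identification, flags/fragment
                         64, 6, 0,           # TTL, protocol=TCP, checksum=0
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    csum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

# A spoofed header claiming to come from 10.0.0.1 is structurally valid:
hdr = build_ipv4_header("10.0.0.1", "192.0.2.7", payload_len=0)
assert ipv4_checksum(hdr) == 0               # a correct header checksums to zero
```

Any receiver that trusts this field for authentication is therefore vulnerable by design, which is exactly why IP-address-based authentication is listed among the common attack surfaces.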

The "Treck company's TCP/IP protocol stack vulnerability" incident cited by Mu Chengjin clearly does not fall into the first category of design flaws in the protocol itself. According to the analysis released by CNCERT of the Treck TCP/IP protocol stack vulnerabilities known as "Ripple20", a total of 19 vulnerabilities are involved. Treck's embedded TCP/IP stack is deployed mainly in manufacturing, information technology, healthcare, and transportation systems. The most serious vulnerability disclosed in the Treck stack is a heap-based buffer overflow in the Treck HTTP Server component (CVE-2020-25066), which can allow attackers to crash and reset the target device, or even execute code remotely.

From the investigation and analysis data above, it can be seen that the Treck company’s TCP/IP protocol stack vulnerabilities mainly affect the “application layer’s appearance” (quoting Mu Chengjin’s original wording). It is clearly not a security flaw of the TCP/IP protocol itself, but rather an issue with the implementation of the protocol.

2. How to Ensure the Security of the Protocol Itself

The Internet is not flawless, but to ensure the security of the protocol itself, one must follow the rules and regulations of the Internet. The evolution history of TCP/IP proves that “open, transparent technical processes”, “iterative improvements in protocol design and use”, and “large-scale deployment testing” are the cornerstones of TCP/IP’s success.

The key to the TCP/IP protocol becoming the core protocol standard of Internet architecture is that it does not rely on any specific computer hardware or operating system. Based on the principles of openness and mutual trust, the TCP/IP protocol constructs the “narrow waist” model of Internet architecture, allowing the Internet to be backward compatible with any network technology and transmission system, while maximizing support for a wide variety of applications, and ensuring that any innovation can be supported. Unified network address allocation and standardized high-level protocols can provide a variety of reliable user services. Thus, the TCP/IP protocol has become a “cyberspace” system that integrates various hardware and software.

"We reject kings, presidents, and voting. We believe in rough consensus and running code." This is the working principle of the Internet Engineering Task Force (IETF).


Since its establishment in 1986, the IETF has consistently adhered to this workflow and mechanism. Its participants include network designers, equipment vendors, network operators, academic researchers, and anyone in the industry who is interested; their main task is the research and formulation of Internet technical specifications. Today, the vast majority of international Internet technical standards originate from the IETF, including the TCP/IP and HTTP protocols.

The IETF's working mechanism guarantees a kind of tacit understanding: "Who you are doesn't matter; as long as you win the approval of the majority, or your code becomes a running program, your proposal has the potential to become a standard for the entire Internet."

A large amount of technical work within the IETF is undertaken and completed by various working groups. These working groups are formed based on different categories of research topics. Before establishing a working group, some researchers spontaneously conduct research on a specific topic through mailing lists. When the research is mature, they apply to the IETF to establish an interest group to prepare for the working group. Once the preparatory work is completed and recognized by the IETF, the working group is established and conducts specialized research within the IETF framework.

Typically, the vast majority of network standards begin in the form of RFCs. When an organization or group develops a standard or proposes an idea for a certain standard and wants to solicit external opinions, they will release an RFC on the Internet, allowing anyone interested in the issue to read and provide their feedback.

An RFC document traditionally passed through four stages before becoming an official standard: Internet Draft, Proposed Standard, Draft Standard, and Internet Standard (since 2011, the last two have been merged into a single Internet Standard maturity level). Although the IETF formulates standards, it neither enforces them nor seeks to control the Internet. The extensive verification and revision that turns RFCs into "running code" is, in effect, the IETF's version of the maxim that "practice is the sole criterion for testing truth", fully embodying the core Internet values of freedom, openness, cooperation, and sharing.

In fact, while the TCP/IP protocol is not perfect, thanks to the guarantee mechanisms described above, its security has improved markedly compared with decades ago. We can therefore say with confidence that the TCP/IP protocol itself has no fatal security flaws. Backed by "open, transparent technical processes", "iterative improvements in protocol design and use", and thorough "large-scale deployment verification", TCP/IP will remain the core protocol of the Internet for a long time to come; and as the Internet grows in scale and continues its iterative evolution, its role and value will be recognized ever more clearly.

3. How to Ensure the Security of Protocol Implementation

Returning to the incident that prompted this article: since the Treck TCP/IP protocol stack vulnerabilities are security flaws in protocol implementation, how can the security of protocol implementation be ensured?

Multiple Independent Implementation Versions

Consider the evolution of the TCP/IP protocol. Early Internet protocol research developed along several parallel paths: in the United States, the route ran from ARPANET to the Internet; the UK developed the NPL network; France established its own CYCLADES protocol; and the Soviet Union launched the OGAS network project. In terms of timing, the Soviet research was on par with the US ARPANET work.

In 1957, the Soviet Union launched the world's first artificial satellite, which spurred the US to establish ARPA in 1958. In 1959, Soviet military officer Anatoly Kitov proposed the "Red Book" plan, arguing that military computer networks could also be adapted for civilian use, an idea that later evolved into the famous OGAS project.

Alongside OGAS, the main line of Internet technology was based on packet switching, including the distributed message-block switching proposed by Paul Baran in the US, the packet-switching network developed by Donald W. Davies at the UK's National Physical Laboratory, and the CYCLADES protocol designed by Louis Pouzin in France.

ARPANET's NCP protocol established an interface between each computer and the network so that networks could connect to one another. These connections required no central control; networks linked directly through their interfaces. As a result, communication does not rely on a central authority sending data straight to its destination; instead, data is handed along like a relay baton between nodes in the network.

CYCLADES was a computer network developed in France, designed by Louis Pouzin; work began in 1972 and the network was first demonstrated in 1973. CYCLADES was the first to propose that data delivery should be the responsibility of the hosts themselves rather than of the network.

Comparing the NCP, TCP, and CYCLADES protocols: NCP was built on the IMPs, with delivery handled hop by hop from IMP to IMP rather than end to end between hosts. In this respect CYCLADES had the advantage, since its model is much closer to the transmission model of today's TCP/IP. It can therefore be said that the direct genes of today's Internet come more from the CYCLADES protocol than from NCP.
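The end-to-end idea pioneered by CYCLADES and inherited by TCP can be illustrated with a toy simulation: the "network" is an unreliable datagram channel, and reliability is achieved entirely by host logic that numbers segments and retransmits until delivery succeeds. This is a deliberately simplified Python sketch (one direction only, ACKs modeled implicitly), not real TCP:

```python
import random

def unreliable_send(packet, rng, loss_rate=0.4):
    """The 'network': best-effort datagram delivery that may drop packets."""
    return None if rng.random() < loss_rate else packet

def reliable_transfer(data: bytes, chunk=4, max_tries=50) -> bytes:
    """End-to-end reliability lives in the hosts: the sender numbers each
    segment and retransmits until the receiver acknowledges it."""
    rng = random.Random(0)                   # seeded for a deterministic demo
    received = []
    seq = 0
    for i in range(0, len(data), chunk):
        segment = (seq, data[i:i + chunk])
        for _ in range(max_tries):
            delivered = unreliable_send(segment, rng)   # may be lost in transit
            if delivered is not None:                   # receiver got it
                num, payload = delivered
                if num == seq:
                    received.append(payload)
                break                                    # ACK returns; move on
        else:
            raise TimeoutError("segment lost too many times")
        seq += 1
    return b"".join(received)

assert reliable_transfer(b"end-to-end principle") == b"end-to-end principle"
```

Even though 40% of transmissions are dropped, the hosts reconstruct the byte stream intact, which is the essence of delegating delivery responsibility to the endpoints rather than the network.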

Thus, from a succession perspective, the formal ancestor of the Internet protocol evolved from NCP to TCP, but the essential technical principles and design ideas of TCP/IP are inherited from the French CYCLADES plan.


During the evolution of TCP/IP, each version had multiple implementations. For example, between 1975 and 1977, four versions were tested at BBN, University College London, Stanford, and MIT, ultimately leading to the release of the IPv4 standard (RFC 791) in 1981. The success of TCP/IP demonstrates the advantage of competitive research among multiple independent versions: it minimizes the risk of biased designs and ensures that, on a foundation of openness and transparency, implementations complement one another's strengths and converge on the best available result. Perfection is impossible, but potential problems in protocol implementation can be systematically weeded out.
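The value of multiple independent implementations can be shown in miniature with differential testing: two independently written versions of the Internet checksum (the algorithm described in RFC 1071) are run against the same random inputs and must always agree; any disagreement exposes a bug in one of them. A Python sketch, standard library only:

```python
import random, struct

def checksum_fold(data: bytes) -> int:
    """Implementation A: fold the carry back in after every addition."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def checksum_deferred(data: bytes) -> int:
    """Implementation B: sum everything first, fold carries at the end
    (the deferred-carry technique suggested in RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Differential test: two independent versions must agree on every input.
rng = random.Random(1)
for _ in range(1000):
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
    assert checksum_fold(blob) == checksum_deferred(blob)
```

Scaled up from a toy checksum to whole protocol stacks, this is precisely the cross-checking benefit that the four independent TCP implementations provided in the 1970s.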

Open Source, Rapid Iterative Updates

With open source, innovation can stand on the shoulders of giants. Traditional development first builds and perfects each sub-module; only after all sub-modules are complete are they assembled into a runnable system. The development and evolution of traditional telecommunications technology generally followed this logic.

However, Internet protocols evolve through rapid iteration. In the 1970s, ARPANET initially ran the NCP protocol; in the 1980s the switch to TCP/IP matured IPv4 and saw the emergence of DNS and BGP; in the 1990s the HTTP protocol rose to prominence and was ubiquitous by 2000. Today, IPv6 is the direction of the Internet's continuing iterative evolution.

Iterative development starts by determining the most essential and basic functions, allowing for the delivery of a runnable product in the shortest time. Subsequent improvements and updates are left for customers to experience, confirm, provide feedback, and continuously adjust and upgrade, allowing development to proceed in an evolutionary manner. The best way to ensure the security of protocol implementation is through open source, enabling rapid iterative updates.

Fixing Bugs

The Internet is a best-effort network; security risks are inevitable, and the remedy lies in timely detection, accurate root-cause analysis, and the provision of fixes. On discovering the issue, Treck promptly assessed the vulnerabilities, identifying them as primarily remote code execution, denial of service, and buffer overflow vulnerabilities, caused mainly by errors in Treck's software library when handling various protocol packets. Treck also made clear that this series of vulnerabilities is not a general vulnerability of the TCP/IP protocol and poses no threat to the Internet as a whole.
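Many bugs of the Ripple20 class follow a classic pattern: a parser trusts a length field taken from the packet itself. The hypothetical Python sketch below (the packet format and function names are invented for illustration) contrasts a naive parser with a hardened one; in memory-unsafe C code, the naive version's behavior corresponds to reading or copying past the end of a buffer:

```python
import struct

def parse_naive(packet: bytes) -> bytes:
    """Trusts the attacker-controlled length field. In C, copying
    'declared' bytes out of a shorter buffer is a classic overflow;
    Python merely truncates, masking the inconsistency."""
    declared = struct.unpack("!H", packet[:2])[0]
    return packet[2:2 + declared]

def parse_hardened(packet: bytes) -> bytes:
    """Validates the declared length against what actually arrived."""
    if len(packet) < 2:
        raise ValueError("truncated header")
    declared = struct.unpack("!H", packet[:2])[0]
    if declared != len(packet) - 2:
        raise ValueError("length field does not match packet size")
    return packet[2:]

good = struct.pack("!H", 5) + b"hello"
evil = struct.pack("!H", 60000) + b"hi"     # claims 60000 bytes, carries 2

assert parse_hardened(good) == b"hello"
assert len(parse_naive(evil)) == 2          # silently inconsistent
try:
    parse_hardened(evil)                    # rejected up front
except ValueError:
    pass
```

Validating every length field against the actual buffer size, as the hardened variant does, is exactly the kind of implementation discipline whose absence produced these stack vulnerabilities.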

According to the Ripple20 report, this series of vulnerabilities exists in the implementation of the TCP/IP underlying library developed by Treck, and the currently disclosed affected devices are all IoT devices, with no affected devices found in the PC device field. Subsequently, Treck immediately provided users with repair recommendations and released security updates for the Treck TCP/IPv4/IPv6 software library.

At the same time, major domestic network security companies advised users to deploy network scanning and security monitoring platforms promptly, scan and monitor internal networks, identify assets from the disclosed affected vendors, and monitor exposed ports, protocol interfaces, and other sensitive information.

They also recommended deploying network traffic analysis devices for deep packet inspection, working with network equipment to discard malformed packets, or deploying IDS/IPS with signatures for internal traffic to reject illegal communication.
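The "monitor exposed ports" recommendation reduces to a simple core: attempt a TCP connection to each port of interest and record which ones accept. The Python sketch below (standard library only; real scanning platforms add banner grabbing, rate limiting, and authorization controls) is a minimal illustration, demonstrated safely against a listener on localhost:

```python
import socket

def open_tcp_ports(host: str, ports) -> list:
    """Report which TCP ports accept a connection -- a minimal
    'exposed port' check of the kind monitoring platforms perform."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)                       # do not hang on filtered ports
            if s.connect_ex((host, port)) == 0:     # 0 means the connect succeeded
                found.append(port)
    return found

# Demo: listen on an ephemeral localhost port, then confirm the scan sees it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
assert port in open_tcp_ports("127.0.0.1", [port])
listener.close()
```

Such a scan should, of course, only ever be run against networks one is authorized to audit.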

From various cases, it is clear that adhering to the principles of openness and sharing, ensuring “multiple independent implementation versions, open source, rapid iterative updates, and fixing bugs” is the correct way to ensure the security of protocol implementation.

4. The Security of TCP/IP Compared to Private Protocols: Which Is Higher?

As a matter of logical reasoning, the guarantees provided by the mechanisms and processes above mean that TCP/IP (including IPv4 and IPv6) necessarily offers higher security than closed private protocols such as IPV9.

The IETF has previously conducted relevant analyses and identified ten key factors that successful Internet protocols possess:

First, whether the protocol can meet global standards;

Secondly, whether it can be deployed as widely as possible;

Third, whether it is open source;

Fourth, whether it is free and freely usable;

Fifth, whether it has open specifications;

Sixth, whether the formulation of specifications is open;

Seventh, whether the development and maintenance processes conform to technical design;

Eighth, whether it is extensible;

Ninth, whether there are limitations in extensibility;

Tenth, whether its security threats have been addressed.

All of these are critical factors in a protocol's widespread success. Clearly, IPV9 meets none of these criteria.

Based on the facts, the security problem-solving process described in this article also makes clear that the principles of openness and mutual trust are the soul of the Internet and the fundamental basis for solving Internet security issues. By contrast, there is so far no evidence that IPV9 has solved any security problem, whether in the protocol itself or in its implementation. The "self-controlled security" claimed for IPV9 has never been demonstrated through practical tests of its resistance to attack in real-world scenarios. In other words, the network security of IPV9 is at present nothing but a self-proclaimed slogan, and its uncertainty and risk far exceed those of TCP/IP.

Today, the problem of network security is not something that a single product or company can solve, nor can it be accomplished independently by a single country. The globalization of the Internet means that you are within me, and I am within you; cyberspace deeply integrates with social life. The Internet is a global network, and naturally, security is a common problem faced by people worldwide. The boundaries of network security issues have been incorporated into the realm of global governance, requiring continuous strengthening of mutual trust, open cooperation, and collaborative innovation.

5. Large-Scale Deployment of IPV9 Is Itself a Major Security Issue

That IPV9 is pseudoscience was concluded long ago. First, the experts of the "Expert Committee for Promoting the Large-scale Deployment of IPv6" determined clearly that "IPV9 network technology is speculative behavior that has not undergone technical verification, industry support, or practical application".

Secondly, it is well known that IPV9 is a completely closed private protocol. Deploying at scale a private protocol that has never been broadly verified creates a single management mechanism, a single management system, and a single set of personnel, which is in itself a huge security risk. IPV9's advocates call for isolating the Chinese Internet from the international Internet, claiming this is the only way to achieve "independent innovation and security control", which is a typical misunderstanding of the Internet.

To be blunt, this kind of behind-closed-doors security is an illusion that would only turn the Chinese Internet into an isolated wasteland. The so-called "self-control" is fundamentally a regression: it abandons China's Internet "sovereignty", and severely weakens both our "voice in the global Internet" and the political stance that "the Internet is a community of shared future for mankind".

Currently, technologies such as 5G, AI, and privacy-preserving computation are building the digital economy while bringing with them new security scenarios that are both pervasive and complex. Network security is evolving into a normalized demand running parallel to applications and business, a comprehensive issue spanning policy, law, standards, technology, and application. What is needed is a collaborative, complementary, information-sharing, jointly built digital security governance system to cope with the increasingly blurred boundaries of cybersecurity governance.

This requires us to establish an open view of network security. Based on the reality of “interconnectivity”, it is clear that network security and mutual trust and openness are complementary and mutually reinforcing. The premise is to respect each other’s network sovereignty and rely on building a good order.

Adhering to openness and continuously contributing to global network security is our path to becoming a "network power". We must have the technological strength to master core Internet technologies and build a community of shared future for mankind, and also the vision to "favor openness over closure, cooperation over confrontation, and win-win over monopoly", so as to live in harmony with all nations and achieve great unity. Such is the way of the Internet.

Author: Zhang Tong
