What TCP/IP Got Right
A fixed base layer is not technical stagnation. It is the institutional precondition for everything built on top of it. The internet learned this. Most blockchain protocols still have not.
Keywords: TCP/IP, internet protocols, layered architecture, base layer, settlement layer, end-to-end principle, RFC, IETF, standards process, protocol stability, semantic stability, backwards compatibility, application layer innovation, extension architecture, hourglass model, narrow waist, blockchain base layer, immutability, ossification, protocol design, institutional economics, fixed rules, mutability, commitment
The internet’s foundational protocols have been remarkably stable for several decades. TCP and IP, in their core semantics, are recognisably the same protocols today that they were in the 1980s. The application ecosystem above them has changed beyond recognition: the web, streaming, mobile, social media, video calling, real-time gaming, distributed file systems, machine learning infrastructure, virtual currency networks, and an enormous range of other applications have all been built on a base layer that did not change to accommodate them. The base layer accommodated them by not changing — by providing a stable enough substrate that builders could specialise capital to it without continuous exposure to base-layer revision.
This stability is sometimes treated as a property of mature protocols generally, as if any protocol that survives long enough comes to behave this way. The treatment is wrong. The stability of TCP/IP was not the inevitable consequence of duration. It was an institutional achievement, the product of explicit choices about how the standards process would operate, what kinds of changes would be acceptable, and where innovation would be channelled. The internet got this right early, by accident in some respects and by design in others, and the result is the substrate on which most of the digital economy now runs. Most blockchain protocols have not got it right and are still proceeding as if base-layer mutability were a feature rather than a cost.
The argument of this essay is that a credibly fixed base layer is not technical stagnation; it is the institutional precondition for cumulative investment in the application layers above it. Innovation occurs above the fixed layer, not by changing it. The TCP/IP experience demonstrates this at scale, and the lesson generalises beyond networking. Most blockchain systems have inverted the priority — paying enormous attention to the consensus mechanism and improvising the institutional architecture — and they pay for the inversion in the form of underinvestment in the application layers that would otherwise constitute their economic value. I will set out what stability at the base layer actually means, why TCP/IP has it and how, what the alternative looks like, and what the experience implies for protocol design more generally.
1. What “fixed” actually means
The first move in this analysis is to define base-layer fixedness with sufficient precision that it can be distinguished from related but distinct properties. Several common formulations are used loosely and need to be separated.
Fixed is not the same as immutable. Immutability suggests that the rule set cannot be changed at all, ever, by any mechanism. Fixedness is a weaker property: the base-layer rule set is not subject to discretionary revision after deployment. Changes are possible, but they are constrained — by procedure, by coordination cost, by accountability, by the requirement of broad and credible consent — to the point that participants can specialise capital to the base layer without expecting the rules to be rewritten under them. Fixedness is institutional credibility about the stability of the rules, not metaphysical impossibility of change.
Fixed is not the same as static. A fixed base layer can support extensive innovation; it just channels the innovation into compatible extensions and higher layers rather than base-layer revision. The internet’s foundational protocols are fixed but the internet is not static; vast amounts of technical evolution have occurred above the stable base. Confusing fixed with static treats the base layer as if it were the entire system, which is the error the layered architecture is designed to avoid.
Fixed is not the same as inflexible to participant needs. A fixed base layer can be highly responsive to participant needs through the extension space it provides. Participants who require new functionality can build it above the base layer, in protocols and applications that compose with the base layer’s primitives. The flexibility is real; it is just located somewhere other than at the base layer itself. This relocation is what makes both the base layer’s stability and the upper layers’ flexibility possible at the same time.
Fixed is not the same as governance-free. A fixed base layer still has governance over its standards, its specifications, and its evolution. The governance is constrained — typically by procedural requirements, by the expectation of consensus among diverse parties, and by the historical commitment to stability — but it exists. Treating “fixed” as “governance-free” misdescribes the institutional architecture; the architecture is governance under tight constraints, not the absence of governance.
With these distinctions in place, fixedness can be specified more precisely. A base layer is fixed when the rule set defining it is credibly unlikely to be subject to discretionary revision after participants have made specific investments based on it. Credibility is the operative term: it does not require absolute commitment to never change, but it requires sufficient institutional structure that participants can rationally expect stability across the relevant investment horizon. The internet’s TCP/IP layer has this credibility. Most blockchain base layers, in their current form, do not.
2. The TCP/IP experience
The internet’s foundational protocols have evolved, but the evolution has been highly constrained, and the constraint is itself an institutional product.
IPv4 has been in continuous operation since the early 1980s. The protocol’s basic semantics — the address format, the header structure, the routing model, the relationship between addressing and forwarding — have been stable for that entire period. Extensions to IPv4 (CIDR, NAT, IPsec, various option fields) have been backwards-compatible additions to the existing semantics, not replacements of them. The transition to IPv6, ongoing for more than two decades, is a separate, slow, deliberate process whose timeline bears no resemblance to the discretionary revision that characterises some other protocol environments.
TCP has had a similar trajectory. The basic mechanics — connection establishment, sequencing, acknowledgement, congestion control, connection termination — have been stable, with extensions (selective acknowledgement, large windows, timestamps, fast open) accommodated through the protocol’s defined option space. New transport protocols (QUIC, SCTP) have emerged as additions to the transport layer rather than as revisions of TCP itself. The architectural pattern is consistent: stability at the base, evolution through extension or addition.
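To make the option-space mechanism concrete, here is a minimal sketch of a TCP options parser in Python. The kind-and-length encoding is the real wire format, and option kinds 2, 3, 4, and 8 are MSS, window scale, SACK-permitted, and timestamps in the IANA registry; the example bytes and the parser itself are illustrative, not any particular stack's implementation. The forward-compatibility property is visible in the loop: an implementation skips options it does not recognise, which is why new options could be deployed without breaking old stacks.

```python
import struct

# TCP option kinds from the IANA registry; an implementation that
# predates any of these simply skips them by length.
KNOWN_KINDS = {2: "MSS", 3: "WindowScale", 4: "SACKPermitted", 8: "Timestamps"}

def parse_tcp_options(data: bytes) -> list:
    """Parse the TCP options area (the bytes after the 20-byte fixed header).

    Unknown kinds are skipped using their length byte, which is what
    makes new options backwards-compatible: old stacks ignore what
    they do not understand instead of rejecting the segment.
    """
    options, i = [], 0
    while i < len(data):
        kind = data[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # NOP: single-byte padding, no length field
            i += 1
            continue
        if i + 1 >= len(data):
            raise ValueError("truncated option header")
        length = data[i + 1]     # length covers kind + length + value
        if length < 2 or i + length > len(data):
            raise ValueError("malformed option length")
        value = data[i + 2 : i + length]
        options.append((KNOWN_KINDS.get(kind, f"unknown-{kind}"), value))
        i += length
    return options

# Example: MSS=1460 followed by an option kind this parser has never seen.
raw = bytes([2, 4]) + struct.pack("!H", 1460) + bytes([99, 4, 0xAB, 0xCD])
print(parse_tcp_options(raw))
# [('MSS', b'\x05\xb4'), ('unknown-99', b'\xab\xcd')]
```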
The same pattern holds at other layers in the canonical stack. DNS has had its core semantics stable since the 1980s, with extensions (DNSSEC, EDNS, IDN, various record types) added without disrupting existing operation. HTTP has evolved more substantially across versions, but each version has prioritised compatibility, and the core resource-identification and request-response model has been preserved. SMTP, despite well-known shortcomings, has been stable in its basic operation for decades, with extensions added through defined mechanisms.
The pattern is not universal — there have been disruptive changes, abandoned protocols, and incompatible extensions — but it is dominant. The dominant pattern of evolution in the internet’s foundational protocols has been: stable core, defined extension mechanisms, application-layer innovation, occasional addition of new protocols at higher layers, very rare modification of base-layer semantics, and, when modification has occurred, slow and deliberate process spanning years of public discussion.
What produced this pattern? Several institutional features matter.
Standards body design. The IETF (Internet Engineering Task Force) operates by rough consensus and running code, with formal documents (RFCs) that go through staged review and require demonstrated implementations before standardisation. The process is slow by design. It privileges interoperability over speed, and it tends to reject changes that would break existing implementations. The slowness is not a defect; it is the mechanism by which stability is produced.
Backwards compatibility as a norm. The standard practice has been to add new functionality through mechanisms that do not break old implementations: optional fields, version negotiation, capability advertisement, defined extension points. Where backwards compatibility cannot be maintained, the change becomes a new protocol rather than a revision of the existing one. This norm constrains what kinds of changes the standards process will accept.
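The version-negotiation and capability-advertisement mechanisms named above can be sketched in a few lines. The version numbers and capability strings below are hypothetical, and the function is a pattern rather than any real protocol's API; TLS and HTTP negotiate in this spirit. Each side advertises what it supports, the intersection is selected, and a peer that has never heard of a feature is simply never asked to use it.

```python
# A minimal sketch of version negotiation plus capability advertisement.
# Version numbers and capability names are hypothetical.

def negotiate(client_versions: set, server_versions: set,
              client_caps: set, server_caps: set):
    """Pick the highest mutually supported version and the capability
    intersection. New versions and capabilities can ship unilaterally:
    a peer that lacks them never selects them, so old and new
    implementations continue to interoperate."""
    common = client_versions & server_versions
    if not common:
        raise ConnectionError("no mutually supported version")
    return max(common), client_caps & server_caps

version, caps = negotiate(
    client_versions={1, 2, 3},          # newer client
    server_versions={1, 2},             # older server
    client_caps={"compress", "fast-reopen"},
    server_caps={"compress"},
)
print(version, caps)   # 2 {'compress'} -- the older server is never broken
```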
End-to-end principle. The architectural philosophy of pushing functionality to the endpoints, with the network providing only minimal services, means that most innovation can occur at the endpoints without requiring changes to the network’s core protocols. The principle is not just a technical heuristic; it is a principle for distributing the location of change. Innovation at endpoints does not require base-layer revision.
Distributed authority. Standards-setting authority is distributed across multiple bodies (IETF, IEEE, W3C, ICANN, regional registries) with overlapping but distinct jurisdictions. No single body has authority over the entire stack. Coordinating change across the relevant bodies is expensive, which raises the coordination cost of any change that would require multi-body consent. The coordination cost is itself a stability mechanism.
Implementation diversity. Multiple independent implementations of the major protocols exist, including in operating systems with different ownership, geographic origin, and commercial interest. A change that requires coordinated update of all major implementations faces the practical difficulty of negotiating across diverse implementer interests, which raises the cost and lowers the frequency of such changes.
Long-running deployment. The installed base of internet infrastructure includes equipment that operates for many years, software that is updated slowly, and configurations that are rarely revisited. Any change to base-layer protocols must accommodate this installed base, and the accommodation requirement biases toward stability and against revision.
None of these features is unique to the internet, and none of them is impossible to replicate in other domains. The combination, applied to a particular technical substrate, produced the institutional credibility of base-layer stability that has supported decades of cumulative investment in the application layers.
3. The hourglass and the upper layers
The architectural metaphor commonly used to describe the internet’s structure is the hourglass: a wide top (many applications), a wide bottom (many physical and link-layer technologies), and a narrow waist (a small set of protocols that everything must support). The waist is the location of stability. The wide top is the location of innovation.
The metaphor captures something important. Innovation at the application layer does not require coordination across the entire stack; it requires that the application layer’s protocols compose with the narrow waist below. Once an application-layer protocol is defined to operate over IP and TCP (or UDP, or now QUIC), the application can innovate freely without concerning itself with what the network or the link layer is doing. The base-layer fixedness is what permits the upper-layer freedom.
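The layering claim can be shown directly. The sketch below defines a hypothetical line-oriented echo protocol against nothing but the transport socket API; the application code contains no reference to IP versions, link layers, or routing, which is precisely the freedom the narrow waist purchases.

```python
# A minimal sketch of the hourglass point: this application-layer
# "protocol" (a hypothetical echo service) is written entirely against
# the transport API. It neither knows nor cares whether the packets
# below travel over Ethernet, Wi-Fi, or a VPN, because the semantics
# of the waist are stable underneath.
import socket
import threading

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ECHO " + data)      # the entire application protocol

listener = socket.create_server(("127.0.0.1", 0))   # OS picks a free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello\n")
    print(client.recv(1024))   # b'ECHO hello\n'

listener.close()
```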
This is the central economic point. Cumulative investment is possible at the application layer because the base layer is stable. A firm building a streaming service can specialise capital to a particular set of protocols, expecting those protocols to be available across the relevant investment horizon. The firm’s investment is not specific to a particular base-layer revision schedule; it is specific to the base-layer semantics, which the institutional architecture has made stable. The firm’s exposure to base-layer change is therefore low, and the firm’s willingness to invest is correspondingly high.
If the base-layer semantics were instead subject to discretionary revision — if some coalition could rewrite the IP packet format on a quarterly schedule — the application-layer investment would be much harder to justify. The firm would have to either build versioned support for many possible base-layer configurations, or accept that some fraction of its investment would be stranded with each revision. The expected return on application-layer investment would fall, and less of it would occur. The hourglass would collapse, with the upper layer thinning to match the instability of the waist.
This collapse is observable in blockchain ecosystems where base-layer revision is frequent and discretionary. Application-layer development is shorter-term, more tightly coupled to specific base-layer versions, less willing to commit to long-running infrastructure, and more skewed toward applications whose value can be realised within a single base-layer regime. The pattern is exactly what the analysis predicts. Mutable base layers produce shallow application layers.
The reverse pattern — deep application layers above credibly stable base layers — is what TCP/IP has produced and what blockchain systems have generally not. The difference is not in the technical sophistication of the consensus mechanism; it is in the institutional architecture around base-layer change. Sophistication of the consensus mechanism does not translate into application-layer depth if the rules under which the consensus operates are subject to revision.
4. Compatible extension as the alternative to base-layer change
If base-layer change is constrained, but participant needs continue to evolve, where does the evolution happen? The answer is the extension space defined by the base layer itself, and the higher layers built on top of it.
Defined extension points. Base-layer protocols typically include explicit extension points — option fields, version negotiation, capability advertisement, sub-protocol identifiers — that allow new functionality to be added without modifying the protocol’s core semantics. Extensions are constrained: they must operate within the framework the base layer provides, and they cannot redefine what the base layer means. But within these constraints, considerable functionality can be added. Most of the practical evolution of TCP and IP has occurred through extensions of this kind.
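As an illustration of the sub-protocol-identifier extension point named above, the following sketch mimics how IP’s protocol field dispatches to transport handlers; the numbers 6, 17, and 132 are TCP, UDP, and SCTP in the IANA registry, while the handler code itself is hypothetical. Registering a new transport is an addition beside the existing ones, not a revision of the dispatching layer.

```python
# A sketch of dispatch through a sub-protocol identifier: the IP
# header's protocol field names the next layer up, so new transports
# join the stack without touching IP itself.
HANDLERS = {}

def register(proto_num: int):
    def wrap(fn):
        HANDLERS[proto_num] = fn
        return fn
    return wrap

@register(6)
def handle_tcp(payload: bytes) -> str:
    return f"TCP segment, {len(payload)} bytes"

@register(17)
def handle_udp(payload: bytes) -> str:
    return f"UDP datagram, {len(payload)} bytes"

def deliver(proto_num: int, payload: bytes) -> str:
    handler = HANDLERS.get(proto_num)
    if handler is None:
        # Unknown transport: IP's own job is unchanged. Adding SCTP
        # later means adding a handler, not revising the base layer.
        return f"no handler for protocol {proto_num}; dropped"
    return handler(payload)

print(deliver(6, b"\x00" * 20))    # TCP segment, 20 bytes
print(deliver(132, b"\x00" * 12))  # no handler for protocol 132; dropped
```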
Higher-layer protocols. Above the base layer, additional protocols can be defined that compose with the base layer to provide new services. The web (HTTP), real-time communications (RTP, WebRTC), encrypted transport (TLS), distributed file access, content distribution, and many other capabilities are higher-layer protocols built on top of the stable base. Each of these protocols can evolve, can be replaced, and can compose with other protocols at the same or adjacent layers, without requiring changes to the base.
Application-layer innovation. At the highest layer, applications can innovate freely. The applications use the lower-layer protocols as services, and the services’ stability is what makes the application investment justifiable. Application innovation is not constrained by the base layer’s stability; it is enabled by it.
Parallel protocols. Where genuinely new functionality is required that cannot fit within the existing base layer’s framework, the typical response is to introduce a new protocol that operates alongside or in parallel to the existing one, rather than to revise the existing one. QUIC alongside TCP, IPv6 alongside IPv4 (during transition), DNS-over-HTTPS alongside traditional DNS, all illustrate this pattern. The parallel protocol gives users of the new functionality access to it without disrupting users of the old.
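The deployment pattern behind parallel protocols can be sketched as a fallback: attempt the new protocol, and fall back to the old one if the path does not support it, which is roughly how HTTP/3 clients behave on networks that block UDP. The connect functions below are stand-ins, not a real QUIC or TCP API.

```python
# A sketch of parallel-protocol deployment: the old path stays intact,
# so introducing the new path strands no existing investment.
from typing import Callable

def connect_with_fallback(host: str,
                          try_new: Callable[[str], object],
                          try_old: Callable[[str], object]):
    """QUIC-alongside-TCP in miniature: users of the new protocol gain
    access to it, while peers that only speak the old one are untouched."""
    try:
        return try_new(host)        # e.g. attempt QUIC over UDP
    except OSError:
        return try_old(host)        # e.g. fall back to TCP

def fake_quic(host):   # hypothetical: pretend middleboxes block UDP here
    raise OSError("UDP blocked on path")

def fake_tcp(host):
    return f"tcp-connection-to-{host}"

print(connect_with_fallback("example.com", fake_quic, fake_tcp))
# tcp-connection-to-example.com
```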
The important property of all these mechanisms is that they channel evolution into locations that do not destabilise specific investment in the base layer. A firm that has invested in IP-based infrastructure is not exposed when QUIC is introduced, because QUIC operates above IP and does not change IP’s semantics. A firm that has invested in TCP-based applications is not exposed when DNS-over-HTTPS is introduced, because DNS-over-HTTPS operates at a different layer and does not change TCP. The location of evolution matters because it determines which investments are exposed and which are not.
The contrast with discretionary base-layer revision is sharp. A firm whose investment is exposed to base-layer revision must hedge against the revision; a firm whose investment is exposed only to higher-layer evolution can specialise much more aggressively, because it knows that the higher-layer evolution is operating above its base-layer commitment, not against it. The first firm invests less. The second firm invests more. The aggregate consequence is that the first system has shallower investment than the second, and the difference compounds over time.
5. Why blockchain systems have generally not done this
Blockchain protocols have, in most cases, not adopted the layered architecture that produced TCP/IP’s success. The reasons are partly historical, partly cultural, partly architectural, and partly economic.
Historical inheritance. The earliest blockchain protocols were designed by individuals and small teams without the institutional infrastructure of a standards body. The standards-process discipline that constrained TCP/IP evolution was not present, and the conventions that grew up around blockchain protocols were correspondingly looser. Once these conventions were established, they became self-reinforcing: subsequent protocols imitated their predecessors, and the institutional architecture that would have constrained base-layer change did not develop.
Cultural orientation toward change. The dominant culture of blockchain development has favoured rapid iteration over stability. The phrase “move fast and break things,” whatever its merits in other contexts, was widely adopted in blockchain development as a normative standard for protocol evolution. The cultural valorisation of frequent change is not consistent with the institutional architecture that produces stability, and where the culture and the architecture conflict, the architecture has typically given way.
Architectural choices. Many blockchain protocols were designed with rich base layers — virtual machines, complex transaction types, integrated governance mechanisms — rather than minimal base layers with extension space. The richer the base layer, the more functionality requires base-layer changes when participant needs evolve. A protocol with a Turing-complete virtual machine in its base layer cannot easily channel evolution into higher layers, because too many decisions are made at the base layer. The architectural choice locks the protocol into a path of base-layer revision rather than higher-layer extension.
Economic incentives of designers. The teams that design and operate blockchain protocols often have economic interests in continued change. New features generate news, new versions justify continued funding, new capabilities differentiate the protocol from competitors. The economic incentive for the development team is toward visible activity at the base layer, which is in tension with the institutional discipline that would produce stability. The TCP/IP standards process worked partly because the standards bodies had no equivalent economic incentive to produce visible base-layer change; their authority came from the stability they provided, not from the changes they introduced.
Insufficient appreciation of the trade-off. The economic cost of mutable base layers — the underinvestment that follows from participants’ rational discounting of asset specificity — has not been widely understood in the field. The costs are real but distributed, and the beneficiaries of stability are not generally as visible as the proponents of any particular change. The asymmetry has meant that the case for stability is rarely made with the same vigour as the case for any specific revision, and the cumulative effect over time has been more revision than the cost-benefit analysis would justify.
None of these factors is irreversible. Cultural orientations can change, architectural choices can be revised in the design of new protocols, economic incentives can be restructured, and the trade-off can be more clearly understood. The fact that most blockchain protocols have not yet absorbed the TCP/IP lesson does not mean that they cannot. It means that they have not, and that the work of doing so has not generally been undertaken.
6. The minority that has
The picture is not uniformly negative. Some blockchain systems have, by design or by cultural choice, treated their base layers as fixed, with evolution channelled into higher layers. Where this has been done, the consequences have been observable.
Where the base layer is treated as a settlement layer with stable semantics, application-layer development has tended to invest more deeply, build longer-running infrastructure, and accept lower discount rates on its investments. Application developers can plan multi-year projects without continuous exposure to base-layer revision, and the investment compounds. The application layer in such systems is qualitatively different from the application layer in systems with mutable base layers — more specialised, more institutionally embedded, more willing to commit specific assets.
The systems that have adopted this approach have tended to do so through some combination of technical and institutional commitments. Technically, they limit the base layer to a minimal set of primitives — transaction validation, ordering, settlement — and push complex functionality to higher layers. Institutionally, they make changes to the base layer rare, deliberate, and subject to broad consent across diverse parties, with no single coalition able to effect change unilaterally. The combination produces credibility about base-layer stability that participants can act on.
The credibility is not without cost. Systems that have credibly fixed base layers cannot adapt their base layers in response to short-term concerns, cannot incorporate new features at the base layer when application demand suggests they would be useful, and cannot fix base-layer design errors without substantial cost. These are real limitations, and a system that maximises base-layer flexibility would not face them. The point is that the limitations are the cost of credibility, and the benefit — sustained cumulative investment in higher layers — is what credibility purchases.
The trade-off between adaptability and commitment is real, and different systems should resolve it differently depending on their objectives. A system optimising for short-term experimentation should accept high mutability and pay the price in shallow application layers. A system optimising for long-term settlement should accept low mutability and pay the price in slow base-layer evolution. The choice is legitimate either way; the failure is to make the choice without recognising it as a trade-off, and to claim the benefits of both regimes simultaneously.
Most blockchain systems have, in practice, made implicit choices that fall closer to the high-mutability end of the spectrum, while marketing themselves as if they had the credibility benefits of the low-mutability end. This is the inconsistency that produces the gap between the rhetoric of decentralisation and the equilibrium pattern of investment. Participants are responding rationally to the actual mutability profile, not to the marketed one, and the response is observable in the depth of the application layers.
7. The institutional ingredients of fixedness
If base-layer fixedness is desirable in particular settings, the question becomes how to produce it. The TCP/IP experience offers a partial answer. The institutional ingredients are not mysterious, but they require sustained design effort.
A standards process with stability bias. The process by which proposed changes are evaluated and accepted should be biased toward stability, not toward speed. This means slow, deliberate review; explicit consideration of backwards compatibility; willingness to reject changes that fail compatibility tests; and willingness to redirect proposed changes into extension mechanisms or higher layers rather than base-layer revisions. The bias does not eliminate change; it constrains it.
Distributed standards authority. Authority over the standards should be distributed across multiple parties with diverse interests. A single party with authority over the standards is a coalition that can revise the standards. Distributed authority raises the coordination cost of revision and produces credibility through the difficulty of obtaining concurrent consent across diverse parties.
Multiple independent implementations. The base-layer protocols should have multiple implementations maintained independently, with no single team or organisation in control of all of them. Multi-implementation environments raise the cost of changes that require coordinated update, and they provide a check on any single team’s ability to redefine the protocol through implementation changes alone.
Backwards-compatible extension mechanisms. The base-layer protocols should include defined mechanisms for compatible extension, so that new functionality can be added without revising existing semantics. Without such mechanisms, every new feature requires a base-layer change, and the institutional pressure to accept those changes accumulates faster than the institutional architecture can constrain.
A deep installed base. The protocol’s installed base of users, applications, and infrastructure should be sufficiently deep that any change which would invalidate existing investments faces real opposition from real participants whose interests would be harmed. A shallow installed base provides no such constraint; a deep one does. This is partly a function of duration — installed bases accumulate over time — and partly a function of how much specific investment the protocol has attracted, which itself depends on prior credibility about stability. The relationship is recursive: stability attracts investment, investment produces an installed base, the installed base reinforces stability.
Cultural commitment to stability. The participants in the standards process and the operators of the protocol should treat base-layer stability as a value, not as an obstacle. Where the dominant culture treats change as virtue and stability as stagnation, the institutional architecture is fighting upstream against the disposition of the people who operate it. Where the culture treats stability as the foundational achievement that makes everything else possible, the architecture has cultural support and operates more reliably.
None of these ingredients is sufficient on its own; the combination is what produces credibility. The TCP/IP experience demonstrates the combination operating at scale over decades. Other domains — financial market infrastructure, certain regulatory frameworks, some legal codes — show similar patterns where the combination has been achieved. The mechanism is not protocol-specific; it is the general institutional logic of how stability is produced under conditions where parties have continuing interest in change.
8. Standard objections
“Fixedness produces ossification; stable systems eventually fail because they cannot adapt.” This is a real risk, but it is not unique to base-layer fixedness. Any institutional arrangement, including those with high mutability, can fail to adapt to circumstances; the failure mode of mutable systems is different (excessive change, capture, drift) but not necessarily less severe. The relevant question is which failure mode is more costly in a given setting, not whether failure is possible. For settlement infrastructure with high asset specificity, the costs of excessive change are likely to dominate the costs of insufficient change, and the institutional architecture should be biased accordingly. For experimental systems with low asset specificity, the trade-off may resolve in the opposite direction.
“The TCP/IP experience does not generalise; the network domain is too specific.” The mechanism that produced TCP/IP’s stability — distributed authority, standards-process discipline, backwards compatibility, multiple implementations, deep installed base, cultural commitment — is not specific to networking. The same mechanism produces stability in other domains where it has been applied, and fails to produce stability where it has not. The networking-specific features (the end-to-end principle, the hourglass architecture) are details of how the mechanism interacts with the technical substrate; they are not the source of the stability. The general lesson — that base-layer credibility is an institutional achievement requiring specific design — is portable.
“The internet has had its share of base-layer problems; pretending otherwise is selective.” Correct. The internet has had base-layer problems: IPv4 address exhaustion, persistent issues with BGP routing security, ongoing weaknesses in DNS, the slow IPv6 transition, and others. The argument is not that the internet’s base-layer protocols are perfect, but that they have been remarkably stable relative to the alternative, and that the stability has supported cumulative investment that the alternative would not have permitted. In the relevant trade-off, imperfect stability that supports deep application layers is preferable to perfect responsiveness that supports only shallow ones.
“Cumulative investment in blockchain systems has been substantial despite mutable base layers.” Investment has been substantial in absolute terms, but the comparison should be with what investment would have been under credibly fixed base layers, not with zero. The argument is that the equilibrium investment is lower than it would otherwise be, and the depth of the application layers is correspondingly shallower. This is not directly observable as a counterfactual, but it is consistent with the patterns of which kinds of applications develop, how long they sustain, how much specific capital they involve, and how they price their integration with the base layer.
“Standards processes are slow and exclude legitimate participation.” Slow is the point. A fast standards process is one that produces rapid base-layer change, which is the property under critique. A standards process that excludes legitimate participation is a defect in the process design, not an inherent feature of standards processes generally. The TCP/IP process is open to participation by anyone willing to engage with the technical work; the constraint on participation is the willingness to do the work, not the formal exclusion of any party. Other standards processes have varied in their inclusiveness, and the choice of process design affects how the participation constraint operates.
“Fixed base layers prevent technical progress.” Fixed base layers prevent technical progress at the base layer specifically; they do not prevent it at the higher layers, where most progress in any layered system actually occurs. The historical record of internet protocol evolution is not an absence of technical progress; it is a redirection of technical progress to locations where it can occur without disrupting specific investment. This redirection is what makes cumulative progress possible. The alternative — base-layer progress at the cost of stranded specific investment — is not progress at the system level even when it is progress at the base-layer level.
9. The cost side of the ledger
An honest treatment of base-layer fixedness must acknowledge what fixedness costs. The argument is not that fixedness is universally desirable; it is that fixedness has specific benefits that are routinely understated, and the trade-off should be made explicitly rather than by default in either direction.
The costs of fixedness are real. A fixed base layer cannot incorporate beneficial changes that would require base-layer revision, no matter how clearly beneficial. Errors in the original base-layer design are difficult to correct, and the system must accommodate the errors through workarounds at higher layers. New requirements that fit poorly into the existing base-layer framework must either be implemented inefficiently above the base layer or addressed through parallel protocols rather than integrated revisions. The pace of base-layer evolution is, by design, slow. Some changes that would be straightforward under high mutability become institutionally impossible under credible fixedness.
These costs are concentrated in specific situations. They are large when the base-layer design is significantly flawed in ways that the original designers did not anticipate; they are small when the base-layer design is robust enough to accommodate evolving requirements through extension. They are large when the requirements of higher layers cannot be efficiently implemented without base-layer support; they are small when the requirements can be addressed at higher layers without efficiency penalties severe enough to matter. They are large when the system has just been deployed and the costs of getting the design wrong have not yet been amortised; they are small when the system has been operating long enough that the design has been validated by the experience.
The trade-off is therefore situational. A young system with substantial uncertainty about requirements and limited experience with the design should not commit prematurely to base-layer fixedness; the option value of revision is high and the asset specificity of investment is low. An older system with substantial experience and substantial accumulated specific investment should commit to fixedness; the option value of revision is lower and the asset specificity is higher. The transition from the first regime to the second is itself a governance question, and one that is rarely addressed explicitly.
The internet’s transition was largely organic. The base-layer protocols stabilised through a combination of accumulated installed base, standards-process discipline, and cultural commitment to interoperability, without any single moment at which the system formally committed to fixedness. The transition worked, but it is not a model that other systems can necessarily replicate, because it depended on path-dependent features of the internet’s history. Systems that wish to produce credible fixedness more deliberately need to construct the institutional architecture explicitly rather than wait for it to emerge.
10. Closing
A credibly fixed base layer is the institutional precondition for cumulative investment in higher layers. The TCP/IP experience demonstrates this at scale: decades of remarkable stability at the network and transport layers have supported an enormous and continuously evolving application ecosystem above them. The stability was not accidental and was not the inevitable consequence of duration; it was an institutional achievement, the product of standards-process design, distributed authority, implementation diversity, and cultural commitment to backwards compatibility. The achievement is replicable, but it requires specific design effort.
Most blockchain protocols have not undertaken this design effort. They have, with limited exceptions, treated base-layer mutability as a feature rather than a cost, accumulated discretionary authority over rule change in identifiable coalitions, and produced application layers whose depth and specific investment are correspondingly shallower than they would be under credible fixedness. The pattern is consistent with the analysis. Mutable base layers produce shallow application layers because participants who anticipate base-layer revision discount their specific investments accordingly.
The solution is not to declare that all blockchain systems should adopt the TCP/IP architecture. As argued above, the trade-off between adaptability and commitment is real, and different systems should resolve it differently depending on their objectives: high mutability for short-term experimentation, at the price of shallow application layers, or low mutability for long-term settlement, at the price of slow base-layer evolution. The choice is legitimate either way, and the institutional architecture should match the choice.
What is not legitimate is the current pattern, in which blockchain systems make implicit choices toward high mutability while marketing themselves as if they had the benefits of low mutability. Participants respond to the actual mutability profile, not to the marketed one, and the response is observable in the depth of the application layers. Systems that want deep application layers need credibly fixed base layers, and credibly fixed base layers need the institutional architecture that produces credibility. The architecture is not free; it requires design choices that constrain what the system can do at the base layer, in exchange for what the system can support above it.
The choice between mutable and fixed base layers is a fundamental architectural choice. It should be made explicitly, with awareness of what each choice costs and what each choice purchases. The TCP/IP experience is not a normative standard for all systems; it is an existence proof that base-layer fixedness is achievable at scale and that the institutional ingredients of fixedness are identifiable and replicable. The lesson is available. Whether it is absorbed depends on whether the field treats the lesson as relevant.
The basic claim of this essay can be stated as a single proposition. A fixed base layer is not technical stagnation; it is the institutional precondition for cumulative investment in the layers built on top of it. Innovation occurs above the fixed layer, channelled into compatible extensions and higher-layer protocols, not by revising the base layer itself. Systems that cannot achieve credible fixedness at the base layer cannot achieve cumulative investment at the higher layers, and the depth of their application ecosystems will be correspondingly limited. The trade-off between mutability and commitment is real, and the resolution depends on the system’s objectives. What is not optional is the recognition that the trade-off exists. Treating mutability as costless is not a design choice; it is the failure to make one.
References mentioned in passing: J. H. Saltzer, D. P. Reed, and D. D. Clark, “End-to-End Arguments in System Design,” ACM Transactions on Computer Systems (1984); D. C. North, Institutions, Institutional Change and Economic Performance (1990); O. E. Williamson, The Economic Institutions of Capitalism (1985); P. A. David, “Clio and the Economics of QWERTY,” American Economic Review (1985).