Why Hash Power Is Not Security
The standard attack-cost calculation is incomplete. Real protocol security depends on consensus cost, capital at risk, coordination cost, and accountability — and most analyses count only the first.
Keywords: protocol security, blockchain security, 51% attack, hash power, proof of work, proof of stake, attack cost, capital at risk, coordination cost, legal accountability, institutional security, Nakamoto coefficient, decentralisation metrics, consensus security, governance security, slashing, validator concentration, sybil resistance, economic security, rule change risk
The standard analysis of blockchain security runs roughly like this. An attacker who wishes to subvert the system must acquire a controlling share of consensus capacity — hash power in a proof-of-work system, bonded stake in a proof-of-stake system, validator votes in a Byzantine-agreement system. Acquiring that share costs something. The system is secure if the cost of acquiring it exceeds the gain from subverting it. Formally, security holds when αV < C, where α is the share of consensus capacity required to attack, V is the value extractable by the attacker, and C is the cost of acquiring the capacity. The condition is intuitive, the parameters are quantifiable, and the analysis has produced a substantial literature.
The analysis is also seriously incomplete. It addresses one specific attack vector — direct manipulation of the consensus process by an actor who has acquired sufficient consensus capacity — and treats this as if it exhausted the meaning of protocol security. It does not. There are several other ways in which a protocol can fail to provide what its participants understood themselves to be paying for, and the standard calculation captures none of them. The participants are not making investments to be safe from 51% attacks. They are making investments to be safe from arbitrary changes to the terms under which their investments have economic meaning. The 51% attack is one threat to that. It is not the only one, and depending on the system it may not even be the most important one.
The argument of this essay is direct: protocol security is not the cost of attacking the consensus mechanism. It is the cost of violating the protocol commitment. The institutional security condition has four terms — consensus cost, capital at risk, coordination cost, and accountability — and three of them are routinely missing from the standard calculation. A system can satisfy the consensus security condition while failing the institutional security condition, and the failure mode is not exotic. It is the most common failure mode in the field. I will work through the four terms, show why each matters, and explain what the analysis implies for how security should be measured and reported.
1. What the standard condition gets right
Begin with what is correct in the standard analysis, because it is correct as far as it goes. A consensus mechanism that can be subverted at low cost relative to the value at stake is not providing economic security. The cost-benefit calculation captures something real. If an attacker can acquire 51% of the hash power for $10 million and use it to extract $100 million through double-spending, the system is not secure in any meaningful sense. The arithmetic is straightforward and the analytical move — comparing attack cost to attack gain — is sound.
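The arithmetic can be made explicit in a minimal sketch. The function and the dollar figures below are illustrative assumptions, using the hypothetical numbers from the paragraph above, not measurements of any real system.

```python
# Illustrative sketch of the standard consensus security condition
# alpha*V < C, with the hypothetical numbers from the text.

def consensus_secure(alpha: float, value_extractable: float,
                     acquisition_cost: float) -> bool:
    """Return True if the standard condition alpha*V < C holds."""
    return alpha * value_extractable < acquisition_cost

# Attacker needs 51% of capacity, can extract $100M,
# and the capacity costs $10M to acquire.
alpha = 0.51
V = 100_000_000
C = 10_000_000

print(consensus_secure(alpha, V, C))  # False: the condition fails
```

On these numbers the attack is profitable and the condition fails; raising C above αV (here, above roughly $51M) is what the standard analysis means by restoring security.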
The standard condition also captures, with reasonable accuracy, the security properties of a particular kind of attack: an external adversary who is not part of the existing consensus operator population, who acquires capacity specifically to mount the attack, who acts against the existing protocol rules, and whose goal is direct value extraction through transaction reversal or censorship. For this specific scenario, the cost-benefit calculation gives a useful first approximation.
The literature has refined the basic condition in several directions. Budish (2018) and others have argued that the relevant cost is not the spot cost of acquiring capacity but the marginal cost of additional capacity at the relevant scale, with consideration of the supply curve for hash power, hardware availability, and the time required to acquire and deploy capacity. The relevant gain is not the gross value extractable but the net value after deducting the costs of executing the attack and the loss in capacity value if the attack is detected. These refinements improve the calculation without changing its structure. They are about getting C and V right within the same framework.
What the literature has not done, with comparable rigour, is question whether the framework itself captures the full security problem. The standard framework treats the protocol rule set as fixed and treats the threat as external manipulation of the consensus process operating under those rules. Both assumptions are problematic. The protocol rule set is in many systems mutable, and the most economically consequential threats often come not from external attackers but from coordination among the actors who already hold consensus capacity, or from those who hold authority over the rules themselves.
To capture the full problem, the security condition needs to be expanded. The standard condition is one term. The institutional condition is four.
2. The four-term condition
Let α denote the share of consensus capacity required for an attack, and V the value extractable through the attack. The institutional security condition has the form:
αV < C + I + K + R
where:
C is the direct technical or consensus cost of acquiring the capacity needed to subvert the protocol;
I is the capital at risk — the value of in-system holdings, infrastructure, reputation, and ongoing income that would be destroyed or impaired by a successful attack;
K is the coordination cost of organising whatever group of actors is required to execute the attack;
R is the legal, regulatory, professional, and reputational accountability that would be imposed on identifiable participants in the attack.
The standard condition is αV < C. The institutional condition adds three further terms, each of which represents a cost that constrains attack behaviour but that does not appear in the consensus calculation. The point of the expansion is not to argue that the additional terms are always larger than C — sometimes they are smaller, and in some systems they are nearly zero — but that they are part of what determines whether an attack actually occurs, and they vary across systems in ways that significantly affect the security ranking.
Consider each term in turn.
3. Consensus cost (C)
This is the term that the standard analysis already captures, and the analysis of it is largely correct. C is the cost of acquiring sufficient capacity in the consensus process to subvert it: the cost of hash power, the cost of bonded stake, the cost of validator slots. The cost is measurable, in principle, by reference to the supply of capacity, the price of acquiring marginal capacity, and the timeline over which acquisition can be completed without alerting defenders.
Several refinements are worth noting. First, C is not the same as the equilibrium cost of consensus operation. The equilibrium cost is paid continuously by all operators; C is the additional cost an attacker must incur to acquire enough capacity to deviate. These can diverge substantially, particularly where existing operators have made specific investments that they would not be willing to sell at the equilibrium price.
Second, C depends on whether the attack is detected during acquisition. An attacker who can acquire capacity covertly faces lower cost than one whose acquisition triggers defensive responses — price spikes in capacity markets, refusal of vendors to sell, coordinated countermeasures by defenders. In small markets, covert acquisition is difficult, and the likelihood of detection raises the effective C an attacker faces.
Third, C is bidirectional. An attacker can acquire capacity, but a defender can also acquire capacity to dilute the attacker’s share. The relevant security metric is the cost of net capacity acquisition by the attacker, accounting for defensive response. This is a dynamic problem, and static cost calculations can mislead.
Fourth, C depends on the consensus mechanism’s particular cost structure. Proof-of-work systems have C denominated largely in hardware and energy. Proof-of-stake systems have C denominated largely in bonded native tokens, which raises a circularity: the cost of attacking the system is paid in the value of the system being attacked. In a system whose token value collapses on a successful attack, the realised cost of attack is higher than the nominal cost; in a system where the attacker can short the token before attacking, the realised cost can be lower.
None of these refinements changes the basic point that C is part of security. They refine what C means in different settings. The work to compute C rigorously is not trivial, but it is bounded, and the field has developed reasonable methods for doing it. The problem is not that C is wrong. The problem is that it is alone.
4. Capital at risk (I)
The second term is the value that participants in the attack would lose if the attack succeeded and its consequences were realised. This is not the cost of acquiring attack capacity. It is the value of what the attacker already holds in the system, which would be impaired or destroyed by the attack.
Consider a large mining operation that has invested in dedicated hardware, established power agreements, hired technical staff, and built operational infrastructure tied to a particular protocol. The hardware has substantial salvage value only if the protocol continues to function and to be valuable. A successful attack on the protocol — particularly one that is detected and that damages user confidence — collapses the value of the protocol’s native asset and, through that collapse, collapses the value of the hardware that exists to mine it. The miner who participates in the attack does not merely face the cost of acquiring incremental capacity. The miner faces the destruction of capital that would have produced income for the remaining life of the equipment.
The same logic applies, with adjustments, to validators in proof-of-stake systems. Validators have posted bonded stake denominated in the protocol’s native asset. A successful attack reduces the value of that asset. Even before any slashing penalty applies, the validator who participates in an attack experiences a capital loss equal to the decline in the value of their bonded position. The slashing penalty, where it applies, adds an additional cost on top of this.
Capital at risk is not limited to direct holdings. It includes the value of ongoing relationships — payment processors who would lose merchant integrations, exchanges who would lose user trust, infrastructure providers who would lose customer contracts. It includes the value of brand and reputation: an institutional participant whose involvement in an attack became known would face costs in other markets that depend on their reputation for integrity. It includes the value of regulatory standing: an institution whose participation in an attack triggered enforcement action would face direct financial penalties and indirect operational costs.
The size of I depends on who the attacker is. An external attacker with no existing position in the system faces I = 0; the standard analysis is correct that C is the binding constraint for this case. But for any attacker drawn from the existing population of operators — and for many realistic attack scenarios this is the relevant population, because they are the ones who already have the capacity — I is large and may dominate C.
This has a structural implication. The systems most vulnerable to consensus attack are those where the attack capacity is held by actors with low capital at risk. Where consensus capacity is concentrated in actors who are deeply invested in the system’s continued value, those actors face a strong personal incentive against attacks that would destroy that value. Where consensus capacity is concentrated in actors who can dispose of their capacity quickly and have no other stake in the system, no such incentive operates. Two systems with identical C can therefore have very different actual security, and the difference is captured by I, not by C.
5. Coordination cost (K)
The third term is the cost of organising the actors needed to execute an attack. The standard analysis treats the attacker as a single decision-maker — an individual or firm who acquires capacity and acts. In practice, many attacks require coordination across multiple parties. A 51% attack on a system whose consensus capacity is distributed across hundreds of operators requires either acquiring enough capacity to overcome them all, or coordinating enough of them to act together. The latter is often cheaper in capacity terms but more expensive in coordination terms.
Coordination cost has several components. There is the cost of identifying potential coordinating parties — finding actors with the relevant capacity who are willing to consider participation. There is the cost of communicating among them without alerting defenders. There is the cost of negotiating terms — who gets what share of the gain, who bears what risk, who acts when. There is the cost of ensuring that no participant defects, either by exposing the plan to defenders or by failing to perform their part of it. There is the cost of distributing proceeds without leaving a trail that allows participants to be identified after the fact.
Each of these costs scales with the number of parties required. A coordinated action among three parties is dramatically simpler than a coordinated action among thirty. The security implication is that systems with consensus capacity distributed across many independent operators have higher K than systems where capacity is concentrated, holding C constant. A system where 51% of consensus capacity is held by a single actor has K = 0 for that actor; a system where 51% requires coordinating thirty actors has K that may be substantial.
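The scaling claim can be illustrated with two deliberately simple models, both of which are assumptions for illustration rather than calibrated estimates: the number of secret pairwise channels a coalition must maintain grows quadratically, and the probability that no participant defects falls geometrically with each added party.

```python
# Illustrative models of how coordination cost K scales with
# coalition size. Both functional forms are modelling assumptions.

def pairwise_channels(n: int) -> int:
    """Secret communication channels needed among n coordinating parties."""
    return n * (n - 1) // 2

def plan_survival_probability(n: int, p_loyal: float = 0.95) -> float:
    """Probability that none of n parties defects, assuming each is
    loyal with independent probability p_loyal."""
    return p_loyal ** n

print(pairwise_channels(3), pairwise_channels(30))   # 3 vs 435 channels
print(round(plan_survival_probability(3), 3))        # ~0.857
print(round(plan_survival_probability(30), 3))       # ~0.215
```

Even with each party 95% reliable, a thirty-party plan survives intact only about a fifth of the time, while a three-party plan survives about six times in seven: the same qualitative gap the text describes between three and thirty coordinators.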
This is one of the things “decentralisation” was supposed to deliver: high K against coordinated attacks. The promise is real, but it is more contingent than the rhetoric suggests. K is high when the actors required to coordinate have independent interests, are not subject to common control, and have no existing communication channels through which they would coordinate ordinary business. K is much lower when the actors share sponsors, share clients, share infrastructure, or have established relationships through which coordination is already routine. A nominally distributed validator set whose members all use the same client, host with the same provider, and attend the same industry coordination meetings has lower K than the raw count of validators would suggest.
The K term also captures something the consensus-cost analysis cannot: the cost of governance-level attacks that do not require subverting the consensus mechanism but instead require changing the rules under which it operates. A coalition that can effect a base-layer rule change is not bound by the consensus security condition αV < C; the coalition is operating on a different layer entirely. What constrains it is the cost of forming, the cost of acting, the cost of having its actions accepted by adoption-critical parties, and the cost of withstanding the response of those who object. These are coordination costs in the precise sense — costs of organising and sustaining a group action — and they do not appear in any consensus-cost calculation.
The general point is that high K is what makes a system robust to coordinated rule change, not just to coordinated 51% attacks. Where K is low — where a small number of actors can effect changes through routine coordination — the system is governance-vulnerable even if the consensus mechanism is technically sound. The standard security analysis misses this because it does not represent the governance layer at all.
6. Accountability (R)
The fourth term is the cost imposed on identifiable participants in an attack through legal, regulatory, professional, or reputational mechanisms. R is the term that the standard analysis is most reluctant to engage, partly because it requires identifying participants and partly because it requires recognising that legal and reputational costs are real economic costs.
For an external attacker who is anonymous, has no other commitments, and operates from a jurisdiction with no enforcement reach, R is approximately zero. This is the case the standard analysis has implicitly in mind. But this case is far from universal, and for the actors who actually have the capacity to mount serious attacks on operational protocols — large mining firms, professional staking services, infrastructure providers, exchanges — R is large and often dispositive.
Consider what happens to a publicly known mining firm that participates in a coordinated attack on a protocol. The firm has a corporate identity, a registered business address, audited financial statements, employment relationships, banking relationships, supplier contracts, and regulatory filings. It is identifiable, locatable, and subject to legal process. A successful attack that became known to be its work would expose it to direct legal liability under fraud statutes, market manipulation rules, and possibly criminal sanctions. It would expose its directors and officers to personal liability under fiduciary duty, and to professional sanctions where applicable. It would expose its commercial counterparties to adverse selection and reputational contagion, prompting them to terminate relationships. It would expose its banking arrangements to closure under suspicious activity reporting. It would expose its regulatory standing to revocation in jurisdictions where it operates as a regulated entity.
The aggregate R for a publicly identified institutional participant in a market-manipulation attack is not bounded by the value extracted from the attack. It can substantially exceed that value, because legal and regulatory consequences are often disproportionate to the gain. A firm that gains $10 million through an attack and faces fines, civil penalties, loss of licences, and shareholder lawsuits totalling $100 million is rationally deterred even if C is low and I is moderate.
The same applies, with adjustments, to professional staking services, exchanges, custodians, and other identifiable institutional actors. They operate under regulatory frameworks that apply real penalties. They have professional reputations that have value across multiple markets. They have institutional relationships that depend on integrity. R for these actors is large.
For pseudonymous individual actors, R is smaller but not zero. The realities of cross-border enforcement, blockchain analysis, and informal information sharing among regulators have meant that pseudonymous attackers operating at scale are often identified, sometimes years after the fact. The expected value of R for such actors is not zero; it is the probability of identification multiplied by the cost conditional on identification, both of which have grown substantially over the past decade.
The R term has a critical structural property: it depends on identification. Where the participants in an attack can be identified, R is potentially large. Where they cannot, R is small. This means that R varies with the institutional structure of the protocol — specifically, with whether the actors who hold consensus capacity or governance authority are identifiable as legal persons. A protocol whose consensus capacity is held entirely by anonymous actors has lower R than a protocol whose consensus capacity is held by identifiable institutional participants, holding all else equal. This is not a rhetorical claim about decentralisation. It is a statement about which deterrents apply to whom.
The interaction between R and governance structure is particularly important. In a protocol where rule-change authority is concentrated in identifiable maintainers and sponsors, opportunistic rule revision exposes those parties to legal and reputational consequences in ways that anonymous consensus subversion does not. Identifiable parties exercising authority face fiduciary-like duties, exposure to misrepresentation claims, and potentially partnership or trust-based liabilities depending on the legal framework that applies. The argument here is not that this analysis has been definitively settled — much of it is contested, and the legal frameworks vary across jurisdictions — but that the existence of identifiable parties exercising authority is itself a security feature, not just a centralisation concern. R is real where identification is real.
7. Putting the four terms together
The institutional security condition is not the sum of four independent quantities. The terms interact, and the interactions matter for which attacks are deterred and which are not.
C and I are partial substitutes. An attacker with high I faces a constraint even if C is low; an attacker with low I is constrained only by C. A system whose consensus capacity is held by actors with high I can afford lower C without becoming insecure. This is part of why long-established systems can sometimes operate with consensus security parameters that would be inadequate in a younger system: the older system has accumulated a population of operators with substantial I, which compensates for any decline in C.
K and R are partial substitutes for similar reasons. High K deters attacks that require coordination; high R deters attacks by identifiable parties. A system can be secure against coordinated attacks either through dispersing capacity to raise K, or through ensuring that coordinating parties are identifiable enough to face R. Different combinations work in different settings.
C and K are partial substitutes too. An attacker who can acquire enough capacity unilaterally faces only C; an attacker who must coordinate faces both C and K. Concentrating capacity in fewer hands lowers K but does not necessarily lower C if the concentrated holder is well-resourced. The relevant security comparison depends on whether the hypothetical attacker is unilateral or coordinated, and which is binding depends on the system’s actual capacity distribution.
I and R are conceptually distinct but operate similarly: both raise the cost of attack to the attacker independently of the cost of acquiring capacity. The difference is that I operates through ownership stakes, while R operates through legal and reputational mechanisms. Both can be substantial; both are typically missing from standard security analyses.
The general implication is that two systems with identical C can have very different institutional security depending on the values of I, K, and R. Comparing systems on C alone is not just incomplete; it is potentially misleading. A system with high C but low I, low K, and low R can be less secure in expectation than a system with moderate C but high values on the other three terms.
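The mis-ranking claim can be made concrete with two hypothetical systems. All figures below are assumptions chosen to illustrate the comparison, not estimates of real protocols.

```python
# Sketch comparing two hypothetical systems: identical attack value,
# different distributions across C, I, K, and R. All figures assumed.

def institutional_margin(alpha: float, V: float,
                         C: float, I: float, K: float, R: float) -> float:
    """Positive margin means the institutional condition
    alpha*V < C + I + K + R holds; negative means it fails."""
    return (C + I + K + R) - alpha * V

alpha, V = 0.51, 100e6

# System A: high C, anonymous operators with little at stake.
margin_a = institutional_margin(alpha, V, C=45e6, I=1e6, K=0.5e6, R=0.0)
# System B: moderate C, identifiable and deeply invested operators.
margin_b = institutional_margin(alpha, V, C=25e6, I=30e6, K=10e6, R=40e6)

print(margin_a < 0, margin_b > 0)  # A fails the condition, B satisfies it
```

System A would rank above System B on any C-only metric, yet on these assumed numbers A fails the institutional condition while B satisfies it with a wide margin.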
8. Implications for measurement
If the institutional security condition has four terms, then security measurement must address all four. The standard practice — reporting hash rate, validator count, or Nakamoto coefficient as proxies for security — captures aspects of C and K but does not address I or R directly, and does not represent the interactions among them.
A more complete measurement programme would include the following.
For C: spot cost of acquiring marginal consensus capacity at the relevant attack scale; supply elasticity of capacity acquisition; detection probability during acquisition; defensive response capacity. These are extensions of existing methodology and have been partially developed in the cost-of-attack literature.
For I: the distribution of consensus capacity by holder; the value of in-system holdings, infrastructure, and reputation for each holder class; the expected loss to each holder class conditional on a successful attack. This requires identifying the actors who hold capacity, classifying them by their stake in the system’s continuation, and estimating their expected loss in failure scenarios.
For K: the number of independent actors whose coordination would be required to mount a successful attack; the existing communication and coordination channels among them; the alignment or divergence of their interests; the costs of negotiating, executing, and concealing coordination. This is a structural assessment of the consensus operator population and the governance environment in which they operate.
For R: the proportion of consensus capacity held by identifiable institutional participants; the legal and regulatory frameworks under which they operate; the realistic enforcement reach of those frameworks; the reputational sensitivity of participants. This is an assessment of the legal and institutional environment in which an attack would be evaluated.
None of these measurements is trivial. Each requires methodology beyond what is currently standard, and each requires data that protocols are not always willing to disclose. But the alternative is to continue reporting C alone and treating it as if it captured something it does not. The current practice systematically overstates the security of systems with high C and low values on the other terms, and systematically understates the security of systems with moderate C and substantial I, K, and R. The mis-ranking has consequences for participant decisions and for policy.
9. The governance-attack vector
One final point deserves explicit treatment because the standard analysis cannot represent it at all. The institutional security condition addresses not only consensus attacks but also governance attacks — actions that change the rule set itself rather than subverting the rule set’s enforcement. A governance attack does not require the consensus condition αV < C to be violated. It requires only that the rule-changing coalition find that the gain from a rule revision exceeds the costs of effecting it.
The relevant cost structure for a governance attack mirrors the institutional security condition. A coalition contemplating opportunistic rule revision faces:
the technical and process cost of preparing and proposing the revision;
the capital at risk from the revision’s effect on the value of the coalition members’ own holdings;
the coordination cost of forming and sustaining the coalition through to activation;
the legal, regulatory, and reputational consequences for identifiable coalition members.
This is the same four-term structure, applied to a different kind of attack. A protocol’s institutional security against governance attacks is high when these costs are high, and low when they are low. The standard consensus analysis says nothing about this, because it does not represent governance at all. The institutional analysis says something specific: governance security depends on the same four terms as consensus security, computed against the rule-changing coalition rather than the consensus operators.
This means that governance security and consensus security are conceptually unified within the institutional framework, and that comparing systems on either dimension alone is incomplete. A system can be strong on consensus security and weak on governance security; another system can be the reverse; a third can be strong on both; a fourth can be weak on both. The four-quadrant structure is not exotic. It is what the analysis produces once the four terms are taken seriously.
10. Objections
“Capital at risk and accountability are soft factors that cannot be quantified, so they cannot be part of a security condition.” Soft is not the same as unquantifiable. Capital at risk is the expected value of holdings impaired by attack, which is a calculation. Accountability is the expected legal and reputational cost given identification, which is also a calculation. The calculations are harder than the cost of buying ASIC capacity, but harder is not impossible. Where rigorous quantification is unavailable, ranges and bounds are still informative, and informative imprecise measurement is preferable to precise measurement of the wrong thing.
“Anonymity is a feature, not a bug; raising R by requiring identification undermines what protocols are for.” The argument is not that protocols should require identification of all participants. The argument is that R is part of security, and where participants are identifiable, R is part of what protects the system. Anonymous systems are not insecure on this criterion alone; they are simply systems that derive their security from C, I, and K rather than from R. The choice has trade-offs, and the institutional analysis makes them visible. It does not prejudge which choice is correct.
“Coordination cost is endogenous to the protocol’s incentive structure and is captured by existing equilibrium analyses.” Sometimes, partially. Standard equilibrium analyses of consensus participation address some aspects of K, particularly miner or validator coordination on equilibrium strategies. They generally do not address coordination across the broader set of actors required for governance attacks, including developers, sponsors, and adoption-critical infrastructure. The K term in the institutional condition is broader than the coordination cost in standard consensus models.
“The standard condition is sufficient because attacks empirically have not occurred at scale.” Empirical attack frequency is partly evidence about security and partly evidence about the costs the standard condition does not measure. The reason large operational protocols have not experienced catastrophic 51% attacks is not solely that C is high; it is also that the actors with capacity to mount such attacks have substantial I, face high K for coordinated action, and are subject to R through identification. The empirical record is consistent with the institutional condition holding, not with the standard condition being sufficient.
“This makes security analysis impossibly complex; the standard condition is at least tractable.” The standard condition is tractable because it is incomplete. Adding terms makes the analysis harder; it does not make the analysis wrong. Tractability is not a substitute for correctness, and the field has tools — institutional economics, transaction cost analysis, legal economic analysis — that are equipped to handle the additional terms. The complexity is a feature of the problem, not an artefact of the analysis.
11. Closing
Protocol security is not the cost of mounting a 51% attack. It is the cost of violating the protocol commitment. The two are different objects, and conflating them produces measurements that systematically misrank systems on the criterion that matters to participants. The standard condition αV < C captures consensus cost. It does not capture capital at risk, coordination cost, or accountability. The institutional condition αV < C + I + K + R captures all four.
The expansion is not a refinement of the standard analysis. It is a different framework, derived from the recognition that protocols are institutional rule systems and that participants invest under those rules expecting them to hold. What threatens the value of those investments is not solely external attack on the consensus process; it is anything that changes the terms under which the investments have economic meaning. The set of relevant threats includes consensus subversion, governance opportunism, regulatory destabilisation, and coordinated rule revision. The standard analysis addresses one of these. The institutional analysis addresses all of them within a single framework.
What follows for measurement is that security reporting needs to expand. Hash rate is not a sufficient statistic. Validator count is not a sufficient statistic. Nakamoto coefficient is not a sufficient statistic. None of these captures I or R, and most capture K only partially. Reporting them as if they answered the security question gives participants the impression that they have the information they need, when in fact they have one of four terms.
What follows for design is that protocol architects who care about security have more levers than the consensus mechanism. They can structure consensus capacity ownership to raise I. They can disperse authority to raise K. They can build identification into the governance layer to raise R, where the design objectives permit it. They can also choose explicit fixedness to remove governance-attack vectors entirely, accepting the loss of option value as the price of the gain in commitment. Each of these is a security choice, and each operates through a term that the standard analysis does not represent.
What follows for participants is that evaluation of protocol security requires looking at all four terms, not just the one that is easiest to measure. A system with high C but anonymous operators with low I and low R may be less secure than a system with moderate C but identifiable institutional operators with high I and high R. The comparison is not obvious from the standard reporting; it requires the institutional analysis. Participants who are making asset-specific investments need the institutional analysis whether or not the field has caught up with providing it.
The basic claim of this essay can be stated as a single inequality. Security is not αV < C. Security is αV < C + I + K + R. The first is a special case of the second, valid when I, K, and R are zero. They are rarely zero in operational systems. The general condition is the relevant one, and any analysis that uses the special case as if it were general is doing something other than measuring the security of the system as participants experience it. The economic security of a protocol is the cost of violating its commitment, computed across all the mechanisms that enforce that commitment. The consensus mechanism is one of those mechanisms. It is not the only one, and in many systems it is not the most important.
References mentioned in passing: E. Budish, “The Economic Limits of Bitcoin and the Blockchain,” NBER Working Paper 24717 (2018); R. H. Coase, “The Problem of Social Cost,” Journal of Law and Economics (1960); A. O. Hirschman, Exit, Voice, and Loyalty (1970); D. C. North, Institutions, Institutional Change and Economic Performance (1990); O. E. Williamson, The Economic Institutions of Capitalism (1985).