Quantum Computing Is a Multi-Hundred-Billion-Dollar Fraud
No logical qubit has ever been built.
Quantum computing has papers, prototypes, and claims in abundance. What it still does not have, by any rigorous standard, is a single demonstrated logical qubit.
No logical qubit has ever been built.
That sentence is the entire story of quantum computing. Every press release, every government roadmap, every IPO prospectus, every breathless headline about a breakthrough or a milestone or a key step is built on the absence of that one thing. The industry exists to raise money in pursuit of something it has never produced and shows no credible path to producing. The people running it know this. They keep cashing the cheques.
Everything that follows is documentation.
The Product Being Sold
Quantum computing is sold as a technology that will break encryption, simulate drug molecules at the level of detail classical computers cannot reach, optimise supply chains, transform financial modelling, and accelerate artificial intelligence. The pitch has been consistent for thirty years. The timeline shifts — always five to ten years away, always receding — but the fundamental claim does not: quantum computers will do things classical computers cannot, and the companies and governments investing now will capture the value when they arrive.
The companies selling this vision have raised, collectively, hundreds of billions of dollars. Private investment in quantum computing exceeded thirty billion dollars in the decade to 2024. Government programmes in the United States, China, the European Union, the United Kingdom, and Australia have committed comparable sums on top of that. IBM has a roadmap to a million qubits. Google has a dedicated quantum AI division. Microsoft has been running a topological qubit programme for over a decade. IonQ is publicly listed. Rigetti is publicly listed. PsiQuantum has raised over a billion dollars without a working device. The investment infrastructure is enormous. The valuations are enormous. The government programmes are enormous.
The foundational unit of the technology — the logical qubit, the single prerequisite without which every application claim collapses to nothing — has never been built. Not by any of these companies. Not by any university group. Not on any platform. Not once in thirty years.
The product does not exist. The people selling it know the product does not exist. The fraud is not accidental.
What a Logical Qubit Is
A logical qubit is not complicated to define. It is an encoded qubit — information stored redundantly across multiple physical qubits — that performs better than the physical qubits it is built from. Better means: lower error rates, longer coherence, active correction of errors during computation without destroying the information being protected, and demonstrated improvement in performance as you add more physical qubits to the encoding.
That last condition — improvement as you scale — is the point of the entire exercise. The theory of fault-tolerant quantum computation says that if your physical error rates are below a threshold, encoding qubits into a larger error-correcting code reduces the logical error rate exponentially. Add more physical qubits, get better logical performance. That is the path to a quantum computer capable of running the algorithms the industry is selling.
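What that promise looks like in numbers: below is a sketch of the textbook surface-code scaling, with illustrative constants rather than measured values from any device.

```python
# Textbook surface-code scaling of logical error rate with code distance d.
# Constants are illustrative: p is the physical error rate, p_th the threshold.
p, p_th, A = 1e-3, 1e-2, 0.1

for d in (3, 5, 7, 9, 11):
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"distance {d:2d}: logical error rate ~ {p_logical:.0e}")

# distance 3: ~1e-03 ... distance 11: ~1e-07. Each step of distance buys
# a factor of ten, but only if the physical rate sits safely below threshold.
# That conditional clause is the entire promise the industry is selling.
```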
Nobody has demonstrated this. Nobody has built a logical qubit whose error rate is lower than the physical qubits it is built from. Nobody has shown that adding more physical qubits to the encoding reduces the logical error rate. The foundational empirical demonstration on which thirty years of investment is premised has not occurred.
This is not a matter of dispute among physicists. Ask any quantum computing researcher, in private, whether a logical qubit has been built. They will say no. The same researcher will then participate in a press release that implies it has. That gap between what they say privately and what gets published is where the fraud lives.
What the Latest “Milestone” Actually Is
On 23 March 2026, a paper was published in Nature Nanotechnology: “Universal logical operations in a silicon quantum processor” by Zhang, Xu, Zhang, Duan and colleagues. https://www.nature.com/articles/s41565-026-02140-1
The press called it a milestone. Investors cited it. The phrase “universal logical operations in silicon” circulated across quantum computing news feeds. The implication was clear: silicon has joined the ranks of platforms with logical quantum computation.
Now read the paper.
The device contains five phosphorus nuclear spins placed with atomic precision in an isotopically purified silicon crystal using scanning tunnelling microscopy. The five spins implement the [[4,2,2]] quantum code — four physical qubits encoding two logical qubits, with one ancillary spin. The team demonstrated logical state preparation, a universal logical gate set including the T gate via a gate-by-measurement method, and a variational quantum eigensolver calculation on a simplified water molecule Hamiltonian.
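The code is small enough to write down and verify directly. A sketch in Python, using one common convention for the codewords (conventions vary between papers; the structure does not):

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron = lambda ops: reduce(np.kron, ops)

# The two stabilizer generators of the [[4,2,2]] code.
XXXX = kron([X, X, X, X])
ZZZZ = kron([Z, Z, Z, Z])

def basis(bits):
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

# One common choice of logical codeword: |00>_L = (|0000> + |1111>) / sqrt(2).
logical_00 = (basis("0000") + basis("1111")) / np.sqrt(2)

# It lies in the +1 eigenspace of both stabilizers, as a codeword must.
assert np.allclose(XXXX @ logical_00, logical_00)
assert np.allclose(ZZZZ @ logical_00, logical_00)

# Note what the codeword is: a four-qubit entangled (GHZ-type) state.
# That entanglement is exactly the fragility the paper's numbers expose.
```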
Here is what the paper’s own numbers say about every metric that determines whether this is a logical qubit.
Coherence. Physical qubit coherence time in this device: approximately 523 microseconds on average. Logical qubit coherence time: approximately 208 microseconds. Encoding cut the coherence time by more than half. The paper explains why: the logical codewords are four-qubit entangled states, and entangled states are more fragile than single qubits. Dephasing on any one of the four physical qubits corrupts the logical state. The logical layer is less coherent than the physical layer. The encoding made things worse.
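A toy model, mine and not the paper's, makes the direction of that number unsurprising. Assume independent dephasing at rate 1/T2 on each qubit; a GHZ-type codeword's coherence is then a four-qubit off-diagonal term and decays roughly four times faster:

```python
# Toy estimate (an assumption for illustration, not the paper's model):
# independent dephasing at rate 1/T2 on each qubit makes an N-qubit
# GHZ-type coherence decay ~N times faster than a single qubit's.
T2_physical_us = 523   # average physical coherence reported in the paper
N = 4                  # qubits entangled in each codeword
print(f"toy logical T2 ~ {T2_physical_us / N:.0f} us")  # ~131 us
# The measured logical figure, ~208 us, sits in the same degraded ballpark.
# Without an active correction cycle, encoding of this kind can only
# worsen coherence, never improve it.
```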
Gate fidelity. Physical gate fidelities exceed 95%. Logical gate fidelities average around 86%. The T gate — the specific non-Clifford gate required for universality, the gate required to perform computations a classical computer cannot efficiently simulate — achieves 82.6% fidelity. That number comes with postselection: runs in which the ancilla measurement returned the wrong outcome were discarded before the figure was computed. 82.6% is what remains after throwing away the failures.
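The arithmetic of postselection is worth making explicit. The numbers below are invented for illustration; the mechanism is the point:

```python
# Illustrative (invented) numbers showing how postselection inflates fidelity.
p_error = 0.30    # probability a run goes wrong
p_flagged = 0.80  # fraction of errors the ancilla actually flags

kept = (1 - p_error) + p_error * (1 - p_flagged)  # runs surviving postselection
fidelity_all_runs = 1 - p_error                   # honest number: 0.70
fidelity_postselected = (1 - p_error) / kept      # reported number: ~0.92

print(f"all runs: {fidelity_all_runs:.2f}, postselected: {fidelity_postselected:.2f}")
# Discarding flagged failures raises the reported figure without making
# the device any better. The discarded runs are still failed computations.
```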
Error correction. The [[4,2,2]] code is an error-detection code. It detects single-qubit errors. It cannot correct them. Detection and correction are not the same thing. Detection tells you an error happened and discards the run. Correction tells you an error happened, identifies it, and fixes it so computation continues. The [[4,2,2]] code does the former. The former is not error correction. It is a pass/fail filter on experimental runs.
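The distinction shows up directly in the syndromes. A few lines of Python checking which single-qubit X errors the two [[4,2,2]] generators can tell apart:

```python
def commutes(a, b):
    """True if two Pauli strings commute: count the sites where both are
    non-identity and different (each such site contributes a sign flip)."""
    return sum(x != "I" and y != "I" and x != y for x, y in zip(a, b)) % 2 == 0

stabilizers = ["XXXX", "ZZZZ"]  # the [[4,2,2]] generators

for i in range(4):
    error = "".join("X" if j == i else "I" for j in range(4))
    syndrome = tuple(not commutes(error, s) for s in stabilizers)
    print(error, "-> syndrome", syndrome)

# Every single-qubit X error produces the identical syndrome (False, True).
# The code learns THAT something flipped, never WHICH qubit flipped, so no
# recovery operation can be chosen. Detection without location: a filter.
```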
Mid-circuit measurement. For error correction to function during computation, you need to measure stabilisers during the circuit, use those measurements to infer what went wrong, and apply a recovery operation before errors propagate. This is a correction cycle. The paper states explicitly: “Since we cannot perform mid-circuit measurements, stabilizer parity projections are implemented via postprocessing at the end of circuits in a destructive manner.” There is no correction cycle. The syndrome is read at the end, the run is accepted or rejected, and the accepted runs are reported. This is postselection. It is not error correction.
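The structural difference between a correction cycle and end-of-circuit postselection can be sketched with a classical toy. A three-bit repetition code, not a quantum simulation; only the shape of the loop matters:

```python
import random

def noisy_step(bits, p):
    # Each bit flips independently with probability p per time step.
    return [b ^ (random.random() < p) for b in bits]

def with_mid_circuit_correction(cycles, p):
    bits = [0, 0, 0]
    for _ in range(cycles):
        bits = noisy_step(bits, p)
        majority = int(sum(bits) >= 2)   # read the syndrome NOW
        bits = [majority] * 3            # and repair NOW, mid-computation
    return bits == [0, 0, 0]

def with_end_of_circuit_postselection(cycles, p):
    bits = [0, 0, 0]
    for _ in range(cycles):
        bits = noisy_step(bits, p)       # errors accumulate unchecked
    if len(set(bits)) > 1:
        return None                      # detected at the end: run discarded
    return bits == [0, 0, 0]             # undetected flips slip through

random.seed(0)
trials = 10_000
ok = sum(with_mid_circuit_correction(20, 0.05) for _ in range(trials))
runs = [with_end_of_circuit_postselection(20, 0.05) for _ in range(trials)]
kept = [r for r in runs if r is not None]
print(f"corrected: {ok/trials:.1%} succeed; "
      f"postselected: only {len(kept)/trials:.1%} of runs even kept")
# With correction, errors are repaired before they compound. With
# postselection, the circuit must simply get lucky, and the unlucky
# majority of runs is thrown away after the fact.
```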
The chemistry result. The variational quantum eigensolver demonstration uses three separate classical error mitigation techniques layered on top of the quantum output: parity checks to enforce the code space, Clifford data regression to learn a correction function from classically simulable circuits and apply it to non-Clifford results, and symmetry verification to project the output density matrix onto a symmetry-constrained subspace. Remove these three mitigation layers and the quantum result does not match theory. The quantum computation alone is insufficient. It requires classical postprocessing scaffolding to produce useful output.
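Of the three layers, Clifford data regression deserves a sketch, because it makes the direction of information flow explicit: the correction applied to the quantum output is learned entirely from circuits a classical computer can already simulate. The numbers below are invented for illustration:

```python
import numpy as np

# Schematic Clifford data regression (illustrative, invented numbers).
# Training circuits are near-Clifford, so their ideal expectation values
# are classically computable; the hardware's noisy values are measured.
noisy_train = np.array([0.31, 0.48, 0.62, 0.75, 0.88])
ideal_train = np.array([0.40, 0.60, 0.78, 0.92, 1.00])

# Fit a linear map noisy -> ideal on the classically simulable circuits.
a, b = np.polyfit(noisy_train, ideal_train, 1)

# Apply that classically learned map to the non-Clifford target result.
noisy_target = 0.55
corrected = a * noisy_target + b
print(f"raw {noisy_target:.2f} -> mitigated {corrected:.2f}")

# The "quantum" answer reported at the end is the raw hardware number
# passed through a function fitted entirely on classical simulations.
```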
The paper honestly characterises what it is. The conclusion calls the result “a key step towards scalable, fault-tolerant quantum computation in silicon spin qubits.” The authors are not lying. A key step is what they have. But a key step announced while hundreds of billions of dollars flow into the field on the premise that the destination is near is not honest communication. It is cover for a continuing fraud.
The Thirty-Year Sequence
The Zhang et al. paper is one entry in an unbroken sequence. Each entry follows the same pattern: real experiment, honest scientific paper with accurate caveats, fraudulent communication to the public and to investors. Let us go through the record.
1994. Peter Shor publishes the polynomial-time quantum algorithm for integer factorisation. The algorithm would break RSA encryption if run on a sufficiently large, sufficiently accurate quantum computer. No such computer exists. The paper launches the modern quantum computing industry. The gap between “an algorithm that would work on a computer that doesn’t exist” and “a technology that will be built” is assumed rather than demonstrated.
1998. Two-qubit logic gates are demonstrated in nuclear magnetic resonance. Headlines declare quantum computing is on its way. The NMR qubits are thermal states with tiny polarisation — not pure quantum states in any operationally useful sense for scaling. The result is real physics. The extrapolation to scalable quantum computing is not justified.
2001. A seven-qubit NMR system runs Shor’s algorithm and factors 15. IBM holds a press conference. The problem: NMR pseudo-pure states are not genuinely quantum in the required sense, the result can be reproduced classically, and factoring 15 requires no quantum advantage whatsoever. Prominent physicists publicly question whether the experiment demonstrates anything relevant to quantum computing. The press does not report the dispute. The investment accelerates.
2007. D-Wave Systems announces a 16-qubit quantum computer. The company raises substantial investment. Years of scientific argument follow about whether the device performs any quantum computation at all. A 2014 paper in Science finds no evidence of quantum speedup on the problems tested. D-Wave continues to sell machines throughout.
2011. D-Wave sells its first system to Lockheed Martin for approximately ten million dollars. The question of what the machine actually does is still unresolved among physicists. The sale price is not similarly uncertain.
2019. Google publishes a Nature paper claiming quantum supremacy: their 53-qubit Sycamore processor performed a specific sampling task that Google claims would take the best classical supercomputer 10,000 years. IBM responds within days showing the task could be performed classically in 2.5 days using a different algorithm. The task is specifically chosen to be hard for classical computers and useless for any real application. The word supremacy enters the permanent public record of quantum computing. The asterisks do not.
2021. Multiple groups report two-qubit gate fidelities above the surface code fault-tolerance threshold of approximately 99%. This is reported as a major step toward logical qubits. The threshold applies to individual gates. Running a useful quantum circuit requires thousands to millions of gates. The per-gate threshold and the circuit-level requirement are separated by orders of magnitude of engineering that does not exist.
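The arithmetic behind that separation fits in a few lines:

```python
import math

p = 1e-3  # per-gate error at a headline "99.9% fidelity"
for n_gates in (1_000, 1_000_000):
    log10_success = n_gates * math.log10(1 - p)
    print(f"{n_gates:>9} gates: success probability ~ 10^{log10_success:.1f}")

# 1,000 gates: 10^-0.4, roughly 37%. 1,000,000 gates: 10^-434.5, i.e. never.
# A per-gate number above threshold says nothing about a circuit's survival.
```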
2022. Multiple groups publish results on small error-correcting and error-detecting codes. All results show logical layers performing worse than physical layers. All results are framed as milestones. None produce a logical qubit.
2023. Microsoft announces the creation of topological qubits. The announcement cites a paper submitted to Nature. That paper is subsequently found to contain data problems and is retracted. Microsoft resubmits a revised version. The press coverage of the retraction is a fraction of the press coverage of the original announcement. The topological qubit programme continues to receive substantial funding.
2023. Harvard and QuEra publish results on logical qubit operations using neutral atom arrays. The results use the [[4,2,2]] code and similar small codes with postselection. Logical gate fidelities are lower than physical gate fidelities. The result is framed as demonstrating a logical quantum processor.
2025. Google publishes a Nature paper reporting that their surface code logical qubit achieves a lower error rate than the physical qubits comprising it, in a memory experiment using a distance-5 surface code with 49 physical qubits. This is the closest the field has come. It is a memory result, not a gate result. The logical qubit holds a state with lower error than the physical qubits. Running a logical gate on that qubit while maintaining the error advantage is a different and harder problem. The distance-5 surface code requires 49 physical qubits per logical qubit. Running Shor’s algorithm at useful scale requires thousands to tens of thousands of physical qubits per logical qubit at error rates several orders of magnitude better than demonstrated. The paper is accurate. The headlines say “logical qubit achieved.” They do not say “memory-only, 49 physical qubits per logical qubit, error rates still orders of magnitude from useful computation.”
2026. Zhang et al. silicon paper. Documented above in full.
At no point in this sequence has a logical qubit been built. At every point in this sequence, the communications infrastructure around the research implied one had been or was about to be. Hundreds of billions of dollars followed those implications.
The Mechanism: How the Fraud Is Sustained
The quantum computing fraud does not require any individual to consciously lie. It requires only that every participant in the chain act in their own interest, and that acting in their own interest means not loudly correcting the record.
The researcher publishes an honest paper with accurate caveats. The researcher benefits from funding generated by the excited public narrative. Loudly correcting the press coverage would reduce that funding. The researcher stays quiet.
The university press office issues a release stripping the caveats. The press office exists to generate positive coverage. Accurate caveats generate less coverage than breakthrough claims. The press office strips the caveats.
The journalist reproduces the press release. The journalist has no background to evaluate the technical claims and a deadline to meet. The breakthrough framing gets more clicks than the accurate framing. The journalist uses the breakthrough framing.
The investor reads the coverage. The investor wants exposure to the quantum computing sector. The coverage confirms the thesis. The investor invests.
The government programme officer needs to justify the programme budget. The milestone coverage justifies continuing and expanding the programme. The programme officer cites the milestone.
Nobody in this chain is necessarily lying in a legally actionable sense at any individual step. The output of this chain is a multi-hundred-billion-dollar investment in a product that does not exist, sustained by the repeated announcement of milestones that are not milestones, toward a destination that has not moved closer in any measurable way commensurate with the capital deployed.
That is fraud. The diffusion of responsibility across the chain does not change what the outcome is.
The Capital Destruction
IBM has spent billions of dollars building superconducting quantum processors with qubit counts in the hundreds to thousands. Their roadmap calls for a million physical qubits. A million physical qubits at current error rates would not produce a single useful logical qubit. The error rates need to improve by one to two orders of magnitude first. IBM knows this.
PsiQuantum has raised over a billion dollars, including substantial government investment in Australia and the United Kingdom, to build a photonic quantum computer. The architecture requires manufacturing photonic components to tolerances, and at a scale, that do not currently exist. The company has no working quantum processor. The money is real.
Microsoft spent approximately fifteen years and enormous sums on topological qubits — a research programme premised on a type of quasiparticle whose controlled creation and manipulation in a qubit context has not been demonstrated. The 2023 retracted paper was the programme’s flagship result. The investment continues.
IonQ went public via SPAC at a valuation of approximately two billion dollars. Rigetti went public similarly. Both companies have physical qubit processors with gate fidelities that are real but nowhere near the requirements for fault-tolerant computation. Both are publicly traded. Both have market valuations premised on a technology trajectory the physics does not support.
Government programmes globally have committed figures in the range of fifty to a hundred billion dollars across national quantum initiatives over the decade to 2025. The United States National Quantum Initiative, the European Quantum Flagship, the UK National Quantum Technologies Programme, and China’s national quantum programme are all expanding on the same underlying premise: that fault-tolerant quantum computers are coming within a relevant horizon.
None of this capital is flowing toward a demonstrated technology. All of it is flowing toward a technology that requires solving multiple fundamental engineering problems simultaneously, none of which have been solved, all of which have been announced as nearly solved many times before.
Why the Numbers Are Worse Than They Look
The gap between where quantum computing is and where it needs to be is not a linear gap that shrinks steadily with investment. It is a gap of multiple simultaneous requirements that must all be met at once.
Fault-tolerant quantum computation at the scale required to do anything a classical computer cannot do requires: physical two-qubit gate error rates below approximately 0.01% — current best demonstrated is around 0.1%, one order of magnitude away; qubit coherence times long enough to complete error correction cycles faster than errors accumulate at scale; classical syndrome processing fast enough to keep up with qubit decoherence in real time, which current classical electronics running syndrome decoders do not achieve at the required speeds; physical qubit counts in the millions for a single useful logical algorithm, against a current state of a few thousand mostly at insufficient error rates; and all of the above operating together in a single integrated system, not demonstrated separately in separate laboratories on separate components.
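The back-of-envelope behind “millions” is short. The constants below are illustrative and the logical-qubit count is a rough figure; full published factoring estimates, with magic-state factories and routing included, come out roughly an order of magnitude higher still:

```python
# Back-of-envelope (illustrative constants, same scaling law as sketched above).
p, p_th, A = 1e-3, 1e-2, 0.1
target = 1e-12          # rough per-operation logical error budget at Shor scale

d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2              # surface-code distances step through odd numbers

physical_per_logical = 2 * d**2 - 1   # rotated surface code, data + ancilla
logical_qubits = 4_000                # rough order for factoring-scale algorithms

print(f"distance {d}, {physical_per_logical} physical per logical, "
      f"{logical_qubits * physical_per_logical:,} physical qubits total")
# ~distance 21, ~881 physical per logical, ~3.5 million qubits, before
# magic-state factories and routing overhead push it higher. Against a
# current state of a few thousand physical qubits at insufficient error rates.
```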
Every one of these requirements is an open engineering problem. They must be solved simultaneously. Solving four out of five does not give you a logical qubit. It gives you a demonstration of four solved problems and one remaining barrier.
The thirty-year record shows that each solved problem reveals the depth of the next one. Gate fidelities improved, and the measurement speed problem became apparent. Qubit counts grew, and the cross-talk and uniformity problems became apparent. Error-detecting codes were demonstrated, and the mid-circuit measurement problem became apparent. The field keeps announcing that it has climbed the last hill. The view from each summit reveals more hills.
The One Test
A logical qubit exists when, and only when, every one of the following is true simultaneously:
The logical error rate is lower than the physical error rate. The logical coherence time exceeds the physical coherence time. The logical gate fidelity exceeds the physical gate fidelity. Active mid-circuit error correction is performed: real-time detection and correction of errors during computation, not postselection, not end-of-circuit syndrome extraction, not classical filtering. The logical performance improves as code distance increases. These properties are demonstrated in gates, not just in memory.
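The test can even be stated as code, which has the virtue of admitting no press-release ambiguity. The numbers plugged in are the Zhang et al. figures documented above:

```python
def is_logical_qubit(m):
    """Every condition must hold simultaneously. Any single False means
    no logical qubit, whatever the press release says."""
    return all([
        m["logical_error_rate"] < m["physical_error_rate"],
        m["logical_coherence_us"] > m["physical_coherence_us"],
        m["logical_gate_fidelity"] > m["physical_gate_fidelity"],
        m["mid_circuit_correction"],     # real-time, not postselection
        m["improves_with_distance"],     # more qubits, lower logical error
        m["demonstrated_in_gates"],      # computation, not just memory
    ])

zhang_2026 = {
    "logical_error_rate": 0.14,  "physical_error_rate": 0.05,  # ~86% vs >95%
    "logical_coherence_us": 208, "physical_coherence_us": 523,
    "logical_gate_fidelity": 0.86, "physical_gate_fidelity": 0.95,
    "mid_circuit_correction": False,   # postprocessed, destructive readout
    "improves_with_distance": False,   # a single fixed [[4,2,2]] block
    "demonstrated_in_gates": False,    # gates shown, but worse than physical
}
print(is_logical_qubit(zhang_2026))  # False, on all six conditions
```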
Every result in the thirty-year history of quantum computing fails at least one of these conditions. Most fail all of them. Zhang et al. fail all of them. Google’s 2025 surface code result comes closest and fails at gates, at scale, and at every metric except memory error rate in a single specific configuration.
When a logical qubit is built, it will not need a press release. The numbers will be self-evident. No mitigation layers. No postselection. No caveats about key steps. The logical layer will outperform the physical layer, measurably, reproducibly, in computation not just storage.
That result does not exist. It has never existed. The entire industry built on its anticipated existence is built on a foundation that is not there.
Conclusion
Hundreds of billions of dollars. Thirty years. Dozens of companies. National programmes across every major economy. Thousands of researchers. Hundreds of papers in the world’s top journals.
No logical qubit.
Not a flawed one. Not a partial one. Not one that almost works. None.
The Zhang et al. paper — https://www.nature.com/articles/s41565-026-02140-1 — is the most recent entry in the sequence. Its numbers are unambiguous. Logical coherence worse than physical coherence. Logical gates worse than physical gates. No mid-circuit correction. Detection code, not correction code. Three mitigation layers on the chemistry result. Postselection throughout.
The paper calls itself a key step. The field calls it a milestone. The investors call it validation. The governments call it progress.
It is a demonstration that encoding is hard, published in a prestigious journal, issued with a press release that removes the caveats, and added to a pile of identical results stretching back three decades. The pile is enormous. The logical qubit is not in it.
The industry selling the logical qubit knows it is not in the pile. It has never been in the pile. It keeps collecting money to find it.
That is the story. The rest is fraud.