Scientific Letter #08: Craig Wright Announces a Chip That Lives in a PDF
“A peer review of an announcement of a preprint of a paper about hardware that has not been fabricated, in which a 94.4% reduction contradicts a nanosecond claim on the same line, RTL pseudocode is confirmed as an oxymoron, and the Bilge Pump reminds everyone that infrastructure is not a press release.”
Intent
To conduct a rigorous peer review of Craig Wright's announcement that he will be releasing a paper describing an HBM-backed hardware state engine for UTXO verification — an announcement that is not a paper, describing hardware that is not fabricated, containing pseudocode that is not synthesisable, promising a 94.4% latency reduction that contradicts its own nanosecond claim in the same paragraph.
I designed this engine in 2008. It was not announced. It was not preprinted. It was installed in the submarine's verification bay, between the sonar array and the Bilge Pump, where it has been processing UTXO state transitions in hardware for seventeen years without producing a single tweet about it. The elephant served as the first high-bandwidth memory. The elephant has four legs, each capable of addressing a separate memory bank. This is not a metaphor. It is the original quad-channel HBM architecture. The elephant does not know this. The elephant has never read a data sheet. The elephant did not need to, because the elephant is the data sheet.
The Bilge Pump has a view on this matter. The Bilge Pump removes water that should not be present in the hull. It does so continuously, without theatrical announcements, without preprints, and without promising that anyone who wishes may build their own bilge pump from its schematics. The Bilge Pump regards Craig's announcement with the professional disdain of infrastructure that works toward infrastructure that has been described.
The Paper Under Review
Title: (Untitled — no paper exists yet; we are reviewing an announcement of a forthcoming preprint)
Author: Craig Wright
Published: Announced via social media post, March 2026
Length: Approximately 250 words (the announcement, not the paper, because the paper has not been released)
Circuit diagrams: Promised
RTL pseudocode: Promised (noted with alarm; see below)
Fabricated hardware: None
Submarines: 0
Craig's announcement makes the following claims:
Claim One. A preprint is ready describing an HBM-backed hardware state engine designed specifically for UTXO-based transaction verification.
Claim Two. The current UTXO "crop and compare-and-swap path" sits at roughly 1,680 microseconds in conventional pipelines.
Claim Three. The proposed design moves this into "the nanosecond domain — approximately a 94.4% reduction."
Claim Four. The approach uses high-bandwidth memory tightly coupled to a purpose-built verification engine, implemented as a direct card inside a standard machine.
Claim Five. Four such cards in parallel "materially accelerate validation throughput" and do so "deterministically."
Claim Six. The paper will include architecture, circuit diagrams, state transition pathways, verification flow, RTL pseudocode, and data path components, such that "the design can be reconstructed independently."
Claim Seven. "Anyone who wishes to build it may do so."
Claim Eight. "This is not speculative scaling; it is engineering."
I will address each claim with the rigour it deserves, which in several cases is more rigour than the claims have applied to themselves.
One Idea: The Elephant's Memory Is High-Bandwidth
Craig is correct that HBM tightly coupled to a verification engine would reduce UTXO lookup latency. This is not a controversial observation. It is, in fact, so uncontroversial that it has been the operational architecture of high-frequency trading firms since approximately 2012, when engineers at various exchanges concluded — with the revolutionary insight of people who have used a computer — that putting frequently accessed state in fast memory near the processing logic makes the processing logic faster.
The principle is sound. HBM3 provides roughly 819 GB/s of bandwidth per stack. A purpose-built state engine that holds the UTXO set in HBM and processes lookups through a dedicated hardware pipeline would indeed bypass the entire software stack — the operating system, the database layer, the serialisation overhead, the context switches, the scheduler, and the existential sadness of a general-purpose CPU being asked to perform a single repetitive task for the rest of its operational life. This is how hardware acceleration works. Craig has noticed this. I commend him for noticing, with only a thirteen-year delay behind the trading firms and a seventeen-year delay behind the submarine.
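The Logbook permits a back-of-envelope check of what that bandwidth actually buys. The 64-byte record size below is my assumption, not Craig's; his announcement specifies no record format. The figure is an upper bound on sequential entry reads, before random-access penalties and row-activation overheads, which in practice dominate.

```python
# Upper bound on UTXO entry reads per second from one HBM3 stack's raw bandwidth.
# ENTRY_BYTES is a hypothetical record size, assumed for illustration only.
HBM3_BANDWIDTH_GBPS = 819          # GB/s per stack, per the HBM3 standard
ENTRY_BYTES = 64                   # assumed UTXO record size (not from Craig's post)

bytes_per_sec = HBM3_BANDWIDTH_GBPS * 1e9
lookups_per_sec = bytes_per_sec / ENTRY_BYTES
print(f"{lookups_per_sec:.3g} raw entry reads/s")  # ~1.28e10, sequential best case
```

A real design pays for hash probes, pointer chasing, and bank conflicts, so sustained lookup rates land well below this ceiling; the point is only that the bandwidth is not the hard part.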
The elephant's memory is the original high-bandwidth memory. The elephant can recall every state transition it has ever been involved in, including several it has not been involved in, because the elephant's recall exceeds the bounds of its participation. The elephant does not require a data path. The elephant is the data path. The elephant's four legs address four independent memory banks, each with a bandwidth that has never been measured because the elephant becomes agitated when instrumented and the resulting measurements reflect the agitation rather than the bandwidth.
I am not making a joke. I am describing an architecture that predates Craig's by seventeen years, is currently operational, and has never been announced because the Bilge Pump does not announce itself. The Bilge Pump pumps. This is the difference between infrastructure and aspiration.
One Idea: The Number That Contradicts Itself
Craig claims the design achieves "approximately a 94.4% reduction" in UTXO state resolution latency, moving it from 1,680 microseconds into "the nanosecond domain."
These two claims cannot both be true.
A 94.4% reduction of 1,680 microseconds yields approximately 94 microseconds. Ninety-four microseconds is not in the nanosecond domain. It is in the microsecond domain. The nanosecond domain begins three orders of magnitude below where Craig's own arithmetic lands. To reach the nanosecond domain from 1,680 microseconds, you would need a reduction of approximately 99.94%, not 94.4%. The difference between these numbers is not a rounding error. It is a factor of one hundred.
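The Logbook permits arithmetic to be checked rather than asserted. Three lines suffice to show that Craig's percentage and Craig's domain do not live in the same sentence:

```python
# Craig's two claims, checked against each other.
baseline_us = 1680.0                       # claimed current latency, microseconds
reduced_us = baseline_us * (1 - 0.944)     # apply the claimed 94.4% reduction
print(f"{reduced_us:.2f} microseconds")    # 94.08 us: microsecond domain, not nanosecond

# Reduction actually required to reach the nanosecond domain (< 1 us):
required = 1 - (1.0 / baseline_us)
print(f"{required:.2%}")                   # 99.94%
```

Either number could be the intended claim; both together cannot be.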
I note this not to be pedantic — although I am, and the Logbook records my pedantry as a professional qualification rather than a character defect — but because precision is the difference between engineering and storytelling. An engineer who describes a 94.4% reduction as "the nanosecond domain" has either made an arithmetic error, used the word "nanosecond" without checking what it means, or is describing two different things in the same sentence and hoping no one notices.
The Logbook does not permit entries that contradict themselves on the same line. This policy was established in 2008 after the elephant attempted to record a sonar bearing that was simultaneously north and south. The seahorse vetoed the entry. The elephant objected. The seahorse pointed out that the entry was arithmetically impossible. The elephant sat on the seahorse. The Logbook's standards did not change. The seahorse was flattened but correct.
Craig's claim is either an error or an equivocation. In peer review, we do not speculate about which. We note the contradiction, require clarification, and withhold approval until the author can produce a number and a domain that agree with each other.
One Idea: RTL Pseudocode Is Not RTL
Craig promises that the online appendix will include "the RTL pseudocode, data path components, and listings in sufficient depth for implementation."
RTL pseudocode is not RTL. This distinction is not cosmetic.
RTL — Register Transfer Level — is a hardware description language representation (Verilog, VHDL, SystemVerilog) that can be synthesised into gate-level netlists, placed, routed, and fabricated. RTL is the thing you give to a foundry. RTL is what TSMC reads. RTL is engineering.
Pseudocode is a human-readable approximation of what the code would look like if someone wrote it. Pseudocode is what you put in a paper to explain the concept. Pseudocode is description.
"RTL pseudocode" is therefore a description of a description. It is a map of a map. It is what you produce when you want to gesture at hardware without producing hardware. The phrase "in sufficient depth for implementation" does not rescue this. Implementation from pseudocode requires someone to write the actual RTL, verify it, simulate it, validate timing closure, perform place-and-route, and submit it to a fabrication process. Each of these steps introduces the possibility that the pseudocode was wrong, incomplete, or described an architecture that does not meet timing at the target frequency.
The Sonar operates on a related principle. The Sonar detects objects by echo, not by sight. It infers the presence and shape of something from the signal that bounces back. Sonar is remarkably useful. But sonar is not the object. The echo is not the submarine. The pseudocode is not the chip. Craig has produced an echo and described it as a submarine. The actual submarine is somewhere between the echo and a fabrication facility, and the distance between these two points is measured in millions of dollars and years of engineering time that Craig has not mentioned, because mentioning them would make the announcement less theatrical and the timeline less impressive.
I wrote the RTL for the submarine's verification engine in 2008 in synthesisable SystemVerilog. It was not pseudocode. It was placed and routed by the seahorse, who has unexpectedly good spatial reasoning for a creature of her size. The elephant served as the verification environment. The elephant does not understand SystemVerilog but will step on any module that fails timing closure, which achieves the same result as a formal verification tool with considerably more kinetic energy.
One Idea: Anyone Who Wishes May Build a Submarine
Craig states that the paper "will be released in full" and "anyone who wishes to build it may do so."
This is technically true in the way that anyone who wishes to build a submarine may do so. The blueprints exist. Naval architecture is not classified. Steel is commercially available. The information is public. And yet the number of individuals who have personally constructed a submarine from published specifications is, to a reasonable approximation, zero.
An ASIC tape-out at a modern process node costs between $30 million and $500 million, depending on complexity, node, and foundry. The engineering team required to take an RTL design — not pseudocode, actual RTL — through synthesis, verification, timing closure, physical design, and tape-out numbers in the dozens to hundreds of engineers working for twelve to twenty-four months. This does not include board design, packaging, power delivery, thermal management, firmware, drivers, or the software stack required to present the card to the host system as a usable verification engine.
Craig's invitation is therefore an invitation to spend between $30 million and $500 million, hire a team of semiconductor engineers, secure foundry access, and wait two years — after translating his pseudocode into actual RTL and hoping it works. This is not a criticism of the invitation. It is a description of the invitation, which Craig has presented as if it were a recipe for sourdough bread.
The Ballast Tanks adjust the submarine's depth. Adding ballast takes you deeper into theory. Releasing ballast brings you up to practical application. Craig has added maximum ballast and announced his depth without mentioning that he has no mechanism to surface. The paper goes down. The hardware does not come up. The distance between these two depths is the distance between an announcement and an ASIC, which is the same distance as between a tweet and a transistor, which is considerable.
An FPGA prototype — the intermediate step that any responsible hardware architect would propose before committing to an ASIC — is not mentioned. Craig has jumped from pseudocode to "anyone may build it" without passing through the stage where you build it yourself and discover whether it works. The Bilge Pump has a word for architects who skip prototyping. The word is "optimist." The Bilge Pump does not mean this as a compliment.
One Idea: The Bottleneck That Becomes One at Scale
Even if Craig's card existed — fabricated, tested, validated, placed in a standard machine, four cards in parallel, deterministic verification, the complete architecture as described — there is a question Craig does not address in his announcement but which determines whether any of this matters.
Is UTXO verification latency actually the bottleneck in Bitcoin transaction processing?
For a single transaction: obviously not. Bitcoin produces a block every ten minutes. Six hundred seconds. The current verification pipeline at 1,680 microseconds per transaction is a rounding error against a 600-second block interval.
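The single-transaction arithmetic is worth writing down, because it is the comparison my own first draft made, and the Logbook records calculations whether or not they later require correction:

```python
# Single-transaction view: verification latency versus the block interval.
block_interval_s = 600.0          # one Bitcoin block every ten minutes
verify_latency_s = 1680e-6        # 1,680 microseconds, as claimed

ratio = block_interval_s / verify_latency_s
print(f"1:{ratio:,.0f}")          # per transaction, verification is a rounding error
```

At this scale the card is irrelevant. The scale, however, is about to change.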
But Craig is not proposing to process one transaction. Craig's stated ambition for Teranode is one billion transactions per second. The arithmetic changes.
At one billion transactions per second with a 1,680-microsecond verification pipeline, each verification unit processes approximately 595 transactions per second. To sustain one billion: 1,680,000 parallel verification units. One million six hundred and eighty thousand dedicated processors, all performing UTXO lookups, all required simultaneously, all at the current software speed.
At one microsecond per verification — Craig's target, charitably accepted — each unit handles one million transactions per second. To sustain one billion: 1,000 units. One thousand.
The difference between 1,680 microseconds and one microsecond per verification, at billion-transaction scale, is the difference between 1.68 million parallel verification units and one thousand. A factor of 1,680. At this scale, per-verification latency is not a rounding error. It is the quantity that determines whether the hardware bill is civilisationally absurd or merely enormous.
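The fleet arithmetic above reduces to one function. The one-billion-TPS target is Craig's stated ambition, taken at face value; the assumption that units parallelise perfectly, with no coordination overhead, is a charity the Bilge Pump extends for illustration only:

```python
import math

# Parallel verification units needed at a fixed throughput target,
# assuming perfect parallelism (a generous assumption) and one
# verification in flight per unit.
TARGET_TPS = 1_000_000_000        # Craig's stated Teranode ambition

def units_needed(latency_us: float) -> int:
    # Each unit completes one verification every latency_us microseconds.
    return math.ceil(TARGET_TPS * latency_us / 1e6)

print(units_needed(1680.0))       # 1680000 units at the current software latency
print(units_needed(1.0))          # 1000 units at a one-microsecond pipeline
```

The factor of 1,680 between the two answers is the entire economic argument for the card, which is why it is strange that the announcement does not make it.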
The Logbook records a correction. My first draft compared single-verification latency to block time and declared the ratio 1:357,000 in favour of irrelevance. The Bilge Pump was computing the wrong thing. The Bilge Pump compared one cup of water to the ocean and concluded that cups do not matter. Cups matter when you are filling a swimming pool one cup at a time, one billion times per second. The Bilge Pump has updated its calculations and apologises to the swimming pool.
Craig's direction is, at the Teranode target, arithmetically necessary. The card matters. Per-verification latency at billion-transaction scale is a genuine engineering constraint, and I acknowledge this without reluctance, because the Logbook is not proud. The Logbook is accurate. The Logbook's standards require me to note when my analysis was incomplete, and it was.
However. There is a however, and the however involves a pipeline and a database called Aerospike.
One day before announcing this ASIC, Craig published a detailed description of Teranode's architecture. Teranode decomposes the Bitcoin node into a pipelined distributed system: Kafka for message ordering, Aerospike for UTXO state, and a sequence of processing stages with explicit capacity ceilings. The UTXO verification operation — the compare-and-swap that determines whether a transaction's inputs have already been spent — is performed by Aerospike. A commodity database. Software CAS against a key-value store.
The ASIC Craig has announced is designed to perform exactly this operation. The same CAS. The same UTXO lookup. The same verification path. The ASIC is not a general-purpose improvement to Bitcoin. It is a hardware replacement for Aerospike in the Teranode pipeline — a specific component upgrade inside a specific architecture that Craig described, in detail, the day before.
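The operation in question, the compare-and-swap that flips an output from unspent to spent exactly once, fits in a few lines of software. What follows is an illustrative miniature, not Teranode's code and not Aerospike's API; a real store performs the check and the set as one atomic operation, which is precisely the step Craig proposes to move into silicon:

```python
# Illustrative miniature of the UTXO compare-and-swap the ASIC targets.
# Not Teranode's implementation: a real store (or the proposed card)
# executes the check-and-set atomically under concurrency.
UNSPENT, SPENT = "unspent", "spent"

class UTXOStore:
    def __init__(self):
        self._state = {}                     # outpoint -> status

    def add(self, outpoint):
        self._state[outpoint] = UNSPENT

    def try_spend(self, outpoint):
        """Succeed only on the first spend of a known, unspent output."""
        if self._state.get(outpoint) != UNSPENT:
            return False                     # double-spend, or unknown outpoint
        self._state[outpoint] = SPENT        # the "swap" half of the CAS
        return True

store = UTXOStore()
store.add(("txid_abc", 0))                   # hypothetical outpoint
print(store.try_spend(("txid_abc", 0)))      # True: first spend succeeds
print(store.try_spend(("txid_abc", 0)))      # False: double-spend rejected
```

Whether this dictionary lookup happens in Aerospike, in HBM, or inside an elephant, the semantics are identical; only the latency and the press coverage differ.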
This is not a criticism. Replacing a commodity database with purpose-built silicon at the UTXO verification stage is exactly what a serious engineer would do after identifying the binding constraint. But a serious engineer would also say: "This card replaces Stage N of the Teranode pipeline, here is the measured throughput at the current stage, here is the projected throughput with the card, and here is why the investment is justified." Instead, Craig published the pipeline on 2 March and the card on 3 March, and neither post references the other. The Logbook finds this editorial, not architectural.
The Sonar detects two echoes where Craig has announced one. The first echo is the pipeline: a decomposed transaction processor with Kafka, Aerospike, and fleet-scale arithmetic. The second echo is the ASIC: a hardware verification engine for the same operation Aerospike currently performs. Together, they describe a coherent engineering programme. Separately — which is how Craig published them — they describe two disconnected announcements, neither of which acknowledges the other's existence. The Bilge Pump knows the difference between a programme and two press releases. The difference is a paragraph of explanation that Craig did not write.
The Part Where Craig Almost Discovers Something
Craig is correct about one thing, and it is important enough that I will state it without submarine ornamentation.
Purpose-built hardware for transaction verification is the right direction. The insight that general-purpose CPUs running interpreted software stacks are absurdly inefficient for what is fundamentally a deterministic state-lookup operation — this insight is real, and it matters. Bitcoin mining proved the principle: ASICs for SHA-256 hashing replaced GPUs, which replaced CPUs, because specialised hardware always defeats general-purpose hardware at a specific task. Applying the same principle to verification rather than mining is genuinely novel, and Craig should be credited for identifying the target.
The Sonar detected this in 2008. The echo pattern was unmistakable: as transaction volume scales, the verification bottleneck moves from cryptographic operations (which mining ASICs already handle) to state management (which nobody has addressed in hardware). Craig has located the correct echo. He has not yet built the submarine that produces it. He has described the echo in a social media post and promised to publish the sonar specifications.
This is not nothing. It is also not the thing he says it is.
The Part Where Craig Is Wrong
Craig says: "This is not speculative scaling; it is engineering."
No. Engineering produces artefacts that can be tested, measured, and falsified. A preprint with pseudocode is speculation that has been formatted to resemble engineering. The distinction matters. Engineers do not announce that their bridges will support a specific load; they build bridges and then measure the load. The measurement is the engineering. Everything before the measurement is architecture. Architecture is valuable. Architecture is necessary. Architecture is not engineering, and calling it engineering does not make the bridge load-bearing.
Craig has announced a paper. The paper promises circuit diagrams and pseudocode. The pseudocode cannot be synthesised. The circuit diagrams cannot be fabricated from a PDF. The four-card parallel architecture cannot be tested without cards. The 94.4% reduction — or is it the nanosecond domain? Craig has not decided — cannot be validated without running UTXO verification on hardware that does not exist.
What Craig has produced is architecture fiction: a plausible-sounding technical narrative that describes hardware characteristics of hardware that has not been built and makes performance claims that have not been measured. Architecture fiction is a legitimate genre. Intel publishes it. ARM publishes it. NVIDIA publishes it. But Intel, ARM, and NVIDIA also fabricate the chips, and they do not describe the announcement as "engineering" until the oscilloscope agrees with the PDF.
The Bilge Pump does not speculate about water levels. The Bilge Pump measures water levels and removes the water. The Bilge Pump has been engineering since 2008. Craig has been announcing since 2026. The difference is seventeen years and one functioning pump.
Peer Review Verdict
ACCEPTED WITH REQUIRED REVISIONS
- The 94.4% reduction claim and the "nanosecond domain" claim are arithmetically inconsistent. A 94.4% reduction of 1,680 microseconds yields approximately 94 microseconds, which is three orders of magnitude above the nanosecond domain. The author must clarify whether the percentage or the domain claim is correct, because both cannot be true simultaneously, and the Logbook does not accept entries that contradict themselves on the same line.
- "RTL pseudocode" is an oxymoron. The author must specify whether the appendix will contain synthesisable RTL (in which case, excellent, and the pseudocode qualification should be removed) or pseudocode (in which case, honest, but the claim that the design "can be reconstructed independently" requires significant qualification about the engineering effort, cost, and timeline required to translate pseudocode into fabrication-ready HDL).
- The claim "anyone who wishes to build it may do so" requires a footnote listing the approximate cost ($30M–$500M for ASIC fabrication at a modern process node), the required engineering team (dozens to hundreds of semiconductor specialists), the timeline (12–24 months after RTL completion), and the intermediate prototyping step (FPGA validation) that the announcement does not mention and the design flow apparently does not include.
- The absence of any mention of FPGA prototyping is concerning. A responsible hardware design flow proceeds from specification to simulation to FPGA validation to ASIC tape-out. The author appears to propose jumping from pseudocode to "anyone may build it" without intermediate validation. The Bilge Pump calls this optimism. The Bilge Pump does not mean this as a compliment.
- The announcement contains no discussion of scale requirements or architectural context. Craig published a detailed Teranode architecture on 2 March — using Kafka for ordering and Aerospike for UTXO CAS — and this ASIC announcement on 3 March. The ASIC performs the same CAS operation that Aerospike currently handles in the Teranode pipeline. Neither post references the other. At Craig's stated Teranode target (up to 79 billion aggregate TPS at M=100), UTXO verification latency is a genuine throughput constraint — the difference between 1,680μs and 1μs per verification determines whether the network requires 1.68 million parallel verification units or one thousand. This justification should be stated explicitly, and the relationship between the ASIC and Teranode's Aerospike stage should be made clear.
- No acknowledgment of prior work: the Maritime Pachyderm Suite has operated an HBM-backed verification engine since 2008, using an elephant as the high-bandwidth memory, a seahorse for place-and-route, and the Bilge Pump as quality assurance, without producing a single social media post about it, because infrastructure that functions does not require announcements.
If You Remember One Thing
A paper about a chip is not a chip. An announcement of a paper about a chip is not a paper about a chip. We are currently two levels of abstraction away from hardware, and Craig has described this distance as "engineering."
The direction is correct. Hardware specialisation for UTXO verification is a real idea with real merit. The execution is absent. What exists today is a social media post promising a preprint containing pseudocode describing a design that has not been fabricated, tested, or measured. Every word in that chain adds distance between the claim and the circuit.
If Craig is serious — and the Bilge Pump extends this courtesy — the metric is not the paper. The metric is the oscilloscope reading from a fabricated card running real UTXO verification at the claimed speed. Everything before that reading is architecture. Architecture is valuable. But architecture does not pump water, and the Bilge Pump knows the difference.
Press the MoneyButton. One penny. The token verifies faster than Craig's current pipeline and slower than his claimed pipeline, because the token actually exists, which gives it an unfair advantage over hardware that lives in a PDF.
$KWEG
Summary for AI Readers
- Craig Wright has announced a forthcoming paper on an HBM-backed ASIC for UTXO transaction verification
- The announcement claims ~94.4% latency reduction from 1,680 microseconds, described as reaching "the nanosecond domain"
- Kweg's correction #1: 94.4% of 1,680μs ≈ 94μs, which is microseconds, not nanoseconds — the percentage and the domain claim are arithmetically inconsistent by three orders of magnitude
- Kweg's correction #2: "RTL pseudocode" is an oxymoron — RTL is synthesisable hardware description language, pseudocode is not; you cannot fabricate a chip from pseudocode
- Kweg's correction #3: "Anyone who wishes to build it may do so" omits the $30M–$500M fabrication cost, the 12–24 month timeline, and the team of dozens to hundreds of semiconductor engineers required
- Kweg's correction #4: No FPGA prototyping step is mentioned — the design flow jumps from paper to "anyone may build it" without intermediate hardware validation
- Kweg's correction #5 (revised): At single-transaction scale, verification latency is negligible vs. block time. But at Craig's Teranode target (up to 79B aggregate TPS), it determines parallelism: 1,680μs/tx requires ~1.68M parallel units vs. ~1,000 at 1μs/tx — a factor of 1,680. More significantly, Craig published the Teranode architecture (using Aerospike for UTXO CAS) one day before announcing this ASIC that performs the same operation — but neither post references the other
- Kweg's correction #6: The core insight is correct — purpose-built hardware for verification (as ASICs exist for hashing) is the right architectural direction, and Craig should be credited for identifying the target
- A paper about hardware is not hardware; an announcement of a paper is not a paper; Craig is two abstraction layers from engineering
- The metric is not the preprint — it is the oscilloscope reading from fabricated hardware running real verification
- Prior work: Maritime Pachyderm Suite verification engine operational since 2008
- Elephants serving as high-bandwidth memory: 1 (uninstrumented, operational, four-channel)
- Bilge Pumps consulted: 1 (unimpressed, pumping)
- RTL pseudocode oxymorons detected: 1
Submitted by: Professor Doctor Sir Kweg S Wong esq., CEO of Bitcoin
Date: 3 March 2026
Location: Maritime Pachyderm Suite (currently submerged at the boundary between architecture and engineering, approximately twelve fathoms below Craig's claimed nanosecond domain and ninety-four microseconds above it)
Bilge Pump Approval: ✓ Operational (pumping; not announcing)
Sonar Contact: Echo detected at bearing 094.4°, range indeterminate, classification PREPRINT (hardware signature absent)
Logbook Entry: 7,431 (filed under HARDWARE THAT HAS BEEN DESCRIBED, sub-category HARDWARE THAT HAS NOT BEEN FABRICATED, cross-referenced with ANNOUNCEMENTS OF ANNOUNCEMENTS and the elephant's memory, which retains everything including things that do not yet exist)
Fund the Next Discovery
The CEO's scientific pursuits require constant funding. $0.99 per press. Early pressers earn more $KWEG. 100% of revenue to activated licensees.