Author: Justin Thaler
Compiled by: Plain Talk Blockchain
Original Title: How Big Is the Current Threat of Quantum Computing to Blockchain?
Timelines for cryptographically relevant quantum computers are often exaggerated—leading to calls for an urgent, comprehensive transition to post-quantum cryptography.
But these calls often overlook the costs and risks of premature migration and ignore the vastly different risk profiles among different cryptographic primitives:
Post-quantum encryption, despite its costs, demands immediate deployment: "Harvest-Now-Decrypt-Later" (HNDL) attacks are already underway, because sensitive data encrypted today will still be valuable when quantum computers arrive, even if that is decades from now. The performance overhead and implementation risks of post-quantum encryption are real, but HNDL attacks leave data requiring long-term confidentiality with no choice.
Post-quantum signatures face different considerations. They are not vulnerable to HNDL attacks, and their costs and risks (larger sizes, performance overhead, implementation immaturity, and errors) call for deliberation rather than immediate migration.
These distinctions are crucial. Misunderstanding them distorts cost-benefit analyses, leading teams to overlook more prominent security risks—such as bugs.
The real challenge of a successful transition to post-quantum cryptography lies in matching urgency to the actual threat. Below, I will clarify common misconceptions about the quantum threat to cryptography—covering encryption, signatures, and zero-knowledge proofs—with a particular focus on their impact on blockchain.
How Far Along Is Our Timeline?
Despite high-profile claims, the likelihood of a cryptographically relevant quantum computer (CRQC) emerging in the 2020s is extremely low.
By "cryptographically relevant quantum computer," I mean a fault-tolerant, error-corrected quantum computer capable of running Shor's algorithm at sufficient scale to crack {secp}256{k}1 or RSA-2048 within a reasonable timeframe (e.g., within at most a month of sustained computation).
By any reasonable reading of public milestones and resource estimates, we are far from a cryptographically relevant quantum computer. Companies sometimes claim CRQCs might appear before 2030 or as early as 2035, but publicly known progress does not support these claims.
For context, across all current architectures—trapped ions, superconducting qubits, and neutral atom systems—today's quantum computing platforms are nowhere near the hundreds of thousands to millions of physical qubits required (depending on error rates and error-correction schemes) to run Shor's algorithm against RSA-2048 or secp256k1.
The limiting factors are not just qubit count, but also gate fidelity, qubit connectivity, and the sustained error-correction circuit depth required to run deep quantum algorithms. While some systems now exceed 1,000 physical qubits, the raw qubit count itself is misleading: these systems lack the qubit connectivity and gate fidelity required for cryptographically relevant computation.
Recent systems are approaching the physical error rates where quantum error correction begins to work, but no one has demonstrated more than a handful of logical qubits with sustained error-correction circuit depth... let alone the thousands of high-fidelity, deep-circuit, fault-tolerant logical qubits needed to actually run Shor's algorithm. The gap between demonstrating that quantum error correction works in principle and achieving the scale needed for cryptanalysis remains vast.
In short: unless both qubit counts and fidelities improve by orders of magnitude, cryptographically relevant quantum computers remain a distant prospect.
However, corporate press releases and media coverage can easily create confusion. Here are some common misconceptions and sources of confusion, including:
Demonstrations claiming "quantum advantage," currently targeting artificially designed tasks. These tasks are chosen not for their utility but because they can run on existing hardware while appearing to show large quantum speedups—a fact often glossed over in announcements.
Companies claiming to have achieved thousands of physical qubits. But this refers to quantum annealers, not the gate-model machines needed to run Shor's algorithm against public-key cryptography.
Companies liberally using the term "logical qubit." Physical qubits are noisy. As mentioned earlier, quantum algorithms require logical qubits; Shor's algorithm requires thousands. Using quantum error correction, one logical qubit can be implemented using many physical qubits—typically hundreds to thousands, depending on the error rate. But some companies have stretched this term beyond recognition. For example, a recent announcement claimed a logical qubit implemented using a distance-2 code with only two physical qubits. This is absurd: distance-2 codes can only detect errors, not correct them. True fault-tolerant logical qubits for cryptanalysis require hundreds to thousands of physical qubits each, not two (a rough estimate of this overhead follows after this list).
More generally, many quantum computing roadmaps use the term "logical qubit" to refer to qubits that only support Clifford operations. These operations can be classically simulated efficiently and are therefore insufficient for running Shor's algorithm, which requires thousands of error-corrected T-gates (or more generally, non-Clifford gates).
Even if one of these roadmaps targets "thousands of logical qubits by year X," this does not mean the company expects to run Shor's algorithm to break classical cryptography in that same year X.
These practices severely distort public perception of how close we are to cryptographically relevant quantum computers, even among seasoned observers.
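For context on the physical-to-logical overhead mentioned above, here is the standard back-of-the-envelope estimate for surface codes (illustrative only; the constant $A$ and the exact numbers depend on the architecture and the error-correction scheme): a distance-$d$ surface code uses roughly $2d^2$ physical qubits per logical qubit, and the logical error rate falls off as

$$
p_L \approx A\left(\frac{p}{p_{\mathrm{th}}}\right)^{(d+1)/2},
\qquad
n_{\mathrm{phys}} \approx 2d^2 \ \text{physical qubits per logical qubit}.
$$

With a physical error rate $p \approx 10^{-3}$, a threshold $p_{\mathrm{th}} \approx 10^{-2}$, and a target logical error rate around $10^{-12}$ (roughly what a long run of Shor's algorithm demands), one needs $d \approx 23$, i.e., on the order of a thousand physical qubits for every logical qubit. That overhead is where the "hundreds of thousands to millions of physical qubits" figures come from.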
That said, some experts are excited about the progress. For example, Scott Aaronson recently wrote that given the "current astonishing pace of hardware development,"
I now consider it a realistic possibility that we will have a fault-tolerant quantum computer running Shor's algorithm before the next US presidential election.
But Aaronson later clarified that his statement did not mean a cryptographically relevant quantum computer: he would count even a fully fault-tolerant run of Shor's algorithm factoring 15 = 3 × 5 as an achievement—a calculation that can be done faster with pencil and paper. The bar here is running the full, fault-tolerant Shor's algorithm at small scale, not at cryptographically relevant scale: previous experiments that factored 15 on quantum computers used simplified circuits, not the full, fault-tolerant algorithm. And there's a reason these experiments consistently factor 15: arithmetic modulo 15 is especially easy, while even slightly larger numbers like 21 are much harder, so claims of quantum experiments factoring 21 often rely on additional hints or shortcuts.
Simply put, the expectation of a cryptographically relevant quantum computer capable of breaking RSA-2048 or secp256k1—the schemes that matter for practical cryptography today—within the next 5 years is not supported by publicly known progress.
Even 10 years is still ambitious. Being excited about progress is entirely consistent with a timeline well over a decade away, given how far we are from cryptographically relevant quantum computers.
What about the US government setting 2035 as the deadline for full post-quantum (PQ) migration for government systems? I see this as a reasonable timeline for completing such a massive transition. However, it is not a prediction that cryptographically relevant quantum computers will exist by then.
What Scenarios Do HNDL Attacks Apply To (And Not)?
Harvest-Now-Decrypt-Later (HNDL) attacks refer to adversaries storing encrypted traffic now and decrypting it later when cryptographically relevant quantum computers exist. Nation-state-level adversaries are certainly archiving encrypted communications from the US government en masse, to decrypt them years later if and when CRQCs do exist.
This is why encryption requires immediate transition—at least for anyone with a 10-50+ year confidentiality requirement.
But digital signatures—which all blockchains rely on—are different from encryption: there is no confidentiality to attack retroactively.
In other words, if a cryptographically relevant quantum computer arrives, signature forgery does become possible from that point forward, but past signatures are not "hiding" secrets like encrypted messages. As long as you know a digital signature was generated before the CRQC arrived, it cannot be a forgery.
This makes the transition to post-quantum digital signatures less urgent than the post-quantum transition for encryption.
Major platforms are acting accordingly: Chrome and Cloudflare have launched hybrid X25519+ML-KEM for web Transport Layer Security (TLS) encryption.
In this piece, for readability, I say "encryption," although strictly speaking, secure communication protocols like TLS use key exchange or key encapsulation mechanisms, not public-key encryption.
Here, **"hybrid" means using both ** a post-quantum secure scheme (i.e., ML-KEM) and an existing scheme ({X}25519), to get the combined security guarantees of both. This way, they can (hopefully) deter HNDL attacks via ML-KEM, while maintaining classical security via {X}25519 should ML-KEM prove insecure even to today's computers.
Apple's iMessage has also deployed this hybrid post-quantum encryption via its PQ3 protocol, as has Signal via its PQXDH and SPQR protocols.
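To illustrate the idea, here is a minimal sketch of a hybrid key combiner (this is a simplified HKDF-style construction, not the exact combiner used by TLS, PQ3, or Signal; the function and salt names are placeholders): the session key is derived from both shared secrets, so an attacker must break both X25519 and ML-KEM to recover it.

```python
import hashlib
import hmac

def hybrid_session_key(x25519_secret: bytes, mlkem_secret: bytes,
                       transcript: bytes) -> bytes:
    """Illustrative combiner: mix the classical (X25519) and post-quantum
    (ML-KEM) shared secrets into one session key, so the result stays secure
    as long as EITHER component scheme remains unbroken."""
    ikm = x25519_secret + mlkem_secret                                    # both shared secrets
    prk = hmac.new(b"hybrid-kex-salt", ikm, hashlib.sha256).digest()      # extract
    return hmac.new(prk, transcript + b"\x01", hashlib.sha256).digest()   # expand, bound to transcript
```

The key point is that the classical secret is not discarded: if ML-KEM were ever broken classically, the derived key would be no weaker than plain X25519.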
In contrast, rolling out post-quantum digital signatures to critical network infrastructure is being delayed until cryptographically relevant quantum computers are truly imminent, because current post-quantum signature schemes come with performance penalties (more on this later in the piece).
zkSNARKs—zero-knowledge succinct non-interactive arguments of knowledge, key to blockchain's long-term scalability and privacy—are in a similar situation to signatures. This is because, even for non-post-quantum-secure zkSNARKs (which use elliptic curve cryptography, like today's non-post-quantum encryption and signature schemes), their zero-knowledge property is post-quantum secure.
The zero-knowledge property ensures that no information about the secret witness is leaked in the proof—even to a quantum adversary—so there is no confidential information to "harvest" for later decryption.
Thus, zkSNARKs are not vulnerable to Harvest-Now-Decrypt-Later attacks. Just like non-post-quantum signatures generated today are secure, any zkSNARK proof generated before a cryptographically relevant quantum computer arrives is trustworthy (i.e., the statement proven is true)—even if the zkSNARK uses elliptic curve cryptography. Only after a CRQC arrives could an adversary produce convincing proofs of false statements.
What This Means for Blockchain
Most blockchains are not exposed to HNDL attacks:
Most non-privacy chains, like today's Bitcoin and Ethereum, primarily use non-post-quantum cryptography for transaction authorization—i.e., they use digital signatures, not encryption.
Similarly, these signatures are not an HNDL risk: "Harvest-Now-Decrypt-Later" attacks apply to encrypted data. For example, Bitcoin's blockchain is public; the quantum threat is signature forgery (deriving the private key to steal funds), not decrypting transaction data that is already public. This removes the immediate encryption urgency posed by HNDL attacks.
Unfortunately, even analyses from credible sources like the Federal Reserve have erroneously claimed Bitcoin is vulnerable to HNDL attacks, an error that exaggerates the urgency of transitioning to post-quantum cryptography.
That said, reduced urgency does not mean Bitcoin can wait: it faces different timeline pressures from the immense social coordination required to change the protocol.
The exceptions as of today are privacy chains, many of which encrypt or otherwise hide recipients and amounts. This confidentiality can be harvested now and de-anonymized retroactively once quantum computers can break elliptic curve cryptography.
For such privacy chains, the severity of the attack varies by blockchain design. For example, for Monero's ring signatures and key images (linkability tags per output used to prevent double-spends), the public ledger itself is sufficient to retroactively reconstruct the spending graph. But in other chains, the damage is more limited—see the discussion by Zcash encryption engineer and researcher Sean Bowe for details.
If it is important that users' transactions not be exposed by a cryptographically relevant quantum computer, privacy chains should transition to post-quantum primitives (or hybrid schemes) as soon as feasible. Alternatively, they should adopt architectures that avoid putting decryptable secrets on-chain.
Bitcoin's Particular Challenges: Governance + Dormant Coins
For Bitcoin in particular, two realities drive urgency to begin moving toward post-quantum digital signatures. Both are unrelated to quantum technology.
One concern is governance speed: Bitcoin changes slowly. Any contentious issue can trigger a disruptive hard fork if the community cannot agree on an appropriate solution.
Another concern is that Bitcoin's shift to post-quantum signatures cannot be a passive migration: owners must proactively migrate their coins. This means dormant, quantum-vulnerable coins cannot be protected. Some estimates put the number of quantum-vulnerable and likely dormant BTC at millions of coins, worth tens of billions of dollars at current prices (as of December 2025).
However, the quantum threat to Bitcoin will not be a sudden, overnight catastrophe, but rather a selective, gradual targeting process. Quantum computers will not crack all cryptography simultaneously—Shor's algorithm must target one public key at a time. Early quantum attacks will be extremely expensive and slow. So once quantum computers can crack a single Bitcoin signature key, attackers will selectively prey on high-value wallets.
Moreover, users who avoid address reuse and do not use Taproot addresses—which expose public keys directly on-chain—are largely protected even without protocol changes: their public keys are hidden behind hash functions until their coins are spent. When they finally broadcast a spending transaction, the public key becomes visible, and there will be a brief real-time race between the honest spender needing to get the transaction confirmed and a quantum-equipped attacker wanting to find the private key and spend the coins before the true owner's transaction finalizes. So the coins that are truly vulnerable are those whose public keys are already exposed: early Pay-to-PubKey (P2PK) outputs, reused addresses, and Taproot holdings.
For already-dormant vulnerable coins, there is no easy solution. Some options include:
The Bitcoin community agrees on a "flag day," after which any un-migrated coins are declared destroyed.
Dormant quantum-vulnerable coins are left to be seized by anyone with a cryptographically relevant quantum computer.
The second option creates serious legal and security problems. Using a quantum computer to take possession of coins without the private key—even with a claim of legitimate ownership or good intentions—could trigger serious issues under theft and computer fraud laws in many jurisdictions.
Furthermore, "dormancy" itself is a presumption based on inactivity. But no one really knows if these coins lack a living owner who possesses the keys. Evidence that you once owned the coins may not provide legal authorization to break the encryption to reclaim them. This legal ambiguity increases the likelihood that dormant quantum-vulnerable coins fall into the hands of malicious actors willing to ignore legal constraints.
A final Bitcoin-specific issue is its low transaction throughput. Even if a migration plan is finalized, moving all quantum-vulnerable funds to post-quantum secure addresses would take months at Bitcoin's current transaction rate.
These challenges make it crucial for Bitcoin to start planning its post-quantum transition now—not because a cryptographically relevant quantum computer might arrive before 2030, but because the governance, coordination, and technical logistics required to migrate tens of billions of dollars worth of coins will take years to sort out.
The quantum threat to Bitcoin is real, but the timeline pressure comes from Bitcoin's own constraints, not an imminent quantum computer. Other blockchains face their own quantum-vulnerable funds challenges, but Bitcoin has unique exposure: its earliest transactions used Pay-to-PubKey (P2PK) outputs, putting public keys directly on-chain, making a significant portion of BTC particularly vulnerable to a cryptographically relevant quantum computer. This technical difference—combined with Bitcoin's age, value concentration, low throughput, and governance rigidity—makes the issue particularly acute.
Note that the vulnerability I described above applies to the cryptographic security of Bitcoin's digital signatures—but not to the economic security of the Bitcoin blockchain. This economic security stems from the Proof-of-Work (PoW) consensus mechanism, which is not vulnerable to quantum computers for two reasons:
PoW relies on hashing, so it is only subject to the quadratic quantum speedup from Grover's search algorithm, not the exponential speedup from Shor's algorithm.
The practical overhead of implementing Grover's search makes it extremely unlikely that any quantum computer could achieve even a modest practical speedup on Bitcoin's Proof-of-Work mechanism.
Even if significant speedups were achieved, they would confer an advantage to large quantum miners over small ones but would not fundamentally break Bitcoin's economic security model.
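A rough calculation shows why Grover's quadratic speedup matters little here (the numbers are illustrative): if finding a valid block requires about $N$ hash evaluations classically, Grover's algorithm needs on the order of $\sqrt{N}$ quantum evaluations, but those evaluations must run largely in sequence, and Grover parallelizes poorly (splitting the search across $M$ machines only buys a $\sqrt{M}$ speedup rather than $M$):

$$
\text{classical work} \approx N \ \text{hashes},
\qquad
\text{Grover work} \approx \tfrac{\pi}{4}\sqrt{N} \ \text{sequential quantum hash evaluations},
\qquad
\text{with } M \text{ parallel searchers} \approx \tfrac{\pi}{4}\sqrt{N/M} \ \text{each}.
$$

So even at a difficulty requiring, say, $N \approx 2^{80}$ hashes per block, a quantum miner would still need roughly $2^{40}$ sequential quantum hash evaluations, each of which is vastly slower and more expensive than a classical SHA-256 evaluation on a mining ASIC. This is why no practical mining advantage is expected.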
Costs and Risks of Post-Quantum Signatures
To understand why blockchains should not rush to deploy post-quantum signatures, we need to understand the performance costs and our evolving confidence in post-quantum security.
Most post-quantum cryptography is based on one of five approaches:
- Hashing
- Codes
- Lattices
- Multivariate quadratic systems (MQ)
- Isogenies
Why five different approaches? The security of any post-quantum cryptographic primitive is based on the assumption that quantum computers cannot efficiently solve a particular mathematical problem. The more "structured" that problem is, the more efficient the cryptographic protocols we can build from it.
But this is a trade-off: additional structure also creates more surface area for attack algorithms. This creates a fundamental tension—stronger assumptions enable better performance, but at the cost of increased potential for security vulnerabilities (i.e., the assumption being proven wrong).
Generally, hash-based methods are the most conservative in terms of security, because we are most confident that quantum computers cannot attack these protocols efficiently. But they are also the worst performing. For example, the NIST-standardized hash-based signature scheme, even at its smallest parameter setting, is 7-8 KB in size. In contrast, today's elliptic curve-based digital signatures are only 64 bytes. That's roughly a 100x size difference.
Lattice schemes are the main focus of deployment today. The only encryption scheme and two of the three signature algorithms NIST has selected for standardization are based on lattices. One lattice scheme (ML-DSA, formerly called Dilithium) produces signature sizes ranging from 2.4 KB (at the 128-bit security level) to 4.6 KB (at the 256-bit security level)—making it roughly 40-70x larger than today's elliptic curve-based signatures. The other lattice scheme, Falcon, has slightly smaller signatures (666 bytes for Falcon-512, 1.3 KB for Falcon-1024) but comes with complex floating-point arithmetic, which NIST itself flagged as a particular implementation challenge. One of Falcon's creators, Thomas Pornin, called it "the most complex cryptographic algorithm I have ever implemented."
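To see what these sizes mean in a blockchain context, here is a small back-of-the-envelope comparison (the per-block signature count is a hypothetical round number; the sizes are the approximate figures quoted above):

```python
# Approximate signature sizes in bytes (figures quoted above)
SIG_BYTES = {
    "ECDSA/secp256k1": 64,
    "Falcon-512": 666,
    "ML-DSA-44 (Dilithium)": 2420,
    "SLH-DSA (hash-based, smallest)": 7856,
}

SIGS_PER_BLOCK = 3_000  # hypothetical block with 3,000 signed transactions

for name, size in SIG_BYTES.items():
    ratio = size / SIG_BYTES["ECDSA/secp256k1"]          # size blow-up vs. today's signatures
    per_block_mb = size * SIGS_PER_BLOCK / 1_000_000     # signature bytes per block, in MB
    print(f"{name:32s} {size:5d} B  ~{ratio:5.1f}x  ~{per_block_mb:.1f} MB/block")
```

At these hypothetical volumes, signatures alone grow from roughly 0.2 MB per block with ECDSA to several megabytes with lattice schemes and tens of megabytes with hash-based schemes—which is why signature size and aggregation matter so much for blockchains.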
The implementation security of lattice-based digital signatures is also more challenging than that of elliptic curve-based signature schemes: ML-DSA has more sensitive intermediate values and non-trivial rejection sampling logic requiring side-channel and fault protection. Falcon adds constant-time floating-point issues; indeed, several side-channel attacks on Falcon implementations have recovered secret keys.
These issues pose immediate risks, unlike the distant threat of cryptographically relevant quantum computers.
There are good reasons to be cautious when deploying the higher-performing approaches to post-quantum cryptography. Historically, leading candidates like Rainbow (an MQ-based signature scheme) and SIKE/SIDH (an isogeny-based encryption scheme) were broken classically, i.e., using today's computers, not quantum computers.
This happened very late in the NIST standardization process. This is healthy science at work, but it illustrates how premature standardization and deployment can backfire.
As mentioned earlier, Internet infrastructure is taking a deliberate approach to signature migration. This is especially notable given how long Internet-wide cryptographic transitions take once they begin. The shift away from the MD5 and SHA-1 hash functions—technically deprecated by web governing bodies years ago—took many years to actually implement in infrastructure and is in some cases still ongoing. This happened despite these schemes being fully broken, not just potentially vulnerable to future technology.
Unique Challenges for Blockchain vs. Internet Infrastructure
Fortunately, blockchains like Ethereum or Solana, which are actively maintained by open-source developer communities, can upgrade faster than traditional web infrastructure. On the other hand, traditional web infrastructure benefits from frequent key rotation, meaning its attack surface moves faster than early quantum machines could target—a luxury blockchains do not have, as coins and their associated keys can be exposed indefinitely.
But overall, blockchains should still follow the web's deliberate approach to signature migration. Signatures in both settings are not exposed to HNDL attacks, and the costs and risks of premature migration to immature post-quantum schemes remain significant.
There are also blockchain-specific challenges that make premature migration particularly risky and complex: for example, blockchains have unique requirements for signature schemes, particularly the ability to aggregate many signatures quickly. Today, BLS signatures are often used for their ability to enable very fast aggregation, but they are not post-quantum secure. Researchers are exploring SNARK-based post-quantum signature aggregation. This work is promising but still early.
For SNARKs, the community is currently focused on hash-based constructions as the leading post-quantum option. But a major shift is coming: I believe that in the coming months and years, lattice-based options will become attractive alternatives. These alternatives will have better performance in various aspects than hash-based SNARKs, such as shorter proofs—similar to how lattice-based signatures are shorter than hash-based signatures.
The Bigger Problem Now: Implementation Security
For the next few years, implementation vulnerabilities will be a bigger security risk than cryptographically relevant quantum computers. For SNARKs, the primary concern is bugs.
Bugs are already a challenge for digital signature and encryption schemes, and SNARKs are much more complex. Indeed, a digital signature scheme can be seen as a very simple zkSNARK for the statement "I know the private key corresponding to my public key, and I authorized this message."
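Concretely (in slightly simplified notation), a Schnorr-style signature over a group with generator $G$ is exactly a non-interactive proof of knowledge for the relation

$$
\mathcal{R} \;=\; \bigl\{\, (\mathrm{pk}, m;\ \mathrm{sk}) \;:\; \mathrm{pk} = \mathrm{sk}\cdot G \,\bigr\},
$$

with the message $m$ bound into the Fiat–Shamir challenge. A general-purpose zkSNARK proves knowledge of a witness for an arbitrary relation, with proving systems and circuits that are orders of magnitude larger, hence far more room for implementation bugs.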
For post-quantum signatures, the immediate risks also include implementation attacks, such as side-channel and fault injection attacks. These types of attacks are well-documented and can extract secret keys from deployed systems. They pose a more immediate threat than distant quantum computers.
The community will work for years to identify and fix bugs in SNARKs and to harden post-quantum signature implementations against side-channel and fault injection attacks. Because the dust has not settled on post-quantum SNARKs and signature aggregation schemes, blockchains that transition prematurely risk locking into suboptimal schemes. They may need to migrate again when better options emerge or when implementation vulnerabilities are discovered.
What Should We Do? 7 Recommendations
Given the realities I've outlined above, I'll conclude with recommendations for various stakeholders—from builders to policymakers. The guiding principle: take the quantum threat seriously, but do not act on the assumption that cryptographically relevant quantum computers will arrive before 2030. That assumption is not supported by current progress. Still, there are things we can and should do now:
We should deploy hybrid encryption immediately.
At least where long-term confidentiality matters and the cost is bearable.
Many browsers, CDNs, and messaging apps (like iMessage and Signal) have already deployed hybrid approaches. Hybrid approaches—post-quantum + classical—defend against HNDL attacks while hedging against potential weaknesses in the post-quantum scheme.
Use hash-based signatures immediately where size is bearable.
Software/firmware updates—and other such low-frequency, size-insensitive scenarios—should adopt hybrid hash-based signatures immediately. (Hybrid is to hedge against implementation errors in the new scheme, not because the hash-based security assumption is in doubt.)
This is conservative and provides a clear "lifeboat" for society in the unlikely event of an unexpectedly early arrival of cryptographically relevant quantum computers. If post-quantum signatures for software updates are not deployed beforehand, we face a bootstrapping problem after a CRQC arrives: we will be unable to securely distribute the post-quantum cryptographic fixes we need to defend against it.
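As a minimal sketch of what "hybrid" means for signatures (the verifier functions are placeholders for, e.g., an Ed25519 and an SLH-DSA implementation; this is not any particular update system's actual format), an update is accepted only if both signatures check out:

```python
from typing import Callable

# (public_key, message, signature) -> bool
Verifier = Callable[[bytes, bytes, bytes], bool]

def verify_hybrid_update(message: bytes,
                         classical_pk: bytes, classical_sig: bytes,
                         pq_pk: bytes, pq_sig: bytes,
                         verify_classical: Verifier,
                         verify_pq: Verifier) -> bool:
    """Accept a software update only if BOTH the classical and the hash-based
    post-quantum signatures verify. A bug or break in either scheme alone is
    then not enough to forge an update."""
    return (verify_classical(classical_pk, message, classical_sig)
            and verify_pq(pq_pk, message, pq_sig))
```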
Blockchains need not rush to deploy post-quantum signatures—but should start planning immediately.
Blockchain developers should follow the lead of the web PKI community in taking a deliberate approach to post-quantum signature deployment. This allows post-quantum signature schemes to continue maturing in terms of performance and our understanding of their security. It also gives developers time to re-architect systems to handle larger signatures and to develop better aggregation techniques.
For Bitcoin and other L1s: communities need to define migration paths and policies regarding dormant quantum-vulnerable funds. Passive migration is impossible, so planning is essential. And because Bitcoin faces special non-technical challenges—slow governance and a large number of high-value potentially dormant quantum-vulnerable addresses—it is especially important for the Bitcoin community to start planning now.
Meanwhile, we need to allow research on post-quantum SNARKs and aggregatable signatures to mature (likely a few more years). Again, premature migration risks locking into suboptimal schemes or requiring a second migration to address implementation bugs.
A note on Ethereum's account model: Ethereum supports two account types with different implications for post-quantum migration—Externally Owned Accounts (EOAs), the traditional account type controlled by a secp256k1 private key; and smart contract wallets with programmable authorization logic.
In a non-emergency scenario, if Ethereum adds post-quantum signature support, upgradeable smart contract wallets could switch to post-quantum verification via a contract upgrade—while EOAs would likely need to move their funds to a new post-quantum secure address (though Ethereum would likely also provide specialized migration mechanisms for EOAs).
In a quantum emergency, Ethereum researchers have proposed a hard fork plan to freeze vulnerable accounts and let users recover funds by proving knowledge of their mnemonic using post-quantum secure SNARKs. This recovery mechanism would work for both EOAs and any smart contract wallets not yet upgraded.
Practical impact for users: Well-audited, upgradeable smart contract wallets might offer a slightly smoother migration path—but the difference is minor and comes with trade-offs in terms of trust in the wallet provider and upgrade governance. More important is for the Ethereum community to continue its work on post-quantum primitives and emergency response plans.
Broader design lesson for builders: Many blockchains today tightly couple account identity to a specific cryptographic primitive—Bitcoin and Ethereum to ECDSA signatures on secp256k1, other chains to EdDSA. The challenges of post-quantum migration highlight the value of decoupling account identity from any specific signature scheme. Ethereum's move toward smart accounts and similar account abstraction efforts on other chains reflect this trend: allowing accounts to upgrade their authentication logic without abandoning their on-chain history and state. This decoupling does not make post-quantum migration trivial, but it does offer more flexibility than hard-coding accounts to a single signature scheme. (This also enables unrelated features like sponsored transactions, social recovery, and multisignature.)
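As a conceptual sketch of this decoupling (not any chain's actual account model or API; the names are illustrative), an account stores a pointer to its current verification logic rather than baking in one curve, and can rotate that logic under the authority of its current scheme:

```python
from dataclasses import dataclass
from typing import Callable

# (public_key, message, signature) -> bool; could wrap ECDSA today,
# or a lattice- or hash-based verifier after a post-quantum upgrade.
AuthVerifier = Callable[[bytes, bytes, bytes], bool]

@dataclass
class SmartAccount:
    address: bytes            # stable on-chain identity
    public_key: bytes
    verifier: AuthVerifier    # upgradeable authorization logic

    def authorize(self, message: bytes, signature: bytes) -> bool:
        return self.verifier(self.public_key, message, signature)

    def rotate_auth(self, new_public_key: bytes, new_verifier: AuthVerifier,
                    proof_of_control: bytes) -> None:
        """Switch to a new (e.g., post-quantum) scheme without changing the
        account's address or history. In a real system this call would itself
        be authorized under the CURRENT scheme (proof_of_control)."""
        assert self.authorize(b"rotate-auth", proof_of_control)
        self.public_key = new_public_key
        self.verifier = new_verifier
```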
For privacy chains, which encrypt or hide transaction details, prioritize earlier transition if performance is bearable.
User confidentiality on these chains is currently exposed to HNDL attacks, though severity varies by design. Chains where the public ledger alone enables full retroactive de-anonymization face the most urgent risk.
Consider hybrid schemes (post-quantum + classical) to hedge against the post-quantum scheme proving insecure even classically, or implement architectural changes that avoid putting decryptable secrets on-chain.
In the near term, prioritize implementation security—not quantum threat mitigation.
Especially for complex cryptographic primitives like SNARKs and post-quantum signatures, bugs and implementation attacks (side-channel attacks, fault injection) will be a much bigger security risk than cryptographically relevant quantum computers for the next few years.
Invest now in auditing, fuzzing, formal verification, and a defense-in-depth / layered security approach—don't let quantum concerns overshadow more immediate bug threats!
Fund quantum computing development.
An important national security implication of all the above is that we need continued funding and talent development in quantum computing.
A major adversary achieving cryptographically relevant quantum computing capability before the US would pose a grave national security risk to us and the rest of the world.
Maintain perspective on quantum computing announcements.
As quantum hardware matures, there will be many milestones in the coming years. Paradoxically, the frequency of these announcements itself is evidence of how far we are from cryptographically relevant quantum computers: each milestone represents one of the many bridges we must cross before arriving there, and each milestone will generate its own headlines and wave of excitement.
Treat press releases as progress reports requiring critical evaluation, not prompts for sudden action.
Of course, there could be surprising progress or innovation that accelerates the timeline, just as there could be serious scaling bottlenecks that prolong it.
I won't argue that a cryptographically relevant quantum computer within five years is absolutely impossible, just extremely unlikely. The recommendations above are robust to this uncertainty, and following them avoids more immediate, more likely risks: implementation errors, hasty deployment, and the ordinary ways crypto transitions go wrong.