a16z: What the Merge Means for Ethereum?

a16z · Published 2022-07-28 · Updated 2022-07-29

Summary

Ethereum’s biggest-ever upgrade — the move to a proof-of-stake consensus mechanism — is right around the corner. But while the Merge should add security and sustainability, it doesn’t include sharding, the long-anticipated method of scaling the network.

After years of research, development and testing, Ethereum will transition from proof of work to proof of stake in the coming months. Instead of “miners” using computational energy to process transactions, “validators” will lock up, or stake, their assets in the network in return for ETH rewards. The upshot is increased security and a much smaller environmental footprint for the decentralized network.

Danny Ryan is an Ethereum Foundation (EF) researcher helping to coordinate the network upgrade, known as the Merge. It’s part of a larger constellation of upgrades, once referred to as Ethereum 2.0, aimed at making the network more secure, sustainable and scalable.

Ryan joined Future to talk about the Merge. In Part I of our conversation, below, he explains the decision to temporarily prioritize security and sustainability over scalability, how the upgrade enables liquid stakers and other emerging actors, and why Ethereum doesn’t take a day off.

In Part II, he talks about the features users will likely see in subsequent upgrades, whether on-chain voting could be used for future upgrade decisions, and why shadow forks are the way forward.


Two out of three: Security and sustainability

FUTURE: What is the Merge designed to accomplish?

DANNY RYAN: Abstractly, when I think about the things we’re trying to do to and for Ethereum at the layer-one protocol over the next handful of years, we’re trying to make it more secure, sustainable, and scalable — the three S’s — while still being decentralized (which can mean a lot of things, but multidimensional decentralization).

Layer one (L1)

A layer one is a blockchain that can process transactions without relying on another network. They include Bitcoin, Ethereum, and Solana.

The Merge accomplishes two of those things. First, it helps make Ethereum more secure. That’s an argument people will have maybe until the end of time — whether proof of stake is more secure than proof of work, or vice versa. But based on our research, our understanding of these systems, and our understanding of the types of attacks on them, the Ethereum community and researchers generally make the claim that proof of stake is more secure than proof of work.

[With regard to] sustainability: proof of work, to do its cryptoeconomic magic, burns a ton of energy. Proof of stake, to do its cryptoeconomic magic, does not. So we’re achieving something like a 99.9%, 99.95%, or 99.98% energy reduction, depending on your napkin math, but nonetheless something incredibly substantial.

[If Ethereum stayed on proof of work and] the price of ETH doubles, the new equilibrium of mining power on the Ethereum platform would double eventually. And in the proof-of-stake world, [if] the price of ETH doubles, the equilibrium of the number of nodes on the network doesn’t really change. There might be 10,000 nodes on the network. There might even be 100,000 nodes on the network. But it’s going to be 100 middle schools’ or 1,000 middle schools’ worth of energy consumption — not, like, Argentina or whatever.
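
As a rough sanity check on that napkin math, here is a minimal sketch. Both constants are assumptions drawn from common public estimates of the era, not figures from the interview: roughly 75 TWh/year for pre-Merge proof of work, and about 100 W per consumer-grade staking node.

```python
# Napkin math for the Merge's energy reduction. Both constants are
# rough public estimates (assumptions, not figures from the interview).
POW_TWH_PER_YEAR = 75.0   # assumed pre-Merge proof-of-work consumption
NODE_WATTS = 100.0        # assumed draw of one consumer-grade staking node

for node_count in (10_000, 100_000):
    # watts * hours in a year -> Wh, then / 1e12 -> TWh
    pos_twh = NODE_WATTS * node_count * 24 * 365 / 1e12
    reduction = 1 - pos_twh / POW_TWH_PER_YEAR
    print(f"{node_count:>7,} nodes: {pos_twh:.4f} TWh/yr, {reduction:.3%} reduction")
```

Under these assumptions, 10,000 nodes give a ~99.99% reduction and 100,000 nodes a ~99.88% reduction, bracketing the figures Ryan quotes.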

We don’t get [scalability] out of the gate with the Merge. We do lay the foundation.

The Ethereum white paper says, “In the future, it is likely that Ethereum will switch to a proof-of-stake model for security, reducing the issuance requirement to somewhere between zero and 0.05X per year.” You mentioned not just security but sustainability. At what point did sustainability become as big a factor as security?

In the white paper, I don’t know if that’s touched on. But in some early Ethereum.org blog posts, and just generally in the world back in 2013 and 2014, the linear relationship between asset price and energy consumed on proof-of-work networks was very much known. I would say that when the Ethereum community became less insular and [started] onboarding non-crypto-native people into interesting applications, specifically in the art and NFT world, the energy component definitely came into the limelight, because increases in the ETH price increase the total mining power. Getting attention from different communities with all sorts of different value alignments made it a more front-and-center concern. But I would say the “waste” of burning energy to power the cryptoeconomics of proof of work is something we’ve always known about; moving away from it has been a goal for quite a while.

The third S: Scalability

A lot of people have gotten ahead of themselves and looked forward to the things that the Merge is going to lay the groundwork for, such as lower fees, less congestion, and more. But at its most basic …

That’s that third S — scalability. And we don’t get that out of the gate with the Merge. We do lay the foundation, as you said.

So at this point, with just the move to proof of stake and no sharding until a later upgrade, we don’t have that third S. Where do things currently stand with scalability?

I like to be a bit tongue-in-cheek: Block times will be 12 seconds instead of an average 13 and a half seconds, but the gas limit will stay the same. So 10% scalability gain at the Merge. Take it or leave it.
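
A quick check of that tongue-in-cheek figure: with the gas limit per block unchanged, the throughput gain is just the ratio of the block times.

```python
# The Merge's only direct throughput change: fixed 12 s slots replace
# proof of work's ~13.5 s average block time, with the same gas limit
# per block, so gas throughput rises by the ratio of block times.
POW_BLOCK_TIME = 13.5   # seconds, pre-Merge average
POS_SLOT_TIME = 12.0    # seconds, fixed post-Merge slot

gain = POW_BLOCK_TIME / POS_SLOT_TIME - 1
print(f"Throughput gain at the Merge: {gain:.1%}")  # ~12.5%, i.e. Ryan's rough "10%"
```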

That’s not the kind of scalability gains that we’re looking for, really. But scalable, more-sophisticated consensus mechanisms that can come to consensus on more are actually hard to construct in proof of work. There are some attempts to do things like sharding [the planned scaling mechanism for Ethereum] and other things in proof-of-work protocols, but you end up simulating a proof-of-stake protocol inside of a proof-of-work protocol. So I would say that [proof of stake] is a requisite foundation for future scalability upgrades.

Additionally, there is a scalability path happening in parallel to the Merge through layer-two constructions [using] rollups. There are paths that are actually online, and that people are beginning to adopt more and more, that give you 10-100x the scalability of the current Ethereum platform with no changes. And future scalability upgrades to the layer-one platform would complement this and multiply it. So the nice thing is that although at layer one we’re targeting those first two S’s, security and sustainability, in parallel we’re getting scalability through layer-two constructions, which are buying us time and meeting much of the need. Over time, we can complement that with more scale at layer one. (See part 2 of our conversation for how L1 sharding can provide additional scale.)

If you’re relying on layer-two solutions (protocols that sit atop Ethereum to increase throughput) for a certain degree of scalability, what are the security considerations in that?

It’s really easy to construct insecure layer twos, first and foremost. We believe that the most general-purpose secure constructions are these rollups, optimistic and [zero-knowledge, or] ZK. And one of the crucial components of this is that you publish transaction data or some sort of state-transition data (and, in certain ZK constructions, validity proofs) on-chain, so you utilize the data availability of the chain. And that does limit the amount of scalability at the end of the day.

Layer two (L2)

L2s refer to technologies atop an L1 that assist with scalability.

Rollups

Rollups process transactions off the main network before bundling them together and sending them back to the L1 network.
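
To see why publishing rollup data on-chain both enables and caps that 10-100x figure, here is a back-of-the-envelope sketch. The constants are assumptions from public rollup write-ups of the time (16 gas per nonzero calldata byte per EIP-2028, ~12 bytes for a compressed rollup transfer, a 30M per-block gas limit), not figures from the interview; the result lands at roughly the top of the 10-100x range Ryan cites.

```python
# Why publishing rollup data on-chain still caps scalability: each
# byte of calldata costs gas, so the L1 gas limit bounds how many
# rollup transactions fit per block. All constants below are
# assumptions from public rollup write-ups, not from the interview.
GAS_LIMIT = 30_000_000          # per-block gas limit at the time
L1_TRANSFER_GAS = 21_000        # intrinsic cost of a simple L1 transfer
CALLDATA_GAS_PER_BYTE = 16      # nonzero calldata byte, post-EIP-2028
ROLLUP_TX_BYTES = 12            # assumed compressed rollup transfer size

l1_txs = GAS_LIMIT // L1_TRANSFER_GAS
rollup_txs = GAS_LIMIT // (ROLLUP_TX_BYTES * CALLDATA_GAS_PER_BYTE)
print(f"L1 transfers per block:     ~{l1_txs}")      # ~1,428
print(f"Rollup transfers per block: ~{rollup_txs}")  # ~156,250
print(f"Scalability factor:         ~{rollup_txs / l1_txs:.0f}x")
```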

Sometimes people look at that and go, “Well, let’s just not do that. We’ll essentially do a rollup, but we won’t publish the data, and we can do some side construction.” So all of a sudden, the incentive to get more scale is also an incentive to cut corners on some of these layer-two constructions. Thus, I think some of the security concerns here are that it’s very difficult to understand the tradeoffs. If you have a pure L2 that doesn’t cut corners, then you inherit the security of Ethereum. But if you have an L2 that’s like, “Well, we’re pretty much a rollup,” then not only do you not inherit the security of Ethereum, but the threat profile expands by many orders of magnitude as those corners are cut.

I think it’s very difficult for a consumer to look at L2 “A” and L2 “B” and understand that L2 A is, like, 1,000 times more secure than L2 B — especially when language is unclear, especially when it’s hard to see what’s actually going on. L2Beat is this independent third party that’s trying to just catalog this information so we can better understand the security tradeoffs here. But nonetheless, that’s certainly an issue when you have L2s that aren’t quite really what they say they are.

Another issue would be complexity. L1 has a certain risk profile in relation to the types of bugs that might be introduced, the complexity of the software, and so on. And when you make an L2, you’re taking that and adding a bunch of complexity on top. You’re adding this whole derivative system, and so there’s security risk there.

And then I would also say there’s a desire and a need to keep these L2 derivative systems upgradable. It’s hard for me to construct an L2 that can’t ever upgrade if I assume that L1 might upgrade. That’s where the need comes in. And there’s also a desire: I think many people constructing L2s want to get them out the door, but they also want to enhance the feature set over time. So there’s a desire to upgrade these systems over time. Because of that, there are also potential security risks. So what are the upgrade models? Is it upgradable by, like, three dudes who have to sign a message? Is it upgradable by a DAO? Is that safe? Is it upgradable instantly, or does it give you, like, a year of lead time? There’s a whole spectrum of design here. The theoretical perfect L2 inherits the security of Ethereum. There are a lot of different things that augment that statement, though.

We believe there’ll be easily an order of magnitude more distinct validating entities than there were mining entities, which I think is good.

MEV, liquid staking, and the evolving Ethereum ecosystem

With the move to proof of stake as well as the infrastructure and incentive changes that come from the Merge, what sort of new actors or project types do you see coming to the fore?

Certainly, in with the validators, out with the miners. So that’s a shift in actor. We believe there’ll be easily an order of magnitude more distinct validating entities than there were mining entities, which I think is good.

In parallel over the past couple of years, the MEV (miner extractable value or maximal extractable value) space has created a few different actors. This is kind of independent of the Merge, though. There are now entities that specialize in searching [and] trying to find optimal configurations of blocks. Then there are intermediaries in there that help combine searchers into valuable blocks and then sell them essentially to miners or validators. So there’s this whole extra protocol construction of different actors that are playing this MEV game, which apparently, seemingly, is very high value, high stakes. That’s kind of independent, although there are things that the L1 protocol can probably do to make that whole construction in reality safer. (To hear more about how Ethereum can address MEV at the L1 level, read the second part of our conversation.)

So there’s those actors. I would say staking derivatives are very interesting. There are many different versions of this, but essentially: When you’re staking, that has a certain risk profile — somebody is staking for you or you’re doing it yourself. And then there’s some representation of that underlying staked asset, which maybe you can trade or maybe you can bring into smart-contract world and bring into DeFi and things like that.

I know Lido is probably the most popular. There’s a handful of them, and there’s a bunch that are also up-and-coming. So there are a lot of different players in relation to that. There are DeFi entities getting involved closer to the staking world. There are DAOs governing staked derivatives, there are consortiums governing staked derivatives; all sorts of fun stuff shakes out of that world.

Right, and there was some discussion about whether Lido, which stakes a lot of ETH on the beacon chain on users’ behalf, was hitting the maximum of what was good for a decentralized network.

I wrote a piece called “The Risks of LSD” — liquid staking derivatives. Maybe I mentioned Lido just as an example. Some people assume that you can construct these things in ways that don’t have the same kind of centralization concerns you would have if a single operator accumulated certain key thresholds of stake. I make an argument in that piece that that is not the case — that you do take on substantial risk when you pass one-third, one-half, and two-thirds. And for some reason, because of the derivative nature here, we don’t acknowledge those risks quite the same way. Thus, the market seems to be demanding to exceed those thresholds.

So I make the claim that if I’m a staking-derivative DAO or controller or whatever, it’s probably in my best interest not to exceed those thresholds, because of the risk that doing so induces for my protocol and for my users. And I make the claim that it’s not actually in users’ best interest either. Even though liquidity begets liquidity, and being involved with a highly liquid staking derivative has its benefits, the risks begin to exceed those benefits. So my claim is: let’s not ignore the risks just because the benefits are so great; let’s wise up, or else something bad probably will happen and then the market will get wiser.
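
For illustration, here is a toy check of the thresholds Ryan names. The interpretation of each threshold is a simplified reading of BFT-style proof-of-stake consensus, and the stake figures are hypothetical, not from his piece.

```python
# Toy check of the consensus-weight thresholds Ryan names. In a
# BFT-style protocol, >1/3 of stake can stall finality, >1/2 can
# dominate fork choice, and >2/3 can finalize on its own -- so one
# staking-derivative operator crossing any of them is a protocol-level
# risk, not just a market-share statistic. (Simplified reading.)
from fractions import Fraction

THRESHOLDS = {
    Fraction(1, 3): "can delay or block finality",
    Fraction(1, 2): "can dominate fork choice",
    Fraction(2, 3): "can finalize checkpoints alone",
}

def risk_report(pool_stake: int, total_stake: int) -> list[str]:
    share = Fraction(pool_stake, total_stake)
    return [note for bound, note in THRESHOLDS.items() if share > bound]

# Hypothetical numbers for illustration only:
print(risk_report(pool_stake=4_500_000, total_stake=13_000_000))
```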

[Editor’s note: In June 2022, Lido holders voted down a governance proposal to explore setting limits on the amount of ETH staked through the platform.]

Some of the security gains, from my understanding, are that you’re going to get increased decentralization because it’s going to become easier to participate — not necessarily as a staker, but as a non-block-producing node. How much of the security gains are from increased user participation, and how much are attributable to other factors?

You probably get some sort of decentralization gain because proof of work and proof of stake require posting some sort of particular collateral, and it’s much easier to get the collateral for proof of stake because of the open markets to buy ETH. So it’s much easier for many participants to participate with the same edge in terms of access to that capital. Whereas in proof of work, the capital required is highly specialized machinery, you know, ASICs or GPUs.

Long story short, I think there are gains in decentralization and I think there are gains due to the type of cryptoeconomic capital — making it a bit more egalitarian, reducing the economies of scale.

But a lot of my claim lies [in] the actual way the protocol is constructed: in proof of work, pretty much all the protocol can do is reward. So if you do a good job, you end up making money. If you do a bad job, there’s opportunity cost. But if you explicitly attack, you don’t really lose anything. Whereas in proof of stake, if you do a good job, you make money. If you do a bad job — you know, you’re offline, things like that — you stand to lose some money. And if you do explicitly nefarious things, like contradicting yourself and trying to create reorgs with two different chains, you can lose tons of money. You can lose all of your money, depending on the extent of what’s detected.

Because the asset is in the protocol — the staked ETH — that asset can be destroyed. It’s kind of akin to: the protocol cannot burn somebody’s mining farm down if they tried to attack the chain, but the protocol can burn the staked ETH if they try to attack the chain. Not only do we get the rewards, but we can have punishments, so the security margin on the capital that is staked can be much higher. That’s the [explanation] for a lot of why we say it’s more secure.

Decentralization, access to the asset required, reduced economies of scale, and other stuff like that help as well.
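
A toy model of the reward/penalty/slashing asymmetry described above; the magnitudes are purely illustrative, not the actual consensus-spec values.

```python
# Toy model of the incentive asymmetry Ryan describes. The magnitudes
# are illustrative, not the actual consensus-spec values: honest work
# earns a small reward, being offline leaks a small penalty, and
# provable equivocation (signing conflicting blocks or attestations)
# destroys a large share of the stake itself.
STAKE = 32.0  # ETH staked per validator

def balance_after_epoch(balance: float, behavior: str) -> float:
    if behavior == "honest":
        return balance * 1.0001   # small reward
    if behavior == "offline":
        return balance * 0.9999   # small inactivity leak
    if behavior == "equivocate":
        return balance * 0.5      # slashed: large, protocol-enforced loss
    raise ValueError(behavior)

for behavior in ("honest", "offline", "equivocate"):
    print(behavior, round(balance_after_epoch(STAKE, behavior), 4))
```

The point of the asymmetry is that an explicit attack is no longer merely unprofitable, as in proof of work, but actively destroys the attacker’s in-protocol capital.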

There’s a lot going on with Ethereum all day, every day. There’s an expectation that it’s up. And that’s the expectation that we’re trying to keep.

Doing it live

This entire upgrade is being done without any pause to transactions. And the Ethereum.org website states: “Ethereum does not have downtime.” Why was this such an important consideration? Why not just take a day, advertise in advance, and make the swap?

For one, I don’t know how much that will reduce the complexity. At the end of the day, we still have to coordinate on something, and we still have to agree where the end is and where to start. And once you have to do that, a day is probably not sufficient time to coordinate.

If you actually wanted to do that — to stop, then everyone upgrades their nodes and then it starts again — I would say three days minimum, probably more like a week in terms of actually having success and coordinating. Maybe if you really give lead time [and] everyone knows it is going to happen, it could be 48 or 72 hours. I don’t think it would be just a day.

So then the question is: What’s lost in that day? Probably a lot. I know the DeFi bros would be quite mad. It is a functioning economy. There’s a lot going on with Ethereum all day, every day. There’s an expectation that it’s up. And that’s the expectation that we’re trying to keep.

Again, I don’t know, maybe you can reduce the complexity by around 20% if you don’t do it live, but that’s probably not worth the losses of being offline for three days — both in terms of real numbers of the transaction activity on those days but also in terms of what people expect out of Ethereum. I think we would shatter that a little bit, but I don’t know. It’s the way it will be done unless there’s a concerted miner attack beforehand, and I don’t think it adds too much complexity. There was a pretty clear path on how to do it that way, so I think it made sense.

Coordination on a decentralized network

FUTURE: You alluded to the possibility that miners will fork and continue trying to use the old chain. But for the most part, this process has gotten everybody on board. What is your role in that as an Ethereum Foundation researcher? How does such a massive move get coordinated?

DANNY RYAN: I started getting involved in proof-of-stake stuff in around 2017, and even then it felt like a foregone conclusion. That was five years ago. And the Ethereum community has been very willing to not stagnate and to do it right, and construct a protocol that doesn’t just work today but works, hopefully, for 100 years or more.

Thus, early in its ethos, when there was a hunch that proof of stake could be done securely and better than proof of work, people were very excited about that. And by the time 2016, 2017 rolls around, people are not only excited about it, but they’re anxious for it to happen. It seems like it’s kind of very deep in the Ethereum community’s ethos that this is going to happen.

There are more sensitive issues, and fewer foregone conclusions, where the EF, the research team, and the client teams outside of the EF are all trying to come up with solutions to problems and keep things moving. Sometimes the solutions are in a bit more of a gray zone: is this the right solution? Do we do it now? Do we do it later? That ends up being tough, and the EF attempts to help coordinate in those moments, help do some R&D to vet solutions, and help facilitate conversations to decide on timelines, priorities, and ordering.

But at the end of the day, on most items, the EF agenda is to help make the protocol more sustainable, secure, and scalable while being decentralized — and not to ship a particular feature over the other. So, a lot of what we are focused on when it comes to both technical work and social coordination is around facilitating good information, good research, and good dialogue so that the many participants involved in the R&D, the engineering, and the community can keep things moving and come to decisions.

In the last five years there have been a lot more voices added to the community, and after the Merge, it’s theoretically going to become more decentralized. What thoughts do you have about the future process for upgrades? Is it possible that we’ll be looking at some sort of layer-one DAO to coordinate upgrades?

As I understand it, the Ethereum community is not into on-chain voting — or any sort of plutocratic voting on upgrades — and the protocol is the one the users decide to run. Generally, there’s broad consensus. Sometimes there are schisms — for example, Ethereum vs. Ethereum Classic. But at the end of the day, it’s the community’s right and users’ right to figure out what software they want to run. Generally, we agree, because people are trying to make Ethereum better, and there’s not a lot of conflict on the core stuff there.

So I don’t expect a formal technical mechanism. I do expect the process to continue to grow and change and evolve in this kind of loose governance, where there’s researchers, there’s developers, there’s community members, there’s dapps, and things like that.

I would say that — and I think you alluded to it — there’s more and more people at the table, and it’s getting harder and harder to make decisions and ship things. I personally believe that that’s a feature. I do think that both from a reliability standpoint for applications and users, and from avoidance of capture in the long run, that it’s probably important for a lot of the Ethereum protocol to ossify. So although it is increasingly difficult to be in the maelstrom of governance and try to ship, and sometimes it feels like I’m trying to run with a weighted vest and weights on my ankles and now I’ve got weights on my wrists, I think we have some key stuff to get done over the next few years. But I think it’s going to be harder and harder to get things done. And I think that’s a good thing.

Vitalik calls it “functional escape velocity.” Let’s get Ethereum to a place where it has sufficient scale and functionality that it can be extended and utilized in an infinite multitude of ways in the next layer of the stack. Have the EVM have minimum sufficient functionality, have there be enough data availability to handle massive amounts of scale, and then applications can extend it in smart contracts. Layer twos can experiment with new VMs inside of their layer-two constructions; you can scale Ethereum and so on and so forth.

I think it’s going to be harder and harder to get things done. And I think that’s a good thing.

Shadow forks

One of the things that came out of this specific testing process was shadow forks, the process of copying real Ethereum data to a testnet to simulate a mainnet testing environment. Was that always in the plan? And how do you think that might change the R&D process for future upgrades?

We should have been doing shadow forks for the past four years. They’re great; they’re really cool. I essentially take a number of nodes that we control — call it like 10, 20, 30 — and they think a fork’s coming, so they’re on mainnet or one of these testnets and then at some fork condition, like block height, they all go, “Okay, we’re on the new network.” And they fork and they then hang out in their own reality, but they have the mainnet-size state.

And for a while you can pipe transactions from mainnet onto this forked reality to get a reasonable amount of what looks like organic user activity, which is really good. It allows us to test what ended up being highly organic processes that are hard to simulate. And that’s been great. Pari [Jayanthi] and others who work on the DevOps team at EF have been orchestrating these, and we learned so much from them. I think if you ask anyone, they’d be like, “Well, yeah, it would have been great if we were doing this three years ago, four years ago on every upgrade.”
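
A minimal sketch of what “piping transactions” onto a shadow fork could look like, assuming web3.py; both endpoint URLs are hypothetical, the real EF tooling is more involved, and this presumes the shadow-fork nodes are already running with an overridden fork configuration and that the mainnet node serves eth_getRawTransactionByHash.

```python
# A rough sketch of replaying mainnet traffic onto a shadow fork,
# assuming web3.py; both endpoint URLs are hypothetical. The shadow
# nodes are presumed already forked via an overridden fork config.
from web3 import Web3, HTTPProvider

mainnet = Web3(HTTPProvider("http://mainnet-node:8545"))     # hypothetical URL
shadow = Web3(HTTPProvider("http://shadow-fork-node:8545"))  # hypothetical URL

block = mainnet.eth.get_block("latest", full_transactions=True)
for tx in block.transactions:
    # Fetch the signed RLP bytes; requires a node that serves
    # eth_getRawTransactionByHash (Geth does).
    raw = mainnet.eth.get_raw_transaction(tx.hash)
    try:
        shadow.eth.send_raw_transaction(raw)
    except Exception as exc:
        # Expected once the two states diverge: nonce gaps,
        # changed balances, already-known transactions, etc.
        print(f"skipped {tx.hash.hex()}: {exc}")
```

As the two chains drift apart, more and more replays fail, which is why shadow forks only give a window of organic-looking activity rather than a permanent mirror.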

But I will say another thing. I’ve been saying it [since] a year ago, and now we’re in the long tail of security and testing: really pummeling this thing, making sure all the edge cases are correct, making sure that when the Merge comes, it happens — we take one shot at it and it works. And it turns out, the way the software is constructed, with separate consensus- and execution-layer clients, there’s just a lot to build in terms of testing. Shadow forks are one piece of that. Another is utilizing other simulation environments that can test these two layers together, like Kurtosis, Antithesis, and others.

There’s some other stuff we need to do, like rewiring Hive, our nightly integration-test framework, so that it can handle both of these types of clients and so that you can write tests where different complexities are happening on both sides of the aisle. All that had to happen: first the frameworks had to be developed or modified, then a lot of the tests had to be written. So the nice thing with the Merge is that we’ve really enhanced the tools in our toolbelt for testing upgrades, in such a way that the next upgrade will be much more about writing the tests rather than thinking about how to even test it and writing the frameworks to do so.

What’s after proof of stake?

This has been going on for a long time, and initially sharding was going to come first, but ecosystem developments meant you could move to proof of stake first. Were there other ecosystem developments that popped up during this process that might shift your approach to future upgrades?

First of all, there are probably a number of reasons the proof-of-stake shift was prioritized. One was to stop overpaying for security with proof of work. And the other was that scale was beginning to come through these layer-two constructions. So, maybe if you have 10-100x scale coming from that, you can focus on this other thing and finish the job and unify these two disparate systems: the beacon chain and the current mainnet.

There are some other things that have affected how we think about timelines and priorities. I mentioned earlier that the whole MEV world has thrown a wrench into some things. There are centralization and other security concerns that emerge when you start thinking about where MEV might go. And there’s been a whole lot of research over the past 12-plus months on how to mitigate some of these concerns with layer-one modifications. Depending on the analysis of threats coming from MEV world, that might prioritize certain security features and security additions to L1 over something else that maybe was expected to be the priority.

I think something that is interesting is the sharding roadmap and the current expected construction, which is called danksharding, named after Dankrad [Feist], our researcher at the EF. The whole construction is actually simplified when you assume these highly incentivized MEV actors exist. Not only have some of these external actors altered how we think about security, but they also alter how we can think about the construction of these protocols. If you assume MEV exists, if you assume these highly incentivized actors are willing to do certain things because of MEV, then all of a sudden you have this third-party participant in the consensus that maybe you can offload things to, which in many ways can be simplifying. So there’s not only bad things that come, but there’s also new types of designs that open up.

We’ve really enhanced the tools in our toolbelt to be able to test upgrades in such a way that the next upgrade will be much more about writing the tests rather than thinking about how to even test it.

Is stateless Ethereum still being actively discussed and researched?

Yes. The state — all of the accounts and contracts and balances and so on — that’s the state of Ethereum. Given where you are in the blockchain, there’s a state of reality. That thing grows over time, grows linearly. And if you increase the gas limit, it grows even faster. So this is a concern. If it grows faster than the memory and hard-drive space of consumer machines, then you have issues with actually being able to run nodes on home computers and consumer hardware, which has security and centralization implications. Also, if you talk to some of the Geth [client] team members, the fact that the state keeps growing means they have to keep optimizing things. So it’s hard.

Stateless Ethereum, and things in that research direction, are a potential solution path for this. To execute a block today, I actually need the entire state; it’s kind of a hidden input to the function of executing a block. I need the pre-state and the block, and then I get the post-state and know whether the block is valid. Whereas with stateless Ethereum, the state requisites — the accounts and other things that you need to execute that particular block — are embedded in the block, along with proofs that they are the correct state. Now executing a block and checking the validity of Ethereum requires just [having] the block, which is really good. Now we can have full nodes that don’t necessarily have the full state. It opens up a whole spectrum of ways to construct nodes: I might have a node that fully validates but doesn’t hold the state, I might have a node that just keeps the state relevant to me, or I might have very full nodes that keep all the state, and that kind of stuff.
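
A toy sketch of that stateless shape, with a hash standing in for a real Merkle/Verkle proof against the pre-state root; everything here is illustrative structure, not a real client.

```python
# Toy sketch of stateless validation: a block carries a witness (just
# the state it touches, plus a commitment that that state matches the
# pre-state root), so a node can check validity without holding full
# state. A hash stands in for a real Merkle/Verkle proof, and this toy
# only works when the witness covers all accounts -- illustration only.
import hashlib
from dataclasses import dataclass

def commit(state: dict[str, int]) -> bytes:
    """Stand-in for a state root: hash of the sorted account items."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).digest()

@dataclass
class Tx:
    sender: str
    recipient: str
    value: int

@dataclass
class Block:
    transactions: list[Tx]
    witness: dict[str, int]   # only the accounts this block touches

def execute_stateless(block: Block, pre_state_root: bytes) -> bytes:
    if commit(block.witness) != pre_state_root:
        raise ValueError("witness does not match pre-state root")
    state = dict(block.witness)          # partial state shipped with the block
    for tx in block.transactions:
        state[tx.sender] -= tx.value
        state[tx.recipient] = state.get(tx.recipient, 0) + tx.value
    return commit(state)                 # post-state root, no full state needed

genesis = {"alice": 10, "bob": 0}
block = Block([Tx("alice", "bob", 3)], witness=genesis)
print(execute_stateless(block, commit(genesis)).hex())
```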

This is actively being worked on. There is actually, I believe, currently a testnet up, along with all the other fun stuff that needs to happen to make this real. My current assessment is that the demand for sharding and L1 scale is higher than the imminent threat of state growth. So it’s very likely, if one is prioritized over the other, that scale will be prioritized.

That said, it’s hard to say. There’s “proto-danksharding,” which is kind of a stepwise way to get a bit more scale. Maybe that happens, and then stateless happens, and then full sharding happens, depending on the needs and an assessment of what’s going on and the threats involved. I think the general thought on state growth is that we must have a path and we must fix it, but [that] the imminent fires have been put out and this isn’t a thing that will cripple Ethereum in the next couple of years. But it is a thing that must be fixed.

Walk me through the upgrades that we do know for after the Merge. Will there be a cleanup upgrade? Is that separate from the Shanghai upgrade? And when does sharding get introduced?

Shanghai is likely to be the name of whatever the fork is after the Merge. Actually withdrawing the funds you’ve been staking for almost two years now [does] not get enabled at the Merge. Withdrawals were initially expected to be included, but given the complexity of the Merge, it was decided over time to really strip it down, just get the Merge done, and not add the extra functionality of withdrawals. I would very, very, very much expect that withdrawals are enabled in Shanghai — so, the first upgrade after the Merge. This has been promised to many, many people who have a lot of capital on the line, and I don’t expect any issue with that. These are generally specified, there are tests written, and that kind of thing.

There are a number of other EVM [Ethereum Virtual Machine] improvements that I think would make it into this upgrade — different mathematical operations, some different extensibility things, a bit better versioning within the EVM, and other features. It’s a bit of a pressure-release valve on EVM improvements, which have been put to the side for multiple years now to do the Merge and other upgrades. And people really want to see some sort of minor scalability upgrade here. So it could be either proto-danksharding, which lays some of the foundation for full sharding and gets a little more scale, or potentially calldata gas-price reductions, which are very easy but aren’t really a sustainable solution. So that’s what we expect, hopefully, in Shanghai: withdrawals and a bit of scale.
