Crypto is a great technology for when traditional systems reach a limit. Bitcoin emerged against a backdrop in which traditional financial markets had reached a limit of oversight. Ethereum emerged just as big tech firms were beginning to ossify and reach the limits of user trust. The Bitcoin whitepaper compares proof-of-work to the process of mining for gold – a forecast of how Bitcoin today is thought of as “digital gold” and used as everything from legal tender, to ETF basket-stuffing, to (potentially) the supply for an imminent US strategic reserve. In early 2015, Vitalik credited Gavin Wood with coining Ethereum’s “world computer” moniker, and ten years later we have decentralized exchanges, zero-knowledge proving tech, emergent social networks, horizontal scaling solutions, and competing L1s – all seeking to embody the world-computer ethos. In other words, from the beginning, these new networks had a self-awareness about the limits they were overcoming.
Newer trends in crypto are also responses to apparent limits in traditional systems. I’ve been thinking about what these limits are and the ways in which crypto networks are intervening, and I’ve compiled a list below of the ones I find most interesting (and, in some cases, most misunderstood).
Prediction Markets and News
The Limit: Reading the news today is mostly an act of decoding the truth from a mangled cipher: there’s real information being reported, of course, but it takes serious critical thought to parse through the bias. I still scan news homepages, and read physical copies of The Economist and The FT on weekends – but it’s an exercise in analytical vigilance. There is also a dwindling number of citations and links to primary sources – even if you wanted to find transcripts of congressional hearings, freely available data, links to various bills, or pretty much any point of origin for a given news story, major media organizations have stopped linking to information that is readily available online (Matt Taibbi had a great piece on this trend).
There are lots of different reasons why the news has reached this terminal limit, but the simplest explanation is that after advertising dollars dried up (a result of platforms like Facebook and Google being able to aggregate attention and monetize it more effectively), the media became more reliant on subscribers, which resulted in audience capture. In this way, the news is less valuable as a way to imbibe a gentle cascade of the 5 W’s than as a pseudo-index of “important things that are happening,” with the onus on the reader to do further investigation. Reading the news and expecting rigorous truth is sort of like investing in an ETF and expecting alpha.
Overcoming the Limit: Funnily enough, the idea that the news is now an “index” also means traditional media and prediction markets like Polymarket are converging in functionality. If you look at the most liquid politics markets on Polymarket, for example, you’ll see a mixture of predictions on European elections, Ukraine/Russia negotiations, trade tariffs, Fed decisions, and more. Prediction markets are significant not just because of the direction of the predictions themselves, but because they reveal what people are interested in predicting. This means that Polymarket might actually be more performant than traditional media at being “news-as-index” – because Polymarket is a matrix not just of what people think will happen, but of weighted indicators of what people think is important.
Prediction markets could start to think of themselves as verifiers of anything past, present, or future. This capability already exists, albeit in piecemeal and prototype forms. Last year, Small Brain Games released TMR.NEWS, a prediction market where a user’s sole objective is to predict the headline of the following day’s NYT. To verify submissions and select a winner, the application uses ChatGPT as an oracle to judge semantic similarity between predictions and the actual headline, and Reclaim’s zkTLS to bring offchain data onchain privately. Using a combination of zkTLS and attested cameras, microphones, and other forms of hardware, it might be possible to build superior media organizations – where predictions operate as a form of bounty for a particular story.
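For intuition, here’s a minimal sketch of what the LLM-as-oracle judging step might look like, assuming an OpenAI-style chat completions API; the model choice, prompt, and 0-100 scoring scale are my own illustrative assumptions, not TMR.NEWS’s actual implementation:

```python
# Sketch: use an LLM as a semantic-similarity oracle between predicted and
# actual headlines. Illustrative only -- not TMR.NEWS's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def similarity_score(predicted: str, actual: str) -> float:
    """Ask the model for a 0-100 similarity score and parse the reply."""
    prompt = (
        "Rate the semantic similarity of these two headlines from 0 to 100. "
        "Reply with only the number.\n"
        f"Predicted: {predicted}\nActual: {actual}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works; this choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return float(resp.choices[0].message.content.strip())


# The market settles to whichever prediction scores highest against the real
# headline (which zkTLS can attest was actually served by the publisher's site).
predictions = ["Fed Holds Rates Steady", "Congress Passes Budget Deal"]
actual = "Fed Leaves Rates Unchanged Amid Inflation Worries"
print(max(predictions, key=lambda p: similarity_score(p, actual)))
```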
Decentralized AI
The Limit: There are about 25 foundation models trained with over 10^25 FLOP of compute – ranging from the first at this scale, GPT-4 (released in March 2023), to the most recent, Grok-3 (released in February 2025). The convergence of these models – achieving relatively similar benchmarks while spending in the tens of millions of dollars, or in the case of Grok-3, likely in the billions (100k+ H100s are expensive) – suggests that there is a well-understood playbook for training foundation models, and that well-capitalized behemoths are the only ones who can execute on it. DeepSeek was of course a well-publicized exception – but experts have pointed out that the company likely understated its training costs by several orders of magnitude.
Cost is upstream of a much more pressing reality: the majority of these frontier models are developed by a small number of teams and are in most cases not open-source (and even the ones that claim to be open-source often don’t publish their full weights or training data). While there are signs that the reservoir of available data for training has dried up, or that scaling pre-training is slowing more generally, there are ways to overcome this with synthetic data, algorithmic improvements, and architectural/training approaches (e.g. mixture-of-experts models over a vanilla transformer-based approach where all parameters are activated, better reinforcement learning, scaling test-time compute, etc.). But all of this requires a tremendous amount of compute. The limit in this example is both technological and ideological: compute is prohibitively expensive, and the small number of teams that are able to secure enough compute could end up mediating how everyone experiences the internet (and all information) within a couple of years. This is not a future I want to live in.
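To make “prohibitively expensive” concrete, here’s a back-of-envelope sketch using the standard ~6·N·D approximation for transformer training FLOPs; the model size, token count, accelerator throughput, utilization, and hourly price below are rough assumptions rather than any lab’s actual numbers:

```python
# Back-of-envelope cost of a ~10^25 FLOP training run.
# The 6*N*D approximation and all hardware numbers are rough assumptions.
params = 1e12               # hypothetical 1T-parameter model
tokens = 1.7e12             # tokens needed to reach ~1e25 FLOP at this size
total_flop = 6 * params * tokens          # ~1e25 FLOP

peak_flops_per_gpu = 1e15   # ~1 PFLOP/s-class accelerator (order of magnitude)
utilization = 0.4           # fraction of peak throughput actually achieved
gpu_hours = total_flop / (peak_flops_per_gpu * utilization) / 3600

price_per_gpu_hour = 2.50   # assumed cloud rate in USD
print(f"{gpu_hours:,.0f} GPU-hours ≈ ${gpu_hours * price_per_gpu_hour:,.0f}")
# On the order of tens of millions of dollars in compute alone,
# before data, staff, and failed experiments.
```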
Overcoming the Limit: AI should be thought of like the internet or crypto networks: standards that are interpretable by diverse forms of hardware and that can play host to diverse applications and protocols on top. Over the past two years, a number of teams have emerged that are exploring this in compelling ways throughout all phases of the AI training cycle: new model architectures that leverage data parallelism to train models across devices (while overcoming the intensive VRAM requirements of a single GPU), verification for fully trustless decentralized training, new approaches to distributed reinforcement learning, and entire distributed training runs that illustrate that it’s possible to train multi-billion-parameter models on hardware that isn’t co-located in a datacenter optimized for high-speed interconnects between GPUs.
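As a toy illustration of the periodic-averaging idea behind low-communication distributed training (plain local SGD here, far simpler than what these teams actually ship), here’s a sketch where simulated “devices” each take several local steps before syncing; the model, data, and hyperparameters are invented for illustration:

```python
# Toy illustration of low-communication data parallelism: each "device" runs
# several local SGD steps on its own data shard, then parameters are averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])


def make_shard(n=256):
    """Generate a private linear-regression shard for one simulated device."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y


shards = [make_shard() for _ in range(4)]        # 4 simulated devices
w = np.zeros(3)                                  # shared starting point

for _ in range(20):                              # communication happens here only
    local_ws = []
    for X, y in shards:
        w_local = w.copy()
        for _ in range(10):                      # local steps, no communication
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)                # one cheap sync per round

print(np.round(w, 3))  # converges toward [2.0, -3.0, 0.5]
```

The point is only that syncing once per round, rather than once per gradient step, slashes the interconnect bandwidth a training run needs, which is what makes non-co-located hardware plausible at all.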
Distributed and decentralized training is interesting because it addresses both architectural and philosophical limits. Massive demand for training GPUs (plus the associated financial and energy costs) means there’s an incentive to pursue research into efficient training on consumer devices. We also shouldn’t take it for granted that the limited number of large, well-capitalized teams building open-source AI will continue to do so (Nous Research, for example, has said that part of the inspiration for its work was the question: “what if Meta decides not to open-source Llama 4?”). So the only way to future-proof society against a world where foundation models (and eventually AGI) are owned by a limited number of labs is to invest in teams that are pursuing fully decentralized, open-source approaches.
Stablecoins
The Limit: There are lots of different ways to frame the “limit” that stablecoins overcome. The US Congress likes stablecoins because they allow the dollar to extend its hegemony in an era when the dollar’s share of international reserves has declined from 65% to 57% over the course of eight years (and is roughly flat in absolute dollar terms) and the BRICS are exploring their own currency bloc. In other words, stablecoins are seen as a way to overcome the geopolitical risk of a slowing appetite for US treasuries by flipping demand for the dollar from governments to consumers. Stablecoins in this mode are basically a way to capitalize on the IP of the US dollar – and stablecoin legislation is basically a way to protect that IP from misuse and reputational risk. As an American, and as someone who generally likes to see financial activity move onchain, I’m not complaining.
Another way of framing the “limit” is that stablecoins unleash new forms of economic productivity because they offer payment rails at much lower take rates than those offered by card networks like Visa and Mastercard. The argument is that stablecoin settlement costs ~$0.01 and is instant, whereas merchants are charged 2-3% + $0.30 on credit card payments – so the decision to switch to stablecoins is obvious from a margin perspective. Similar logic is applied to cross-border payments, remittances, and B2B payments. It’s a compelling argument, and one that’s captured well in Stripe's recent annual letter.
Overcoming the Limit: With stablecoin legislation likely to make it through Congress this year, it might seem like the “limit” stablecoins address is a foregone conclusion at this point. But I’d argue that the limit on stablecoins is more one of imagination than a lack of structural momentum. In the case of disintermediating Visa and Mastercard, I think the framing is wrong. Credit cards are a pitch to consumers, not to merchants. “Revolve your debt, and get cashback or other rewards through our co-branded offering” is an extremely compelling pitch, and it’s why credit card issuers are able to charge merchants so much: consumers like the services offered by credit card companies, so merchants accept the fees issuers demand. And while Stripe should be lauded for championing stablecoins, it’s worth pointing out that they still charge a healthy fee of 1.5% on stablecoin transactions – less than the 2.9% they charge on normal payment rails, but far higher than the ~$0.01 per transaction that blockchains like Solana or Ethereum L2s natively charge. So I think we need to be more realistic about how much savings we should expect from traditional financial rails enabling stablecoin payments.
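To put rough numbers on that, here’s a quick sketch of the per-payment arithmetic using the fee levels cited above; exact figures vary by card type, processor, and chain, so treat these as order-of-magnitude illustrations:

```python
# Compare a merchant's cost per payment across rails, using the fees cited above.
def card_fee(amount: float) -> float:
    return 0.029 * amount + 0.30        # ~2.9% + $0.30 on card rails


def processor_stablecoin_fee(amount: float) -> float:
    return 0.015 * amount               # ~1.5% stablecoin rate via a processor


def native_onchain_fee(amount: float) -> float:
    return 0.01                         # roughly flat network fee


for amount in (10, 50, 500):
    print(f"${amount}: card ${card_fee(amount):.2f} | "
          f"processor stablecoin ${processor_stablecoin_fee(amount):.2f} | "
          f"native onchain ${native_onchain_fee(amount):.2f}")
# Most of the theoretical savings sits in the gap between the processor's
# stablecoin rate and the native network fee, not between cards and processors.
```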
One thing I’ve observed in talking to friends and scrolling through X is that everyone is building more software and AI agents – mostly aided by LLMs. Some of it is for personal use, some of it is for internal company use, and nearly all of it is probably useful to more than just n=1 or a few people. Right now, my boyfriend and I have a side project building a data scraper and visualizer for the highest protein-per-dollar meals and restaurants in NYC – something we probably wouldn’t build a website or app around, but that we could probably find ~100 people willing to pay for access to.
With this in mind, stablecoins are a great way to fund long-tail software that would otherwise go unpublished because there is too much friction in making it public (setting up traditional payment processing is much more difficult than accepting stablecoins via a wallet). Taking it a step further, some long-tail software and AI agents will also run on blockchains, making the ability to accept and direct payments a necessity. One big thing crypto unlocked was enabling people to work and get paid from anywhere by lowering the barrier to entry for monetization. Now LLMs are lowering the barrier to entry for creating something that is monetizable. Not all onchain long-tail software will need to build a network or have a token – but it will need to be funded. If we accept that stablecoins are the best payment form for the internet, then it follows that a new wave of software that wouldn’t exist but for LLMs should be monetized with them.
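As a rough illustration of how little plumbing “accepting stablecoins via a wallet” can involve, here’s a sketch that gates access to a side project by checking a per-customer deposit address’s USDC balance with web3.py; the RPC URL and customer address are placeholders, and the deposit-address pattern is just one of several ways you could structure it:

```python
# Sketch: gate access to a small product by checking whether a per-customer
# deposit address has received enough USDC. Placeholders are marked below.
from web3 import Web3

ERC20_BALANCE_OF_ABI = [{
    "inputs": [{"name": "owner", "type": "address"}],
    "name": "balanceOf",
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider("https://YOUR_RPC_URL"))   # placeholder RPC endpoint
usdc = w3.eth.contract(
    # USDC on Ethereum mainnet; swap in the token/chain you actually use.
    address=Web3.to_checksum_address("0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"),
    abi=ERC20_BALANCE_OF_ABI,
)


def has_paid(deposit_address: str, price_usd: float) -> bool:
    """True if the customer's deposit address holds at least the price in USDC."""
    balance = usdc.functions.balanceOf(Web3.to_checksum_address(deposit_address)).call()
    return balance >= int(price_usd * 10**6)            # USDC uses 6 decimals


print(has_paid("0xYourCustomerDepositAddress", 5.00))   # placeholder address
```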
Other Limits
This is just a non-exhaustive list. Other limits and crypto-native responses include:
Valuing attention: the Flashbots team has been doing interesting work on account encumbrance with TEEs, which essentially turns any web2 account into a smart contract (and enables accounts to be dynamically and conditionally controlled by third parties). With this capability, it’s possible to value and monetize web2 accounts in ways beyond advertising revenue or affiliate marketing. I think sxysun put this in a very interesting way: “market-making social capital.”
Verifying humanity: It’s well-documented at this point that as AI becomes more sophisticated, we’ll need new interventions to prove who is human and who is a bot. World – with its Orb, as well as less hardware-intensive options like World ID Passport Credentials – is one way to achieve this. Or we might see this built out at the application level on top of proving networks like Succinct.
Verifying AI: As AI proliferates, we’ll want the ability to 1) verify that models running behind APIs are running exactly as specified, and 2) submit private information to those models with assurance that no one else will have access to our inputs. A lot of the topics I wrote about two years ago on using ZK to verify AI model inference are growing more sophisticated – EZKL is one team working in this area, and their work now includes the ability to keep models and/or data inputs private.
Thank you to Jesse Walden and Tina He for feedback