
    Why Proof-of-Reserves Isn’t Enough to Trust Crypto Exchanges


    What is proof-of-reserves?

    At its core, proof-of-reserves is a public demonstration that a custodian holds the assets it claims to hold on behalf of users, typically using cryptographic methods and onchain transparency.

    If every crypto exchange can publish a proof-of-reserves (PoR) report, why can withdrawals still be delayed or halted during a crisis?

    The truth is that proof-of-reserves is not a trust guarantee. It shows whether verifiable assets exist on a platform at a single point in time, but it does not confirm that the platform is solvent, liquid or governed by controls that prevent hidden risk.

    Even when executed properly, PoR is typically a point-in-time snapshot that can miss what happened before the reporting moment and what happens after it.

    Without a credible view of liabilities, PoR cannot prove solvency, which is what users actually need during periods of withdrawal stress.

    Did you know? On Dec. 31, 2025, Binance’s CEO wrote that the platform’s user asset balances publicly verified through proof-of-reserves had reached $162.8 billion.

    What PoR proves and how it is usually done

    In practice, PoR involves two checks: assets and, ideally, liabilities.

    On the asset side, an exchange shows that it controls certain wallets, usually by publishing addresses or signing messages.
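
    As a rough illustration of the idea (not any exchange's actual procedure), proving control of an address generally means signing a verifier-supplied challenge with the address's private key so that anyone can check the signature against the published public key. The sketch below uses the Python ecdsa package; the key names and the challenge string are made up for illustration.

    ```python
    # Minimal sketch of an address-control proof via message signing.
    # Assumes the `ecdsa` PyPI package; keys and challenge are illustrative.
    from ecdsa import SigningKey, SECP256k1, BadSignatureError

    # The exchange holds the private key behind a published reserve address.
    exchange_key = SigningKey.generate(curve=SECP256k1)
    published_pubkey = exchange_key.get_verifying_key()

    # A fresh, dated challenge prevents an old signature from being replayed.
    challenge = b"PoR challenge: prove control of reserve address for this snapshot"
    signature = exchange_key.sign(challenge)

    # Anyone can verify the signature against the published public key.
    try:
        published_pubkey.verify(signature, challenge)
        print("Control of the key demonstrated for this challenge")
    except BadSignatureError:
        print("Signature does not match the published key")
    ```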

    Liabilities are trickier. Most exchanges take a snapshot of user balances and commit it to a Merkle tree, often a Merkle-sum tree. Users can then confirm that their balance is included using an inclusion proof, without everyone’s balances being made public.
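
    To make the liability side concrete, here is a minimal sketch of how a Merkle-sum inclusion check can work; the leaf encoding, hash layout and proof format are assumptions for illustration, since each exchange defines its own scheme.

    ```python
    # Illustrative Merkle-sum inclusion check; not any exchange's actual format.
    import hashlib

    def hash_leaf(user_id: str, balance: int) -> bytes:
        # Each leaf commits to one (user, balance) pair.
        return hashlib.sha256(f"{user_id}:{balance}".encode()).digest()

    def hash_node(left_hash: bytes, left_sum: int, right_hash: bytes, right_sum: int):
        # Each internal node commits to both children and to the sum of
        # balances beneath it, so the total cannot be quietly understated.
        data = left_hash + left_sum.to_bytes(16, "big") + right_hash + right_sum.to_bytes(16, "big")
        return hashlib.sha256(data).digest(), left_sum + right_sum

    def verify_inclusion(user_id, balance, proof, root_hash, root_sum) -> bool:
        # `proof` is a list of (sibling_hash, sibling_sum, sibling_is_left)
        # tuples running from the leaf up to the root.
        node_hash, node_sum = hash_leaf(user_id, balance), balance
        for sib_hash, sib_sum, sib_is_left in proof:
            if sib_is_left:
                node_hash, node_sum = hash_node(sib_hash, sib_sum, node_hash, node_sum)
            else:
                node_hash, node_sum = hash_node(node_hash, node_sum, sib_hash, sib_sum)
        return node_hash == root_hash and node_sum == root_sum
    ```

    A user who receives such a proof checks it against the published root hash and root total; if either does not match, their balance was not properly counted in the snapshot.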

    When done properly, PoR shows whether onchain assets cover customer balances at a specific moment.

    Did you know? Binance lets each user independently verify their inclusion in its PoR snapshot. Through its verification page, Binance generates a cryptographic proof based on a Merkle tree of user balances, allowing users to confirm that their account was counted without revealing anyone else’s data or balances.

    How an exchange can “pass PoR” and still be risky

    PoR can improve transparency, but it shouldn’t be relied on as the sole measure of a company’s financial health.

    First, a report on assets without full liabilities does not demonstrate solvency. Even if onchain wallets appear strong, liabilities can be incomplete or selectively defined, missing items such as loans, derivatives exposure, legal claims or offchain payables. Such a report can show that funds exist without proving the business can meet all of its obligations.

    Also, a single attestation does not reveal what the balance sheet looked like last week or what it looks like the day after the report. In theory, assets can be temporarily borrowed to improve the snapshot, then moved back out afterward.

    Next, encumbrances often do not show up. PoR typically cannot tell you whether assets are pledged as collateral, lent out or otherwise tied up, meaning they may not be available when withdrawals spike.

    Liquidity and valuation can also be misleading. Holding assets is not the same as being able to liquidate them quickly and at scale during periods of stress, especially if reserves are concentrated in thinly traded tokens. PoR does not address this issue; clearer risk and liquidity disclosures might.

    PoR isn’t the same as an audit

    A lot of the trust problem comes from a mismatch in expectations.

    Many users treat PoR like a safety certificate. In reality, many PoR engagements resemble agreed-upon procedures (AUPs). In these cases, the practitioner performs specific checks and reports what was found without providing an audit-style opinion on the company’s overall health.

    Indeed, an audit or even a review is designed to deliver an assurance conclusion within a formal framework. AUP reporting is narrower. It explains what was tested and what was observed, then leaves interpretation to the reader. Under International Standard on Related Services (ISRS) 4400, an AUP engagement is not an assurance engagement and does not express an opinion.

    Regulators have highlighted this gap. The Public Company Accounting Oversight Board has warned that PoR reports are inherently limited and should not be treated as proof that an exchange has sufficient assets to meet its liabilities, especially given the lack of consistency in how PoR work is performed and described.

    This is also why PoR drew increased scrutiny after 2022. Mazars paused work for crypto clients, citing concerns about how PoR-style reports were being presented and how the public might interpret them.

    What’s a practical trust stack, then?

    PoR can be a starting point, but real trust comes from pairing transparency with proof of solvency, strong governance and clear operational controls.

    Start with solvency. The real step up is showing assets versus a complete set of liabilities, ensuring assets are greater than or equal to liabilities. Merkle-based liability proofs, along with newer zero-knowledge approaches, aim to close that gap without exposing individual balances.
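
    Assuming a liability commitment such as the Merkle-sum root sketched above, the narrow solvency comparison itself is trivial. The function below is purely illustrative and deliberately ignores encumbrances, offchain obligations and valuation, which is exactly why this check alone is not enough.

    ```python
    # Illustrative-only solvency comparison for a PoR-style check.
    # `reserve_balances` (per published address) and `liability_root_sum`
    # are hypothetical inputs, not a real exchange's data.
    def covers_liabilities(reserve_balances: dict, liability_root_sum: int) -> bool:
        total_assets = sum(reserve_balances.values())
        return total_assets >= liability_root_sum

    print(covers_liabilities({"addr1": 60_000, "addr2": 45_000}, 100_000))  # True
    ```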

    Next, add assurance around how the exchange actually operates. A snapshot does not reveal whether the platform has disciplined controls such as key management, access permissions, change management, incident response, segregation of duties and custody workflows. This is why institutional due diligence often relies on System and Organization Controls (SOC)-style reporting and similar frameworks that measure controls over time, not just a balance at a single moment.

    Make liquidity and encumbrance visible. Solvency on paper does not guarantee that an exchange can survive a run. Users need clarity on whether reserves are unencumbered and how quickly holdings can be converted into liquid assets at scale.

    Anchor it in governance and disclosure. Credible oversight depends on clear custody frameworks, conflict management and consistent disclosures, especially for products that introduce additional obligations such as yield, margin and lending.

    PoR helps, but it can’t replace accountability

    PoR is better than nothing, but it remains a narrow, point-in-time check (even though it’s often marketed like a safety certificate).

    On its own, PoR does not prove solvency, liquidity or control quality. So, before treating a PoR badge as “safe,” consider the following:

    1. Are liabilities included, or is it assets only? Assets-only reporting cannot demonstrate solvency.

    2. What is in scope? Are margin, yield products, loans or offchain obligations excluded?

    3. Is it reporting a snapshot or ongoing? A single date can be dressed up. Consistency matters.

    4. Are reserves unencumbered? “Held” is not the same as “available during stress.”

    5. What kind of engagement is it? Many PoR reports are limited in scope and should not be read like an audit opinion.


    What Role Is Left for Decentralized GPU Networks in AI?


    Decentralized GPU networks are pitching themselves as a lower-cost layer for running AI workloads, while training the latest models remains concentrated inside hyperscale data centers.

    Frontier AI training involves building the largest and most advanced systems, a process that requires thousands of GPUs to operate in tight synchronization.

    That level of coordination makes decentralized networks impractical for top-end AI training, where internet latency and reliability cannot match the tightly coupled hardware in centralized data centers.

    Most AI workloads in production do not resemble large-scale model training, opening space for decentralized networks to handle inference and everyday tasks.

    “What we are beginning to see is that many open-source and other models are becoming compact enough and sufficiently optimized to run very efficiently on consumer GPUs,” Mitch Liu, co-founder and CEO of Theta Network, told Cointelegraph. “This is creating a shift toward open-source, more efficient models and more economical processing approaches.”

    Training frontier AI models is highly GPU-intensive and remains concentrated in hyperscale data centers. Source: Derya Unutmaz

    From frontier AI training to everyday inference

    Frontier training is concentrated among a few hyperscale operators, as running large training jobs is expensive and complex. The latest AI hardware, like Nvidia’s Vera Rubin, is designed to optimize performance inside integrated data center environments.

    “You can think of frontier AI model training like building a skyscraper,” Nökkvi Dan Ellidason, CEO of infrastructure company Ovia Systems (formerly Gaimin), told Cointelegraph. “In a centralized data center, all the workers are on the same scaffold, passing bricks by hand.”

    That level of integration leaves little room for the loose coordination and variable latency typical of distributed networks.

    “To build the same skyscraper [in a decentralized network], they have to mail each brick to one another over the open internet, which is highly inefficient,” Ellidason continued.

    AI giants continue to absorb a growing share of global GPU supply. Source: Sam Altman

    Meta trained its Llama 4 AI model using a cluster of more than 100,000 Nvidia H100 GPUs. OpenAI does not disclose the size of the GPU clusters used to train its models, but infrastructure lead Anuj Saharan said GPT-5 was launched with support from more than 200,000 GPUs, without specifying how much of that capacity was used for training versus inference or other workloads.

    Inference refers to running trained models to generate responses for users and applications. Ellidason said the AI market has reached an “inference tipping point.” While training dominated GPU demand as recently as 2024, he estimated that as much as 70% of demand is driven by inference, agents and prediction workloads in 2026.

    “This has turned compute from a research cost into a continuous, scaling utility cost,” Ellidason said. “Thus, the demand multiplier through internal loops makes decentralized computing a viable option in the hybrid compute conversation.”

    Related: Why crypto’s infrastructure hasn’t caught up with its ideals

    Where decentralized GPU networks actually fit

    Decentralized GPU networks are best suited to workloads that can be split, routed and executed independently, without requiring constant synchronization between machines.

    “Inference is the volume business, and it scales with every deployed model and agent loop,” Evgeny Ponomarev, co-founder of decentralized computing platform Fluence, told Cointelegraph. “That is where cost, elasticity and geographic spread matter more than perfect interconnects.”

    In practice, that makes decentralized networks of gaming-grade GPUs in consumer environments a better fit for production workloads that prioritize throughput and flexibility over tight coordination.
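
    The pattern is straightforward to sketch in code. In the hedged example below, each inference request is an independent job that can be handed to whichever node is available, with no gradient exchange or synchronization between machines; the node names and the run_inference call are placeholders rather than any real network's API.

    ```python
    # Toy dispatch of independent inference jobs across distributed GPU nodes.
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical worker endpoints in a decentralized GPU network.
    NODES = ["gpu-node-eu-1", "gpu-node-us-2", "gpu-node-asia-3"]

    def run_inference(node: str, prompt: str) -> str:
        # Placeholder for an RPC/HTTP call to the node's inference endpoint.
        return f"[{node}] completed: {prompt}"

    def dispatch(prompts: list) -> list:
        # Each prompt is an independent job: unlike training, no gradient
        # exchange or tight synchronization between nodes is required.
        with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
            futures = [
                pool.submit(run_inference, NODES[i % len(NODES)], p)
                for i, p in enumerate(prompts)
            ]
            return [f.result() for f in futures]

    print(dispatch(["Summarize this article", "Translate to French", "Classify sentiment"]))
    ```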

    Low hourly prices for consumer GPUs illustrate why decentralized networks target inference rather than large-scale model training. Source: Salad.com

    “Consumer GPUs, with lower VRAM and home internet connections, do not make sense for training or workloads that are highly sensitive to latency,” Bob Miles, CEO of Salad Technologies — an aggregator for idle consumer GPUs — told Cointelegraph.

    “Today, they are more suited to AI drug discovery, text-to-image/video and large-scale data processing pipelines. For any workload that is cost-sensitive, consumer GPUs excel on price performance.”

    Decentralized GPU networks are also well-suited to tasks such as collecting, cleaning and preparing data for model training. Such tasks often require broad access to the open web and can be run in parallel without tight coordination.

    This type of work is difficult to run efficiently inside hyperscale data centers without extensive proxy infrastructure, Miles said.

    When serving users around the world, a decentralized model can have a geographic advantage: requests travel shorter distances and pass through fewer network hops than traffic routed to a distant centralized data center, which reduces latency.

    “In a decentralized model, GPUs are distributed across many locations globally, often much closer to end users. As a result, the latency between the user and the GPU can be significantly lower compared to routing traffic to a centralized data center,” said Liu of Theta Network.
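
    As a toy illustration of that routing decision (with simulated node names and latencies), a client can simply measure round-trip time to each candidate node and send the request to the lowest-latency one.

    ```python
    # Toy latency-based routing; node names and RTTs are simulated.
    import random

    def measure_rtt_ms(node: str) -> float:
        # In practice this would ping the node's endpoint; here it is random.
        return random.uniform(5, 200)

    def pick_node(nodes: list) -> str:
        # Choose the node with the lowest observed latency, which in a
        # geographically distributed network is usually the nearest one.
        rtts = {node: measure_rtt_ms(node) for node in nodes}
        return min(rtts, key=rtts.get)

    print(pick_node(["gpu-sgp-01", "gpu-fra-02", "gpu-nyc-03"]))
    ```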

    Theta Network is facing a lawsuit filed in Los Angeles in December 2025 by two former employees alleging fraud and token manipulation. Liu said he could not comment on the matter because it is pending litigation. Theta has previously denied the allegations.

    Related: How AI crypto trading will make and break human roles

    A complementary layer in AI computing

    Frontier AI training will remain centralized for the foreseeable future, but AI computing is shifting toward inference, agents and production workloads that require looser coordination. Those workloads reward cost efficiency, geographic distribution and elasticity.

    “This cycle has seen the rise of many open-source models that are not at the scale of systems like ChatGPT, but are still capable enough to run on personal computers equipped with GPUs such as the RTX 4090 or 5090,” Liu’s co-founder and Theta tech chief Jieyi Long told Cointelegraph.

    With that level of hardware, users can run diffusion models, 3D reconstruction models and other meaningful workloads locally, creating an opportunity for retail users to share their GPU resources, according to Long.

    Decentralized GPU networks are not a replacement for hyperscalers, but they are becoming a complementary layer.

    As consumer hardware grows more capable and open-source models become more efficient, a widening class of AI tasks can move outside centralized data centers, allowing decentralized models to fit in the AI stack.

    Magazine: 6 weirdest devices people have used to mine Bitcoin and crypto