Carolina Goldstein

I provide advisory services covering market risk evaluation, parameter optimization, competitive analysis, tokenomics advisory, and mechanism design for DeFi protocols.

Research is at the heart of what I do: I explore different DeFi market segments and produce both macro overviews and detailed analyses of financial topics and crypto products, particularly at the application level.

My academic journey began with a foundation in Mathematics applied to Economics, followed by a Master’s in Big Data Analysis and Engineering. My interest lies at the intersection of analytics, finance, and crypto, where I seek to push the boundaries of knowledge and make impactful contributions.


Can the next bull bring true adoption?

by Carolina Goldstein • Wednesday, December 20 2023

Crypto's bullish cycles attract industry newcomers, but sustaining interest and users beyond the hype presents a challenge. The data highlights a clear correlation: adoption metrics rise alongside peak market capitalization in each bull run. In the last cycle, market cap surged from \$300 billion in summer 2020 to nearly \$3 trillion in November 2021, only to contract to \$800 billion within a year. The Chainalysis global crypto adoption index further reinforces the parallel between adoption and market cap during that period.

As of late October, a shift in market sentiment is evident, with the global market cap doubling to approximately $1.6 trillion since its recent low point. The latest impressive 11-week streak of positive inflows to crypto funds suggests new attention, but can this cycle bring lasting adoption?

Why is widespread adoption of crypto important?

Crypto strives for global financial inclusion by overcoming traditional banking barriers and geographical limits. Eliminating intermediaries and reducing transaction costs enables swift, cost-effective cross-border transfers. Widespread adoption democratizes investment, granting retail investors access to DeFi and opportunities previously exclusive to financial institutions.

Additionally, crypto adoption drives technological innovation, expanding applications to sectors like supply chain management, healthcare, and voting systems. It elevates standards for trust and transparency through immutable record-keeping, mitigating the potential for fraud and corruption.

How far are we?

Bull markets raise awareness, a crucial first step. Institutional adoption, influenced by political and regulatory acceptance, leads the path to individual adoption. A future where cryptocurrencies seamlessly integrate into daily life requires addressing key challenges.

Security

Security concerns in DeFi pose a significant hurdle to widespread adoption. The recent attack on a widely used library within Ledger's ConnectKit was a reminder of these vulnerabilities, leading to a loss of roughly \$500k in crypto and temporary halts across projects. Despite Ledger's quick response, the fact that one of the most popular hardware wallet providers is subject to security breaches is concerning.

Encouragingly, 2023 shows a positive trend with a 50% decrease in hack volumes compared to the previous year.

Increased attention and accountability in centralized exchanges, exemplified by Binance's CZ stepping down and paying fines, indicate industry advancement. Unlike FTX's sudden collapse, which dragged the market cap to a two-year low of \$796 billion, the prompt identification and rectification of leadership mistakes offers reassurance. In a secure and regulated environment, the FTX scenario might have been effectively managed.

Regulatory clarity

Establishing clear rules in the crypto industry is vital for earning trust from major players. While the "Move fast and break things" mantra is effective initially, it's not the ideal approach to win over the masses. Previously, for institutions venturing into blockchain, it felt like undertaking construction projects in a politically unstable country: uncertainty about project shutdowns, team availability, and project usability for clients. The global trend towards clearer crypto rules, notably in Europe, Singapore, Hong Kong, and the UAE, not only averts chaos but also transforms these locations into digital innovation hubs.

Ease of use

Long-time crypto participants have already noticed significant DeFi user experience improvements. Innovations like Account Abstraction (AA) via ERC-4337, and its native integration in Layer 2 solutions such as zkSync, aim to elevate crypto's user experience to web2 levels. AA introduces key benefits, enabling smart contracts to act as primary accounts with features like two-factor authentication and social recovery, removing the need for owners to manage seed phrases, a responsibility unfamiliar in web2 and potentially intimidating. Smart contract wallets enable gasless transactions and multiple actions in one transaction through the Multicall feature. The support for multiple valid signers in smart contract wallets, crucial for organizations, highlights the potential for mainstream adoption.
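
As a rough illustration of what AA changes structurally, here is a sketch of the ERC-4337 UserOperation object that a smart contract wallet submits to a bundler in place of a normal transaction. The field names follow the v0.6 spec; the Python typing is ours, purely for exposition.

```python
from dataclasses import dataclass

@dataclass
class UserOperation:
    """ERC-4337 pseudo-transaction sent to a bundler instead of a regular
    transaction signed by an externally owned account (v0.6 field names)."""
    sender: str                    # the smart contract wallet executing the call
    nonce: int
    init_code: bytes               # wallet deployment code; empty if already deployed
    call_data: bytes               # the action(s) to execute, e.g. a Multicall batch
    call_gas_limit: int
    verification_gas_limit: int
    pre_verification_gas: int
    max_fee_per_gas: int
    max_priority_fee_per_gas: int
    paymaster_and_data: bytes      # non-empty => a paymaster sponsors gas ("gasless")
    signature: bytes               # validated by the wallet's own code: 2FA, multisig, ...
```

Because signature validation lives in the wallet contract rather than in the protocol, features like social recovery or multiple valid signers become a matter of wallet code rather than new chain rules.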

Smart contract wallets are still in an early phase, and broad support from top-tier DeFi protocols is vital for their mainstream integration. On-ramping is also becoming more streamlined, with solutions like Moonpay, Revolut, or Ramp connecting credit cards to Metamask in under 10 minutes.

Chain abstraction progress, led by Connext and similar protocols, is crucial for widespread DeFi adoption. The goal is user empowerment without navigating blockchain complexities. Users should access DeFi without delving into ETH versus L2s, L1s, or intricacies of the Cosmos ecosystem.

Institutional Involvement

Institutional adoption drives crypto acceptance, evident in the growing number of ETF filings following BlackRock's announcement. Beyond financial investments, institutional adoption extends to service providers streamlining crypto's interoperability with daily life, such as salaries and payment methods. Blockchains like Solana, with a theoretical capacity of 65,000 transactions per second, already exceed the throughput needs of traditional giants like Mastercard and Visa. Visa's recent partnership with Solana signals a notable evolution in payment systems.

Institutional-grade custody solutions are crucial for secure entry of financial institutions into crypto-related products. BlackRock's collaboration with Coinbase for the BlackRock ETF emphasizes the role of licensed digital asset custodians like Coinbase, Gemini or Anchorage in enabling financial institutions to integrate digital assets into their business operations in a safe, scalable, compliant manner.

Institutions are actively building solutions, as seen in the Canton Network announced by Goldman Sachs, BNP Paribas, Deloitte, and over 30 firms. This privacy-enabled, interoperable blockchain network targets traditional financial market participants and institutional assets.

Building in the crypto space becomes more accessible with user-friendly, no-code environments like Metis, a Layer-2 solution looking to empower smaller businesses. This shift aligns with the industry's focus on tokenization and RWAs. Leaders like BlackRock endorse this surge, foreseeing a $290 trillion unlock in new market assets, offering unique investment and liquidity opportunities.

Key takeaways

Amidst transformative shifts in the crypto landscape, the rise of account and chain abstraction is a pivotal catalyst, aligning user experience with familiar standards. Large protocols must embrace this change for industry advancement.

While security concerns persist, encouraging signs indicate progress, with standardized processes and regulatory frameworks contributing to improvement. The evolving regulatory landscape instills confidence among institutions, fostering a safer crypto environment.

Institutions are actively participating, driven by the promise of ETFs and the integration of large payment networks, facilitating everyday crypto use. The exciting potential of this cycle lies in the start of real widespread adoption, showcasing the industry's resilience and maturation.


Exploring the DeFi Tokenomics Landscape

by Carolina Goldstein and Tiago Fernandes • Tuesday, October 17 2023 • Published at Three Sigma

Introduction

Tokens play a significant role in DeFi systems, and they differ in goals, value accrual mechanisms, and general integration in their ecosystem. These digital assets serve as versatile tools, assuming roles as utility tokens for transactions and access, governance tokens for decision-making, or revenue-sharing tokens for community wealth distribution. Tokens operate across a diverse range of contexts within DeFi, from decentralized exchanges and lending platforms to the underlying infrastructure that drives it all.

In this article, we will delve into the most relevant token mechanisms driving DeFi, from liquidity mining and staking to vote escrow and revenue sharing models, unraveling how these mechanisms shape the current landscape of blockchain protocols and how they are used by different protocols.

In this research we included tokens of the following protocols: 1inch Network, Aave, Abracadabra, Alchemix, Angle, Ankr, ApolloX, Astroport, Balancer, Beethoven X, Benqi, Burrow, Camelot, Chainlink, Cream Finance, Compound, Convex Finance, Curve Finance, DeFi Kingdoms, dForce, dYdX, Ellipsis Finance, Euler Finance, Frax Finance, Gains Network, GMX, Hashflow, Hegic, HMX, Hundred Finance, IPOR, Lido, Liquity, Lyra, MakerDAO, Mars Protocol, Moneta DAO (DeFi Franc), MUX Protocol, Notional, Osmosis, Orca, PancakeSwap, Perpetual Protocol, Planet, Platypus Finance, Premia, Prisma Finance, QiDao (Mai Finance), Reflexer, Ribbon Finance, Rocket Pool, Solidly Labs, SpookySwap, StakeDAO, StakeWise, Starlay Finance, SushiSwap, Synapse, Tarot, Tectonic, Thales, Thena, Uniswap, UwU Lend, Velodrome, XDeFi, Yearn Finance, Y2K Finance, Yeti Finance.

It is worth noting that this is not an exhaustive list of all DeFi tokens but rather a representative selection, focusing on those that introduce innovations or slight variations in token mechanisms.

Our framework

To explore the various roles tokens play in DeFi, we'll take a systematic approach. After examining 50+ DeFi protocols, one prevailing trend is clear: the majority of protocols offer users a way to access rewards through their tokens. These rewards can range from tangible benefits to more abstract forms of value and might include discounts on protocol features, enhanced returns for liquidity providers, inflationary incentives, a share of protocol revenues, or the ability to vote on key decisions. The way these rewards are distributed can also differ. Some tokens are minted or transferred directly, while others may involve the burning of existing tokens or the generation of yield-bearing assets. Access to these rewards also varies: users can be rewarded by simply holding the token, soft locking, locking, or staking/delegating the tokens within a network.

These locking mechanisms can vary considerably across different protocols. So, we'll concentrate on three core aspects to give you a comprehensive view of the current tokenomics landscape: reward access, value, and distribution. It's essential to recognize that while these options provide avenues for reward participation, they should be tailored to the individual protocol's design and objectives.

Instead of adopting the unique terminology of each protocol to describe various token strategies and models, we'll use a standardized set of terms throughout this article for clarity and ease of comparison.

Following our framework, each protocol included in this article falls into one or more of the categories described in the sections below.

Access to rewards

Access to rewards through holding

A handful of platforms, including Euler Finance, MakerDAO, and, until very recently, dYdX, have rewarded users for simply holding their tokens.

dYdX, a prominent derivatives exchange within DeFi, stood out by granting holders of \$DYDX reduced trading fees, making it an appealing choice for active traders. However, beginning on September 29, 2023, dYdX initiated the transition back to standard fee structures for all traders. While \$DYDX primarily functions as a governance tool for the platform, it's worth mentioning that token holders could previously stake their tokens in the Safety Module to enhance the protocol's security. Nevertheless, this fund ceased operations as of November 28, 2022.

Euler Finance, operating within the lending sector of DeFi, entrusts \$EUL governance token holders with the authority to influence \$EUL liquidity incentives and guide the platform's direction. However, users must stake their \$EUL to participate in gauge voting, so they do not reap any direct rewards from simply holding.

The \$MKR token from MakerDAO serves a dual purpose. Firstly, it empowers \$MKR holders to actively partake in governance decisions, allowing them to shape the platform's future by voting on critical variables. Secondly, \$MKR serves as a protective measure for the protocol during periods of significant market volatility when loans become undercollateralized. In such cases, new \$MKR tokens can be minted and exchanged for \$DAI. Although MakerDAO lacks a distinct revenue mechanism, \$MKR holders indirectly benefitted from excess \$DAI generated through stability fees, as this surplus \$DAI could be used to acquire and burn \$MKR tokens, reducing the supply. With the recently launched Smart Burn Engine, \$MKR tokens are accumulated in the form of Univ2 LP tokens instead of being acquired and burned. Maker periodically uses \$DAI from the Surplus Buffer to acquire \$MKR tokens from the Univ2 \$DAI/\$MKR market. Acquired \$MKR tokens are then matched with additional \$DAI sourced from the Surplus Buffer and supplied to the same market. In return, Maker receives LP tokens, increasing on-chain liquidity for \$MKR over time.
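
A rough sketch of one Smart Burn Engine cycle as described above; the budget and price inputs are illustrative, and slippage and fees are ignored.

```python
def smart_burn_cycle(dai_to_spend: float, mkr_price_in_dai: float) -> dict:
    """Illustrative Smart Burn Engine cycle, per the description above."""
    # Step 1: use Surplus Buffer $DAI to buy $MKR on the Univ2 DAI/MKR market.
    mkr_acquired = dai_to_spend / mkr_price_in_dai
    # Step 2: match the acquired $MKR with additional $DAI from the Surplus
    # Buffer (a Univ2 position requires equal value of both assets).
    additional_dai = mkr_acquired * mkr_price_in_dai
    # Step 3: supply both sides to the same market; Maker keeps the LP tokens,
    # deepening on-chain $MKR liquidity over time instead of burning.
    return {"mkr_supplied": mkr_acquired, "dai_supplied": additional_dai}

# e.g. $1M of surplus DAI with MKR trading at 1,200 DAI
print(smart_burn_cycle(1_000_000, 1_200))
```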

Several other protocols employ a buyback and burn mechanism that indirectly rewards users who hold their tokens. However, as most of these protocols combine this with additional mechanisms, we mention them later throughout the article.

Access to rewards through staking or delegating in a network

Some protocol tokens are used for staking or delegating to support the decentralization of a network, a practice that enhances ecosystem security. Staking requires token holders to lock their assets as collateral, actively engaging in network operations, verifications, transaction validation, and the preservation of blockchain integrity. This aligns the interests of token holders with network security and reliability, offering the potential for rewards alongside the risk of losing staked tokens in cases of malicious behavior.

Among the protocols that employ this kind of staking and/or delegating are Mars Protocol, Osmosis, 1inch Network, Ankr, Chainlink and Rocket Pool.

Osmosis provides diverse staking options for \$OSMO token holders, including delegation to validators for network security. Delegators receive rewards from transaction fees based on their staked \$OSMO amount, with deductions for the chosen validator's commission. Stakers, both validators and delegators, earn 25% of newly released \$OSMO tokens for securing the network. Moreover, Osmosis offers Superfluid staking, enabling users to stake bonded LP tokens for \$OSMO pairs for a fixed period (currently 14 days). These tokens keep generating swap fees and liquidity mining incentives, while the \$OSMO tokens gain staking rewards. In January, Osmosis introduced an automated internal liquidity arbitrage mechanism, accumulating revenue. Governance discussions are ongoing regarding potential uses of these funds, including the possibility of implementing a burn mechanism that could render \$OSMO deflationary. Mars Protocol, also part of the Cosmos ecosystem, works in a similar way: token holders can stake or delegate to play an essential role in securing the Mars Hub network, governing outpost features, and managing risk parameters.

The \$1INCH token, the governance and utility token of the 1inch Network, holds its primary utility in Fusion mode. Resolvers stake and deposit \$1INCH in a "feebank" contract to enable swap transactions. Users who delegate \$1INCH for securing Fusion mode receive a share of the generated revenue. Once staked, tokens cannot be withdrawn without penalty until the designated lock period expires (the default lock period is 2 years). Additionally, \$1INCH holders possess voting rights in 1inch's DAO, enabling them to shape the platform's future.

Ankr's \$ANKR token is multifunctional, serving staking, governance, and payment roles in its ecosystem. Staking \$ANKR uniquely involves delegating to full nodes, not just validator nodes, allowing the community to actively choose reputable node providers. In return, stakers share node rewards and some slashing risk. This staking extends to over 18 blockchains. \$ANKR tokens also enable governance participation, where users vote on network proposals, shaping the network's future. Additionally, \$ANKR tokens are used for in-network payments.

Chainlink's native token, \$LINK, underpins the network's node operations: individuals can stake \$LINK to become node operators, and users can also delegate \$LINK to other node operators, participating in network operations and sharing fee earnings. \$LINK tokens are used for payment in Chainlink's decentralized oracles, supporting the network's operations. Additionally, \$LINK tokens reward node operators offering essential services, including data retrieval, format conversion, off-chain computations, and uptime guarantees.

Rocket Pool is one of the major players in liquid staking derivatives (LSD), redefining Ethereum's PoS participation without the standard 32 ETH requirement. Rocket Pool introduces \$RPL, which provides insurance for network slashing, enhancing security. "Minipools" only need 8 or 16 \$ETH as a bond, with 24 or 16 \$ETH borrowed from the staking pool. \$RPL collateral acts as added insurance, reducing slashing risks. \$RPL token holders also possess governance rights and \$RPL tokens serve as payment for protocol fees, delivering a comprehensive toolkit in the Rocket Pool ecosystem.

Access to rewards through locking

In DeFi's current landscape, token locking is a central mechanism for accessing protocol rewards, including revenue sharing, boosted APY, and inflationary emissions. There are mainly two forms: single-sided locking, where users lock the native protocol token, and LP locking, requiring liquidity provision with a token pair, typically consisting of the protocol's token and the blockchain's native token, and subsequent LP token locking.

A key aspect of this mechanism is selecting a lock time interval, which binds users until it elapses. Some platforms offer extension options for increased rewards, while others allow early unlocking at a heavy reward cost, making it a strategic decision balancing risk and reward. Extended lock periods yield more rewards, like enhanced voting rights and increased protocol revenue shares. This links to commitment and risk mitigation: longer lock periods demonstrate confidence in the protocol, which is reciprocated with greater rewards.

One widely used locking method is the vote escrow model, pioneered by Curve Finance. When users lock their governance tokens, they receive veTokens, which grant voting rights but are typically non-tradeable. Though vote escrow solutions have grown in popularity among DeFi platforms, a number of issues have arisen that limit their effectiveness. Among these is the risk of centralization, which occurs when a few large holders gain governance control, as seen in the Curve Wars.

Hence, an increasing number of platforms and protocols are enhancing the vote-escrow concept to boost participation and align incentives across the ecosystem. While veTokens are typically non-transferable, some protocols permit token use for purposes like unlocking liquidity or accessing extra yields. Many DAOs now employ vote escrow solutions to manage user involvement and rewards.

In Curve DAO, users lock their \$CRV tokens to earn voting rights, with longer locks yielding more voting power. \$veCRV is non-transferable and can only be acquired by locking \$CRV, with a maximum lock time of four years. Initially, one \$CRV locked for four years equals one \$veCRV, and the \$veCRV balance then decreases linearly as the remaining unlock time shortens.
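
The decay arithmetic is simple enough to sketch. A minimal illustration of the linear-decay rule described above, assuming a four-year maximum lock; Curve's on-chain implementation additionally checkpoints balances over time, which we omit.

```python
MAX_LOCK_SECONDS = 4 * 365 * 24 * 3600  # maximum lock: four years

def ve_balance(locked_tokens: float, seconds_until_unlock: float) -> float:
    """Linear-decay vote escrow balance: locking one token for the maximum
    period yields one veToken, decaying linearly to zero at unlock."""
    remaining = min(seconds_until_unlock, MAX_LOCK_SECONDS)
    return locked_tokens * remaining / MAX_LOCK_SECONDS

# 100 $CRV locked for four years -> 100 $veCRV; two years later -> 50 $veCRV
print(ve_balance(100, MAX_LOCK_SECONDS))      # 100.0
print(ve_balance(100, MAX_LOCK_SECONDS / 2))  # 50.0
```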

As previously mentioned, this model sparked a conflict over voting power, particularly regarding liquidity direction, which Convex led. To vote on Convex proposals, you must lock your \$CVX tokens for a minimum of 16 weeks, and these tokens become inaccessible until 16 full epochs have passed.

Curve forks like Ellipsis Finance on the BNB Chain closely follow this pattern. \$EPX holders can lock their \$EPX for 1 to 52 weeks, and the longer they lock, the more \$vlEPX they receive.

Hundred Finance, which has now ceased operations after suffering hacks, adopted a similar vote escrow model to Curve's, despite being a lending protocol. Locking 1 \$HND for 4 years generated about 1 \$mveHND, and this balance decreased over time.

Perpetual Protocol, a perpetual futures trading platform, also employs this model. Locking \$PERP into \$vePERP increases governance voting power by up to 4x.

Numerous DeFi protocols have embraced derivatives of the vote escrow mechanism to incentivize user engagement and participation. Burrow employs a similar model to Curve's ve-token, offering \$BRRR holders the opportunity to engage in the \$BRRR locking program. Starlay Finance on Polkadot and Cream Finance across multiple chains introduce a similar concept, where token holders lock \$LAY and \$CREAM, respectively, to obtain \$veLAY and \$iceCREAM tokens. Frax Finance utilizes this model, with \$veFXS tokens granting governance voting power. Similarly, QiDao and Angle implement token locking for governance influence. In the options landscape, Premia and Ribbon Finance offer \$vxPREMIA and \$veRBN tokens, respectively. StakeDAO, Yearn Finance, MUX Protocol, ApolloX, PancakeSwap, Planet, SushiSwap, Prisma Finance and DeFi Kingdoms also integrate variants of the vote escrow mechanism. These protocols vary in how they distribute rewards and the benefits they offer users, a topic we will delve into further in this article.

Additionally, Solidly Labs, Velodrome, and Thena are notable decentralized exchanges that have evolved from the foundations of Curve's vote escrow mechanism, incorporating unique tweaks in their incentive structures through the ve(3,3) mechanism.

Balancer brought an interesting variation to Curve's model: instead of locking the protocol's liquidity mining reward token directly, an LP token is locked. Rather than locking up \$BAL, the liquidity token \$BPT, received for adding liquidity to the \$BAL/\$WETH 80/20 pool, must be locked to obtain \$veBAL.

In their latest update, Alchemix also employs the LP locking variation of the vote escrow model: 80% \$ALCX / 20% \$ETH Balancer Liquidity Pool tokens are staked to mint \$veALCX.

The \$Y2K token is the cornerstone utility token in the Y2K Finance ecosystem, delivering a variety of benefits when locked to become \$vlY2K. Notably, \$vlY2K is represented as a locked \$Y2K/\$wETH 80/20 BPT.

Finally, UwU Lend presents a new lending network built around its own \$UWU token. By combining \$UWU and \$ETH and providing liquidity on SushiSwap, users get \$UWU/\$ETH LP tokens, which can be locked within the dApp for a duration of eight weeks.

In token locking, a key feature is the promise of enhanced rewards such as voting rights as tokens stay locked longer. This encourages extended commitments and greater influence on protocol decisions. What sets protocols apart are the maximum lock periods, ranging from months to four years. Managing veTokens also differs; some use a linear decay model, gradually reducing voting power, as seen in Curve, Perpetual Protocol, Cream, Angle, Frax, and others. In contrast, some maintain governance influence even after the lock ends, as exemplified by Premia.

Various protocols, including SushiSwap, Premia, Ribbon, Yearn, ApolloX, DeFi Kingdoms, and Prisma, allow for the early unlocking of their tokens, but with a punitive action that retains a portion of earned rewards or introduces a hefty penalty fee.

In summary, token locking rewards users with increased governance powers and other benefits, a topic we'll explore further in the article.

Access to rewards through soft locking

Soft locking, often referred to as staking, introduces a slightly different flavor of token locking. Unlike traditional token locks, users aren't tied to fixed predetermined periods; they can unlock at any time. To incentivize longer locks despite this, protocols often borrow strategies like vote escrow, where rewards increase with the lock's duration while still allowing immediate exit. Some protocols apply cooldown periods during unlocking or implement vesting schedules, discouraging frequent lock-unlock cycles. In some cases, unlocking fees are introduced to encourage stable and committed user participation while still providing flexibility.

Following the vote escrow approach, Benqi Finance, a lending and liquid staking protocol on Avalanche, distributes \$QI rewards through liquidity mining, which can then be staked for \$veQI. When \$QI is staked, the \$veQI balance increases linearly over time up to a maximum of 100 times the \$QI staked. When unstaking, all accrued \$veQI is lost. Yeti Finance, an over-collateralized stablecoin protocol, employs a similar system.

Beethoven X, the first official fork of Balancer V2, built on Fantom and now also available on Optimism, follows the same principle of accrued veTokens, albeit with the difference of having to lock \$BEETS/\$FTM 80/20 BPTs.

Three of the most popular derivatives exchanges, GMX, HMX, and Gains Network, offer token stakers substantial rewards to incentivize staking, however without importing veTokenomics.

Additional protocols like Astroport, Abracadabra, Tarot, and SpookySwap allow their users to stake their tokens in return for yield-bearing tokens, which accrue value and can be used to redeem the staked token plus any accrued rewards.

Camelot took a different approach: \$xGRAIL is a non-transferable escrowed governance token that can be obtained by soft locking \$GRAIL directly, but it requires a vesting period before it can be redeemed back for \$GRAIL.

Furthermore, protocols like Liquity, Thales, XDeFi, IPOR and Moneta DAO, all make use of single-sided soft locking as a gateway to protocol rewards.

Like Beethoven X, other protocols employ LP soft locking. Interestingly, many protocols leverage the LP staking mechanism to incentivize liquidity as insurance protecting the protocol against insolvency. Aave and Lyra are two protocols that support LP staking, as well as single-sided staking, to incentivize liquidity into their safety modules for protection against shortfalls. Users may deposit \$AAVE/\$ETH and \$WETH/\$LYRA LP tokens, respectively, in return for protocol rewards from the insurance pool. These tokenized positions can be redeemed at any time but have a cooldown period. On Notional, users soft lock \$NOTE/\$WETH Balancer LP tokens to receive \$sNOTE tokens. \$NOTE token holders can start an on-chain vote to access 50% of the assets stored in the \$sNOTE pool for system recapitalization in the case of a collateral deficiency.

Similarly, users on the Reflexer protocol are responsible for keeping the protocol capitalized by soft locking \$FLX/\$ETH Uniswap v2 LP tokens.

Hegic, a battle-tested options trading protocol, uses the Stake & Cover model, where staked \$HEGIC tokens cover the protocol’s net losses on selling options/strategies and earn net profits on all expired options/strategies. Like Aave and Lyra, staking is used not only to share rewards but also to add protection. The Hegic Stake & Cover (S&C) Pool participants receive 100% of net premiums earned (or losses accrued), which are distributed pro-rata among all stakers. Users can place a request to withdraw at any time and receive their funds at the end of each 30-day epoch.

Finally, a few protocols let users choose between a soft and a hard lock.

dForce, a decentralized stablecoin protocol powered by an integrated DeFi matrix, introduced a hybrid model featuring both soft and hard locking. Locking into \$veDF allows users to earn more rewards than soft locking into \$sDF.

On Platypus Finance, a stableswap on Avalanche, users can acquire \$vePTP either by staking \$PTP or by locking \$PTP. By staking, users receive 0.014 \$vePTP every hour for every \$PTP staked (linear accrual). The maximum amount of \$vePTP a staker can receive is 180 times the amount of \$PTP staked, which takes about 18 months. By locking, the total amount of \$vePTP is granted from the beginning.
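
As a quick sanity check on the figures above, here is the accrual arithmetic in a short sketch (a month is approximated as 30 days):

```python
ACCRUAL_PER_HOUR = 0.014   # $vePTP generated per hour per $PTP staked
CAP_MULTIPLE = 180         # maximum $vePTP = 180x the $PTP staked

hours_to_cap = CAP_MULTIPLE / ACCRUAL_PER_HOUR   # ~12,857 hours
months_to_cap = hours_to_cap / 24 / 30           # ~17.9, i.e. "about 18 months"

def ve_ptp(staked_ptp: float, hours_staked: float) -> float:
    """Linear $vePTP accrual with a hard cap at 180x the stake."""
    return staked_ptp * min(hours_staked * ACCRUAL_PER_HOUR, CAP_MULTIPLE)

print(round(months_to_cap, 1))   # 17.9
print(ve_ptp(1_000, 24 * 365))   # one year in: 122,640.0 vePTP (cap is 180,000)
```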

Tectonic allows soft locking \$TONIC tokens as \$xTONIC, with a 10-day cooldown for withdrawals. It's similar to Tarot, but Tectonic allows its users to maximize their rewards by locking their \$xTONIC tokens.

Rewards - incentives and benefits for stakeholders

Having explored methods for accessing DeFi rewards, let's now delve into the variety of incentives users gain for actions like holding, locking, or staking tokens. These encompass fee discounts, increased yields for liquidity providers, exclusive protocol features, revenue sharing, token emissions, and emission voting rights (gauge voting).

Discount on Protocol Fees

Various platforms offer discounts to users based on their token holdings and actions. For instance, Aave borrowers can reduce their rates by staking \$AAVE tokens, while Premia users with over 2.5 million \$vxPREMIA tokens get a 60% fee discount. On PancakeSwap, trading fees can be 5% lower when paid with \$CAKE tokens. Planet provides three tiers of yield-boosting discounts for staking \$GAMMA tokens. \$HEGIC token holders enjoy a 30% discount on hedge contracts. Staking \$APX tokens in the ApolloX DAO lowers trading fees.

Revenue Sharing

Revenue sharing can be a powerful incentive. Many protocols now distribute a portion of their revenue to users who stake or lock tokens, aligning their interests with the platform's success and rewarding contributions to network growth.

Most vote escrow protocols share revenue with stakeholders on a pro-rata basis. Protocols like Curve, Convex, Ellipsis, Platypus, PancakeSwap, DeFi Kingdoms, Planet, Prisma, MUX Protocol, Perpetual Protocol, Starlay, Cream, Frax, QiDAO, Angle, ApolloX, UwU Lend, Premia, Ribbon, StakeDAO, Yearn, Balancer, Alchemix, Y2K, dForce, Solidly, Velodrome, and Thena all distribute a share of protocol revenue; exceptions in the vote escrow group are Hundred, Burrow, and SushiSwap. Typically, about 50% of fees go to stakeholders, but governance votes frequently update these distributions, underscoring the significance of voting power. While protocol fees are usually shared pro-rata among stakeholders, a few protocols like Starlay, Solidly, Velodrome, and Thena distribute revenue based on stakeholder voting for specific gauges.
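
As an illustration of the pro-rata arithmetic, here is a minimal sketch; the 50% fee share is only the typical figure cited above, and the user balances are made up.

```python
def distribute_fees(fees: float, stakes: dict, share: float = 0.5) -> dict:
    """Pro-rata revenue sharing: a governance-set fraction of protocol fees
    is split among stakeholders in proportion to their staked/locked balance."""
    total_staked = sum(stakes.values())
    payout_pool = fees * share
    return {user: payout_pool * stake / total_staked for user, stake in stakes.items()}

print(distribute_fees(10_000, {"alice": 600, "bob": 300, "carol": 100}))
# {'alice': 3000.0, 'bob': 1500.0, 'carol': 500.0}
```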

Regarding soft locking protocols, most incorporate revenue sharing. Some, like Beethoven X, Benqi, Yeti, and Platypus, which employ vote escrow systems, consider the lock duration and token amount as factors in determining stakeholder revenue shares.

In contrast, Aave, GMX, Gains, HMX, IPOR, Astroport, Camelot, Abracadabra, Tectonic, Tarot, SpookySwap, Liquity, Moneta DAO, Reflexer, XDeFi, and Hegic distribute revenue based only on the amount of staked tokens.

Certain protocols like Lyra, Thales and Notional do not share revenue, but provide users who soft lock and secure their platforms with inflationary emissions.

Inflationary emissions

Regarding inflationary emissions, many protocols distribute their reserved community governance tokens to liquidity providers and active users as rewards for participating in the platform. While most protocols reserve token emissions for LPs, lenders, and yield farmers, some use them to reward stakeholders.

As previously mentioned, Lyra, Thales and Notional opt for inflationary emissions instead of revenue sharing. SushiSwap also canceled revenue sharing in its January 2023 tokenomics redesign.

In some cases, both revenue sharing and inflationary emissions go to stakeholders. Protocols like Solidly, Velodrome, and Thena, which implement the ve(3,3) rebasing tokenomics mechanism, follow this approach.

Additionally, Aave, Planet, MUX Protocol, PancakeSwap, and Perpetual Protocol provide emission-based rewards alongside revenue sharing.

Protocols such as GMX and HMX also reward stakers with escrowed GMX and HMX. For these tokens to become actual GMX or HMX, emissions must be vested for one year.

Gauge voting

Vote escrow gave rise to gauge voting, where farming smart contracts accept deposits and reward depositors with emission tokens. Gauge voting empowers stakeholders to influence emissions distribution, guiding the allocation of newly minted tokens in the ecosystem. This control over emissions plays a pivotal role in shaping protocol growth and direction.
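
To make the mechanics concrete, here is a minimal sketch of how emissions might be split across gauges, assuming a purely proportional allocation; the pool names and amounts are illustrative, and live systems layer on vote cooldowns, caps, and whitelists.

```python
def allocate_emissions(weekly_emissions: float, gauge_votes: dict) -> dict:
    """Gauge voting in a nutshell: newly minted tokens are split across
    farming gauges in proportion to the veToken votes each gauge received."""
    total_votes = sum(gauge_votes.values())
    return {gauge: weekly_emissions * votes / total_votes
            for gauge, votes in gauge_votes.items()}

votes = {"3pool": 4_000_000, "stETH/ETH": 3_000_000, "FRAX/USDC": 1_000_000}
print(allocate_emissions(1_000_000, votes))
# {'3pool': 500000.0, 'stETH/ETH': 375000.0, 'FRAX/USDC': 125000.0}
```

Because emissions follow votes one-for-one in this model, accumulating veTokens translates directly into control over where liquidity incentives flow, which is what fueled the Curve Wars mentioned earlier.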

Many protocols that adopted vote escrow enable gauge voting, including Curve, Convex, Ellipsis, Platypus, Hundred, Starlay, Prisma, Frax, Angle, Premia, Ribbon, StakeDAO, Yearn, Balancer, Beethoven X, Alchemix, Y2K, Solidly, Velodrome, and Thena.

Contrarily, some protocols like Perpetual Protocol, Burrow, MUX Protocol, and QiDao do not utilize gauge voting.

Additionally, Euler Finance lets \$EUL holders determine \$EUL liquidity incentives without requiring token locking a priori, although they need to soft lock in the gauges to exercise their power.

Added benefits

Protocols in DeFi often go beyond direct revenue sharing or token emissions, providing users with additional rewards and benefits.

Similar to gauge voting, many protocols following veTokenomics boost gauge emissions for users who also stake liquidity in a gauge. Notable examples include Curve Finance, Ellipsis, Platypus, Hundred, Prisma, Frax, Angle, Ribbon, StakeDAO, Yearn, Balancer, Solidly, and Starlay.

Some protocols offer increased yields without gauge voting. Burrow boosts borrow and supply yields, PancakeSwap enhances yield farming on LP tokens, Lyra increases vault rewards for LPs, Thales raises emissions for active participants, Planet boosts LP rewards, and ApolloX improves trading rewards.

Certain protocols take a personalized approach to benefits. For instance, Camelot offers a plug-in system where stakeholders choose their benefits from options like shared revenue, boosted yield farming emissions, or access to the Camelot Launchpad.

DeFi Kingdoms provides in-game power-ups as a unique benefit, while Osmosis offers superfluid staking, a method where LP tokens can be staked in the network to earn rewards both for securing the ecosystem and for providing liquidity.

Governance token

In some DeFi protocols, tokens may lack direct utility like revenue sharing but hold value through governance participation. Notable examples are \$COMP and \$UNI, where the primary value lies in governance. These tokens empower users to influence the protocol's direction, a valuable role in shaping a major DeFi platform. Protocol type seems to play an interesting role: DEX governance tokens are often valued more highly, even with fewer value accrual mechanisms, than counterparts in other categories, which make up a smaller portion of DeFi's TVL. The potential for protocol success, token value appreciation, or even the promise of future utility can be motivation enough to hold such governance tokens.

Consider \$LDO, which serves as a governance token for Lido. \$LDO holders can actively engage in decision-making by voting on key protocol parameters, thereby administering the large Lido DAO treasury. Similarly, Compound's \$COMP token is well-known in the lending industry, with \$COMP holders able to vote on governance proposals or delegate their tokens to trusted representatives. Uniswap's \$UNI token, which serves as a governance token, has a market cap of more than \$3 billion. \$UNI holders have the ability to vote, influence governance choices, and administer the \$UNI community treasury, as well as determine protocol-wide fees.

Other protocols that did not create specific reward mechanisms around their token include Orca, a top decentralized exchange on the Solana network. Similarly, Synapse, whose \$SYN token powers the second-largest cross-chain liquidity network, effectively integrates 18 different blockchain ecosystems. Hashflow's \$HFT token allows participation in the gamified DAO and governance platform of the multi-chain decentralized exchange. Finally, StakeWise's \$SWISE token, which is primarily focused on decentralized governance, plays an important role within the Ethereum ecosystem, as StakeWise is one of the top six liquid staking platforms.

Rewards distribution

Previously, we categorized protocol rewards into five main categories: discounts, additional benefits, voting rights, inflationary emissions, and protocol revenue sharing. While the first three reward types come inherently tied to their distribution methods, inflationary emissions and revenue sharing can take on various forms.

Inflationary emission rewards primarily take the shape of minted governance tokens, such as \$AAVE, \$LYRA, and \$SUSHI, among others. However, some protocols offer inflationary emissions in the form of yield-bearing tokens, where the number of original protocol tokens received upon redemption exceeds the amount initially deposited. These include Astroport, Abracadabra, Tarot, and SpookySwap. The appreciation in value of these yield-bearing tokens can stem not only from inflation but also from protocol revenue. Other protocols distribute revenue in its original form, such as fees paid in \$ETH. Alternatively, many protocols employ a buyback mechanism, acquiring their own token from external markets, supporting its value, and subsequently redistributing it to stakeholders.

Many protocols opt for the buyback and redistribute mechanism, such as Curve (\$CRV), Convex (\$cvxCRV), Perpetual Protocol (\$USDC), Cream (\$ycrvlB), Frax (\$FXS), QiDAO (\$QI), Angle (\$sanUSDC), Premia (\$USDC), Ribbon (\$ETH), StakeDAO (\$FRAX3CRV), Yearn (\$YFI), Balancer (\$bb-a-USD), Beethoven X (\$BEETS), Gains (\$DAI), HMX (\$USDC), IPOR (\$IPOR), Abracadabra (\$MIM), DeFi Kingdoms (\$JEWEL), PancakeSwap (\$CAKE), and Planet (\$GAMMA).

Other protocols reward stakeholders directly in the accrued tokens such as GMX, Ellipsis, Platypus, Starlay, Solidly, Velodrome, Thena, and Liquity.

Furthermore, some protocols implement token burns. Instead of redistributing the protocol token that was bought back, part of it is burned in order to reduce the circulating supply, increasing scarcity and, hopefully, price. By just holding the token, users receive protocol revenue indirectly, since the protocol uses accrued rewards to remove those tokens from circulation.
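
A minimal sketch of the buyback-and-burn flow, with made-up numbers and slippage ignored:

```python
def buyback_and_burn(revenue: float, token_price: float,
                     circulating_supply: float, burn_share: float = 1.0) -> float:
    """Illustrative buyback-and-burn: protocol revenue buys tokens on the
    open market and removes (part of) them from circulation, so holders
    benefit via supply reduction rather than a direct payout."""
    tokens_bought = revenue / token_price       # slippage and fees ignored
    tokens_burned = tokens_bought * burn_share  # the rest could be redistributed
    return circulating_supply - tokens_burned

# $500k of revenue, token at $2, 100M circulating -> 250k tokens burned (0.25%)
print(buyback_and_burn(500_000, 2.0, 100_000_000))  # 99750000.0
```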

A few protocols that conduct token burns are Aave, Gains, Camelot, Starlay, PancakeSwap, UwU Lend, Planet, MakerDAO, Osmosis, SushiSwap, Reflexer, Frax, and Thales.

Final thoughts

While the protocol category distribution covered in this article may not provide a fully representative picture of the entire DeFi landscape, several key insights can be drawn. Specifically, we have mentioned 14 lending, 20 decentralized exchange (DEX), 5 derivatives, 7 options, 5 liquid staking derivatives (LSD), 10 Collateralized Debt Position (CDP), and 9 additional protocols. Naturally, we have also analyzed protocols with similar characteristics but refrained from explicitly mentioning them.

Decentralized exchanges tend to lean towards locking, particularly in the context of the vote escrow model, while lending and CDP platforms exhibit a preference for soft locking. Some CDP protocols still employ hard locks, though. The rationale behind this disparity can possibly be attributed to the greater demand for liquidity in DEXes compared to other protocols. Lending and CDP protocols typically strike a balance between liquidity supply and demand. When the demand for loans is high, there is often an adequate supply to meet it, as interest rates are adjusted accordingly. In contrast, DEX liquidity providers primarily earn trading fees, and competing with established DEXes can be challenging. Consequently, DEXes frequently resort to incentivizing through inflationary emissions, resorting to locking mechanisms to then manage the token supply. Soft locking is the more common approach in DeFi protocols in general, with notable exceptions in the top DEXes. This tendency is influenced not only by the protocol category but also by their age and reputation. Many leading DEXes today were among the first to launch. The decision to lock governance tokens in well-established protocols differs significantly from locking tokens in newer, often experimental, protocols.

Today, the majority of protocols utilizing locking mechanisms opt to reward users with shared revenue rather than relying on inflationary token emissions. This shift represents a significant improvement over the past few years, enabling protocols to consistently incentivize users to act in ways that benefit the community as a whole. It's the clearest method of aligning incentives, although its long-term viability remains uncertain due to regulatory considerations.

When it comes to distributing revenue, a common method employed, whether systematically or sporadically, is token burning. While a lot of aspects around these tokenomics mechanisms are intuitive, others are closely tied to time-to-market considerations and regulatory constraints. From an economic perspective, it seems more logical to buy back and redistribute tokens as deemed suitable, such as to those token holders who have a more significant role in the protocol. However, from a regulatory standpoint, buying back and burning tokens stands as the simplest way to return protocol revenue to token holders without it resembling a dividend distribution and potentially leading to token categorization as securities. Although this approach has been effective for some time, the future of this mechanism remains uncertain. Additionally, some protocols are highly influenced by the tokenomics endorsed at the time of their market entry or token updates. The emphasis on real yield and revenue sharing was a dominant narrative in DeFi, influenced not only by the protocols but also by broader market conditions. It's increasingly challenging to envision a token launching now with a four-year lock period gaining significant traction.

The vote escrow token model has evolved into the most comprehensive approach, encompassing not just token locking but also incorporating voting rights, incentive management, and revenue sharing.

Still, prominent protocols frequently grant voting powers through tokens without clearly defined mechanisms for value accrual. Although governance powers can hold significant value, this approach is not feasible for smaller or recently launched protocols. Even when these protocols rapidly gain adoption and contribute to the DeFi community, enduring the test of time remains the greatest challenge.

Although we have witnessed creative enhancements to tokens in this context, it is evident that the core concept revolves around various forms of token locks offering two or three types of rewards. Let's up our game and explore the potential for new token economies!


Volatility Series Part II - Applying GARCH Models to Crypto Assets

by Carolina Goldstein and Joana Gomes • Wednesday, April 28 2023 • Published at Three Sigma

Introduction

In the first part of our article, we explored the nature of volatility and its importance in the context of cryptocurrency markets. We also emphasized the usefulness of GARCH models for estimating volatility in financial markets. In this second part, we examine the applicability of these models to the cryptocurrency market. Our aim is to conduct a comprehensive analysis of the market's volatility dynamics using advanced modeling techniques that consider a range of specifications, parameters, and lags. Through the estimation of multiple models, we provide an in-depth examination of our results. This article ultimately seeks to advance the understanding of cryptocurrency volatility and provide valuable recommendations and avenues for future research, benefiting investors and researchers.

Methodology

Data

In this study, we utilized daily prices of nine different crypto assets over the entire time span available. The asset selection included Bitcoin, Ethereum, Uniswap, Lido, Curve, Compound, Euler, Aave, and GMX. In order to test whether perceived asset volatility might affect model choice, we included both less volatile and more volatile assets in our selection of assets.

It is useful to take into account price returns while analyzing price trajectories. Returns can be calculated either as the percent difference between the price at time t and t-1 or as the difference in logs. In comparison to percent difference, using log returns has a variety of advantages, with the symmetry of logarithmic returns being one of the most important. Unlike conventional returns, which are asymmetric, logarithmic returns of equal magnitude and opposite sign cancel each other out. This attribute is especially valuable in financial modeling since it ensures that the projected return on an asset over a certain period is unaffected by the direction of price movements. A further advantage of log returns is that they can facilitate more accurate modeling and simplify calculations. The original price, which may change over time, serves as the denominator for percent returns, making it harder to compare returns over time; the normalization of price data with log returns makes comparing returns over various time periods easier.

The log returns were computed using the following formula: $r_t = \log\left(\frac{P_t}{P_{t-1}}\right)$, where $P_t$ is the asset's price at time $t$.
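
In code, the computation is a one-liner; a minimal sketch with placeholder prices:

```python
import numpy as np
import pandas as pd

# Placeholder daily close prices; the study uses each of the nine assets'
# full daily price history instead.
prices = pd.Series([100.0, 102.0, 99.5, 101.3, 104.0])

log_returns = np.log(prices / prices.shift(1)).dropna()
print(log_returns)

# The symmetry property: a move up and the same move back down cancel out.
print(np.log(110 / 100) + np.log(100 / 110))  # ~0.0 (up to floating point)
```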

Modeling

We first chose a normal distribution for the error term in the studied GARCH models, as this is a common practice in the financial industry. However, we also tested for a skewed Student's t-distribution, which is known to better capture fat tails and skewness. The skewed Student's t-distribution is characterized by an additional parameter, the skewness parameter, which indicates the direction and degree of skewness in the distribution. By testing both normal and skewed Student's t-distributions, we can compare the fit of each distribution and determine which one provides a better representation of the data. This approach allows us to assess the sensitivity of our results to the distributional assumptions made in the modeling process.

We used both GARCH and EGARCH models to estimate volatility, because GARCH models assume positive and negative news have a symmetric impact on volatility, while EGARCH allows for asymmetry. In stock markets, the impact has been observed to be asymmetric, with negative news tending to affect volatility more than positive news. However, as we acknowledge in the first part of this article, it is not clear whether this asymmetric response is observable in the cryptocurrency market. Therefore, we compared the results from both models to assess whether the inclusion of asymmetry improves the fit to the data in this particular context.

In this study, we tested three different mean model specifications within the GARCH framework: constant, zero, and autoregressive. A constant mean model implies that the time series has a fixed average level that does not vary over time. This could be appropriate if the series is expected to exhibit this stable, predictable behavior over time. A zero mean model assumes that the time series is centered around zero and is suitable if there is no inherent trend or level in the data. An autoregressive mean model incorporates the past values of the time series to model its mean and can capture the persistence and dynamics of the series. Selmi and Mensi (2017) used a zero mean GARCH model to analyze Bitcoin returns, as they believed there was no inherent trend or level in the data. This approach allowed the authors to focus on modeling the volatility of the returns, rather than the mean. Additionally, the use of a zero mean GARCH model is consistent with the efficient market hypothesis, which suggests that asset prices follow a random walk with no predictable trends or patterns, and that any deviations from a zero mean would be due to temporary shocks or noise in the market. However, it is important to note that the efficient market hypothesis has been subject to criticism, and some studies suggest that there may be some predictability in asset prices, including in cryptocurrency markets. Therefore, while a zero mean model may be appropriate in some cases, an autoregressive mean model may be more suitable in others. For example, Liew and Baharumshah (2018) found that an ARMA-GARCH model provided a better fit for Bitcoin returns compared to a GARCH model with a constant or zero mean.

We produced our estimates using combinations of several mean specifications, error distributions, and volatility models, as well as varying lags of historical returns, in order to present a full study of the volatility dynamics in the cryptocurrency market.

We used the maximum likelihood estimation (MLE) approach, a statistical method for estimating the parameters of a probability distribution by maximizing the likelihood function, to estimate the parameters of the volatility models. The maximum likelihood estimator determines the parameter values that, under the assumed distribution, make the observed log returns most likely.

Model selection

To rank the models and ultimately determine which performed better for each selected asset, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). AIC and BIC measure model performance while accounting for model complexity by combining a term that reflects how well the model fits the data with a term that penalizes the model in proportion to its number of parameters. AIC and BIC are easier to compute than a cross-validation estimate of predictive performance and can accurately select the best model when their assumptions are met. The best model is the one with the smallest AIC or BIC value, indicating the least information loss relative to the true model. The key difference between BIC and AIC is that the former imposes a greater penalty on the number of parameters.
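
A minimal sketch of this estimation-and-selection loop using the Python arch package; the returns series here is synthetic, the lag grid is truncated at three (the study's grid also included larger lags, such as p = q = 10), and we add a single asymmetry term (o=1) to the EGARCH fits so that the gamma coefficient discussed below is estimated.

```python
import itertools

import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic stand-in for an asset's daily log returns (in %); in the study,
# each of the nine assets' real return series is used instead.
rng = np.random.default_rng(42)
returns = pd.Series(rng.standard_t(df=4, size=1500)) * 1.5

results = {}
for vol, mean, dist, lag in itertools.product(
    ["GARCH", "EGARCH"],         # symmetric vs. potentially asymmetric models
    ["Zero", "Constant", "AR"],  # the three mean specifications tested
    ["normal", "skewt"],         # error distributions
    [1, 2, 3],                   # p = q lags
):
    model = arch_model(
        returns,
        vol=vol,
        p=lag,
        o=1 if vol == "EGARCH" else 0,  # o > 0 adds the asymmetry (gamma) term
        q=lag,
        mean=mean,
        lags=1 if mean == "AR" else 0,
        dist=dist,
    )
    res = model.fit(disp="off")  # parameters estimated by maximum likelihood
    results[f"{vol}({lag},{lag},{mean.lower()},{dist})"] = res

# Rank by information criteria: the smallest value indicates the best model.
best_aic = min(results, key=lambda name: results[name].aic)
best_bic = min(results, key=lambda name: results[name].bic)
print("Best by AIC:", best_aic, "| Best by BIC:", best_bic)
```

The fitted result objects expose `aic` and `bic` directly, which is what makes this grid-and-rank pattern convenient for comparing dozens of specifications per asset.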

We used a significance level of 5% throughout our analysis to identify statistically significant coefficients in our models. This approach tests the null hypothesis that no relationship exists between the independent and dependent variables; we reject the null hypothesis and conclude that a significant relationship exists when the p-value is less than 0.05.

Findings

General results

By analyzing a large dataset of daily returns for nine different crypto assets, the study was able to identify the best models for each asset based on the AIC and BIC criteria.

The table below provides a summary of the best model chosen based on both metrics. Throughout the discussion of the results, the different combinations of lags, mean model and residual distribution are noted in the form of (p, q, mean model, distribution).

The initial results show that different assets exhibit unique patterns of volatility and that these patterns are best captured by specific models. For example, considering the AIC, the study found the EGarch(10, 10, zero, skewt) model to be the best fit for Bitcoin, while the EGarch(1, 1, constant, skewt) model was the top choice for Ethereum. These models take into account the asymmetric and fat-tailed nature of cryptocurrency returns, which is often not captured by traditional models like the GARCH with a normal distribution. The study also found that the Uniswap and GMX assets showed a preference for the EGarch(1, 1, zero, skewt) model, which suggests that these assets have similar volatility dynamics. In contrast, the Garch(1, 1, zero, skewt) model was found to be the best fit for Compound and Euler, indicating that these assets have a different volatility pattern than the other assets. Similarly, the EGarch(2, 2, zero, skewt) model was found to be the top choice for Lido, which again suggests a unique volatility pattern.

These results demonstrate that the skewed Student's t-distribution consistently outperforms the normal distribution in modeling cryptocurrency returns. This is clearly reflected in the table, which shows that all of the best models identified based on AIC and BIC criteria incorporate this distribution rather than the normal distribution. This finding is not surprising, given the well-known presence of fat tails and skewness in crypto assets, explored in the first part of this article. Moreover, it is important to note that this result is not limited to the best models only; it is evident across all the models that were regressed.

Our analysis revealed that the zero mean model, according to both BIC and AIC, consistently outperformed the other mean models considered, which is consistent with the findings of Selmi and Mensi (2017). This finding indicates that cryptocurrency markets may not exhibit an inherent trend or level in the data, making modeling the volatility of the returns more informative than modeling the mean. Moreover, the lack of statistical significance of the mean coefficient suggests that including a mean model may not be necessary in most cases. However, for Ethereum, the constant mean model was the best performing model according to AIC, with the mean coefficient being statistically different from zero.

Our findings also revealed that coefficients of models with larger lags were most of the time not statistically different from zero. As a result, models with lags equal to one consistently outperformed the other models, especially when considering the BIC, which penalizes model complexity more strongly. This indicates that volatility is largely influenced by volatility from the previous day and not from days earlier. However, given that the majority of coefficients for higher lags were not statistically different from zero, we could not draw any conclusions about the different types of assets and model selection.

For most of the models analyzed, the gamma coefficient was not statistically different from zero, indicating that there was no significant asymmetry effect. Please refer to part 1 of this article to recall the definitions of the various coefficients used in GARCH estimation. For example, for Bitcoin, this coefficient was statistically different from zero in 3 out of the 15 models analyzed. These models were EGarch(10, 10, zero, skewt), EGarch(3, 3, constant, skewt), and EGarch(3, 3, AR, skewt). For Ethereum, none of the gamma coefficients were statistically different from zero. This finding aligns with the initial understanding that the asymmetry effect may be less widespread in the cryptocurrency market due to the presence of more enthusiastic and less sophisticated traders who take a price increase as a positive trend. These characteristics of the cryptocurrency market should be considered in the accurate analysis and forecasting of its financial time series. However, it should be emphasized that this can be a sign of a recent industry and might not hold true indefinitely. Interestingly, for the cases in which the gamma coefficient was statistically different from zero, the EGarch models still outperformed the Garch models.

Overall, our research sheds light on the importance of coefficients being statistically different from zero in the best-performing models for cryptocurrency. We found that, on numerous occasions, removing the non-significant coefficients led to improvements in both BIC and AIC.

The results indicate that the statistical properties of cryptocurrency returns may be influenced by unique market characteristics. The lack of statistical significance of the mean model coefficients, the gamma coefficient, and larger lags suggests that the behavior of some assets may not depend heavily on the past and may exhibit less pronounced asymmetry effects, possibly due to limited data availability or the tendency of most assets to mimic Bitcoin.

Indeed, we observe many similarities when comparing the logarithmic returns of different crypto assets. The graphs below display the logarithmic returns of Bitcoin, Ethereum, and Uniswap for the same time period, indicating similar behavior of returns over time, albeit with varying magnitudes. As expected, Bitcoin, being a more stable crypto asset, shows less pronounced volatility. Nonetheless, returns appear to react similarly across time, with peaks observed around the same month and year across all series. For example, all series show a large spike around May 2021. It is important to recognize the differences between the crypto industry and traditional finance to determine the appropriate models to use for volatility analysis. Models that perform well in the stock market may not be applicable in the crypto market, where assets are highly correlated. This correlation can impact the development and application of volatility models, requiring a closer look at case-specific details to choose the most applicable model.

Therefore, correctly identifying the non-zero coefficients in EGarch/Garch models seems to be essential to capture these unique features, emphasizing the importance of considering the distinct nature of cryptocurrency data when building models for analysis and prediction. These findings highlight the need for further exploration of the underlying drivers of cryptocurrency returns and the market itself.

Results when excluding the initial observations

We decided to exclude the first 30 days of trading data, based on our understanding of how disproportionately volatile the cryptocurrency market can be in the early stages of a new token or coin. Extreme early price swings may distort the overall picture, making it harder to draw meaningful conclusions. By excluding this period, we aim to reduce noise and obtain a more accurate and trustworthy model of each asset's volatility.

Indeed, as seen in the Ethereum log return chart (below), the largest peak occurs within the first few days of the period.

Even so, the skewed t-distribution continues to outperform the normal distribution in all cases, consistent with our previous analysis of the full dataset. Even after removing the highly volatile initial trading period, the skewed t-distribution provides a better fit for modeling cryptocurrency returns, suggesting that the market exhibits more skewness and kurtosis than a normal distribution can adequately capture.

With the exception of GMX, the best model remains the same in both cases, indicating that excluding the initial 30 days of trading data does not significantly affect the selection of the best-fitting model for most cryptocurrencies. Moreover, this consistency in model selection builds confidence in the reliability of the chosen models: when models match across datasets, it suggests they are robust and provide a consistent picture of the underlying phenomena.
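
A sketch of this robustness check, under the same assumptions as the earlier snippets (returns in percent, an EGARCH(1, 1) with skewed t errors standing in for each asset's preferred model):

```python
# Refit the preferred model with and without the first 30 trading days and
# compare BIC; `eth_returns` is an assumed pandas Series of daily log returns.
from arch import arch_model

def egarch_bic(series):
    am = arch_model(series, mean="Constant", vol="EGARCH",
                    p=1, o=1, q=1, dist="skewt")
    return am.fit(disp="off").bic

print(egarch_bic(eth_returns), egarch_bic(eth_returns.iloc[30:]))
```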

Conclusion

In conclusion, our study highlights the value of GARCH/E-GARCH models in providing accurate estimates of market volatility for crypto assets, which is crucial given the broad applicability and significance of this metric in the crypto market. The statistically significant non-zero coefficients in the best-performing models are essential for capturing the unique features of this data, emphasizing the importance of considering the distinct nature of these assets when building models for analysis and prediction.

Moreover, we concluded that further analysis of short- versus long-term volatility patterns, as well as adopting higher frequency data to capture intraday dynamics, could improve the precision and applicability of these models. By analyzing short-term and long-term volatility, we can account for sudden changes in market sentiment or underlying economic fundamentals, thereby enhancing the accuracy of GARCH models for crypto assets, in particular for forecasting. Furthermore, ensuring a balance between the frequency of data used and the computational demands of the model is crucial, as both too high and too low frequency may negatively impact the accuracy of the model. Nevertheless, considering that crypto assets can be traded continuously while the stock market has opening and closing times, it seems reasonable to test models with different data frequencies.

Another avenue of exploration that surfaced is tied to the purpose of measuring volatility. For instance, we are probably only concerned with shorter-term volatility when analyzing an asset's volatility to determine the risk of using it as collateral; on the other hand, long-term volatility matters when constructing a portfolio. One of our proposed next steps is to determine whether different models should be used for different applications. Recently, some researchers have introduced hybrid methods that combine machine learning techniques, such as Artificial Neural Networks (ANN) and Support Vector Regression (SVR), with GARCH-type models to improve volatility forecasts. As these models are known to thrive on abundant data, we look forward to analyzing how they perform with higher-frequency data in the context of cryptocurrencies, particularly for short-term forecasting.



Volatility Series Part I - Modelling Volatility in Crypto Assets

by Carolina Goldstein and Joana Gomes • Thursday, April 20 2023 • Published at Three Sigma

Introduction

The crypto market has been subject to significant turbulence, as evidenced by events like the crash of May 2021, which wiped more than 40% off its market cap in a matter of days. These sudden price swings, along with events outside crypto such as the collapse of Silicon Valley Bank, have repeatedly underlined the need for better risk management in the industry. We at Three Sigma, along with other researchers and developers, have been looking at cutting-edge techniques to build more reliable risk models that can better handle these complexities.

Volatility is a crucial factor to consider since it significantly affects all risk models. While it has been extensively researched in traditional finance, the emergence of crypto markets has introduced new challenges that require a deeper analysis of the subject. Crypto assets are characterized by unique features, such as large sudden price swings and a lack of regulation, which introduce new risks and uncertainties and call for a revision of the techniques used to account for volatility. In this article, we aim to provide a comprehensive evaluation of the volatility dynamics in the cryptocurrency market by exploring how volatility can be successfully modeled. This article is the first of two; here we examine volatility as a statistical metric, its peculiarities in the cryptocurrency market, and the models that have been used to explain it, with an emphasis on the GARCH models.

What is volatility and why should we model it?

Volatility is a statistical measure of the dispersion of data around its mean over a period of time. In finance, it typically refers to the extent of fluctuation in an asset's price over time.

In the financial system, volatility is important because it affects the price of assets such as stocks, bonds, and currencies. When there is high volatility in the market, it can lead to significant fluctuations in the prices of these assets, which can create opportunities for profit or pose risks for investors. It can also affect the stability of the financial system as a whole, as large swings in asset prices can lead to systemic risks.

In the crypto industry, considering volatility is even more significant, as most cryptocurrencies are highly volatile due to their relatively small market sizes, lack of regulation, and speculative nature. For example, those who bought Bitcoin in early 2017 saw its price rise from around \$1,000 to nearly \$20,000 by the end of the year, while those who bought at that peak saw its price drop to around \$3,000 in 2018. As can be seen in the chart below, these were not even the most significant price movements, which came after 2020.

Volatility is a crucial factor in risk management, pricing, and portfolio construction for both the traditional financial system and the crypto industry. Portfolio managers use it to assess overall portfolio risk, and algorithms use it to determine appropriate trading rates in real time. Volatility is also used to price options and other structured products, and traders use it to understand potential price movements over the trading day and to compute trading costs. In option pricing theory, volatility measures the risk of a security, with more volatile assets bearing more risk alongside a greater probability of higher returns. This balance is known as the risk-return trade-off, which investors must weigh when making investment decisions. In addition, volatility itself can be traded speculatively through derivatives.

Volatility properties

Volatility, while not directly observable, is manifested in many asset returns and traditionally exhibits some characteristics. These characteristics, known as stylized facts, include volatility clusters, in which periods of high and low volatility alternate, and fat tails in its probability distribution, indicating a greater likelihood of big fluctuations than what would be expected in a normal distribution.

Moreover, volatility fluctuates arbitrarily and repeatedly, with volatility spikes persisting before settling to a long-term level. It also typically exhibits mean reversion, as it remains within a set range. In addition, the so-called leverage effect typically causes an asymmetric effect in which a large price decline has a greater impact than a significant price increase, resulting in larger spikes of negative than positive volatility. This will be further elaborated later on in the text. Heteroskedasticity, which refers to situations where the variance of residuals is not constant across a range of measured values, is also usually observed.

Similar properties were also detected in the returns of crypto assets. However, empirical studies have indicated that the cryptocurrency market exhibits greater volatility and that its probability distribution presents fatter tails than in traditional financial markets (Tsyvinski & Liu, 2018; Alexander & Gholampour, 2018; O'Connor, 2019).

The chart below shows the logarithmic returns of Bitcoin since its inception, demonstrating the phenomena of volatility clustering and mean reversion. This pattern may be observed for example in the Bitcoin returns chart around 2018, when a period of high volatility was followed by a period of lower volatility.
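
The same clustering diagnostic can be reproduced with a short script; a minimal sketch, assuming a local CSV of daily closes:

```python
# Annualized rolling volatility of Bitcoin log returns; the file name and
# column are assumptions for illustration.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.read_csv("btc_daily_close.csv", index_col=0, parse_dates=True)["close"]
log_returns = np.log(prices).diff().dropna()

# Crypto trades every day of the year, hence sqrt(365) rather than sqrt(252).
rolling_vol = log_returns.rolling(window=30).std() * np.sqrt(365)
rolling_vol.plot(title="30-day rolling volatility of BTC (annualized)")
plt.show()
```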

The images below show quantile-quantile (Q-Q) plots for the S&P 500 and Ether logarithmic returns. A Q-Q plot is a graphical tool for assessing whether a data set is likely to have originated from a theoretical distribution such as the normal. When two sets of quantiles are plotted against one another, the points should form a roughly straight line if both sets come from the same distribution, which makes the normal Q-Q plot a standard check for any statistical analysis that assumes normally distributed residuals.

Contrasted with a normal distribution, the patterns in the charts below are indicative of heavy tails: the empirical distributions deviate further than a normal distribution's predicted level of variance. Disregarding outliers, it is also interesting that while the S&P 500 distribution bends upward only for a few observations in the right tail, Ether returns exhibit a smooth upward curve. This may signal a left skew, or asymmetric distribution, in the S&P 500 returns, shown by a larger tail to the left. The aforementioned leverage effect, the well-established negative relationship between return and future volatility, might be responsible: volatility rises after large price declines, whereas the effect is smaller for upward price movements. This is typically observed in the stock market but has been found to be weaker in crypto assets (Zhao, Chen, & Zhang, 2022), and indeed the S&P 500 returns seem to exhibit it more than Ether's.

The cryptocurrency market is less regulated and lacks fundamental drivers, resulting in greater randomness and unpredictability in price fluctuations. Moreover, the asymmetry effect may be less widespread due to the presence of more enthusiastic, less sophisticated traders who take a price increase as a positive trend. Understanding these unique characteristics is critical for the accurate analysis and forecasting of the market's financial time series.
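
A Q-Q plot like the ones discussed can be produced with scipy; a minimal sketch, reusing the loading pattern above with an assumed Ether price file:

```python
# Q-Q plot of Ether log returns against a fitted normal distribution; heavy
# tails show up as points curving away from the reference line at both ends.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

prices = pd.read_csv("eth_daily_close.csv", index_col=0, parse_dates=True)["close"]
log_returns = np.log(prices).diff().dropna()

stats.probplot(log_returns, dist="norm", plot=plt)
plt.title("Q-Q plot of Ether log returns vs. normal distribution")
plt.show()
```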

Volatility models

These stylized facts have provided the foundation for the development of various volatility models. Volatility models fall into two primary types: implied or forward-looking models and conditional or historical models. Implied volatility models aim to capture how the market perceives volatility, often with a premium; estimates are obtained by reverse-engineering the prices of options and other structured products. Use cases include the VIX, the most commonly used volatility index for the S&P 500, and the VI, a similar index for crypto assets. In contrast, conditional or historical volatility models use past data to estimate volatility; examples include quadratic returns, the standard deviation, beta, the Heston model, and the ARCH/GARCH models. By drawing on past data or market perception, these models provide insight into the volatility of a given asset or market, which can help investors make informed decisions.

The introduction of the ARCH model by Engle in 1982 led to the widespread use of ARCH-family models for volatility modeling and forecasting of economic and financial time series. Over the years, the family has expanded to include a large number of models, with the GARCH(1, 1) model proposed by Bollerslev and Taylor in 1986 becoming one of the most popular. For a deeper understanding of the various models within the ARCH family, Hansen and Lunde (2005) conducted a comprehensive review and comparison. Numerous extensions of the GARCH model have since arisen to address some of its drawbacks, such as its inability to reflect the asymmetry between the impact of positive and negative shocks on volatility. The IGARCH model permits the incorporation of long-term dependencies in volatility, whereas the EGARCH model accounts for the leverage effect, and the ARGARCH model includes autoregressive elements to account for the persistence of volatility shocks. These modifications aim to enhance the precision and dependability of financial time series analysis and forecasting.

The present research focuses on conditional volatility models of the GARCH/E-GARCH variety due to their ability to address a wide range of volatility features. Firstly, these models incorporate lagged volatility terms in their equations, which effectively accounts for the persistence commonly observed in financial market returns. Additionally, GARCH and EGARCH models account for volatility clustering by considering both past returns and past volatility. The basic GARCH model ignores the impact of the sign of residuals on volatility, since it only considers squared residuals; however, as mentioned before, negative shocks tend to affect financial volatility more than positive shocks. To account for this, Nelson (1991) introduced the EGARCH model, which incorporates this effect. By taking these various volatility characteristics into account, GARCH/E-GARCH models have proven to be useful tools for estimating volatility in financial markets.

Given the established usefulness of GARCH/E-GARCH models in estimating volatility in financial markets, as demonstrated by well-known resources such as Zivot and Wang (2006) and Gonçalves and Meddahi (2018), this study aims to explore the applicability of these models in the context of cryptocurrency markets.

The GARCH Class

In their 1970 publication "Time Series Analysis: Forecasting and Control," George Box and Gwilym Jenkins presented the ARMA (autoregressive moving average) model and its variations as effective approaches for modeling and forecasting time series data. Since then, the ARMA model has become not only a widespread statistical tool for time series analysis but also a common technique in applied fields such as finance, economics, engineering, and environmental research. The ARMA model combines two processes: the autoregressive (AR) process, which models the relationship between an observation and its previous observations, and the moving average (MA) process, which models the relationship between an observation and its past errors. The ARMA model is commonly denoted as ARMA(p, q), where p and q represent the orders of the autoregressive and moving average components, respectively. It is formally described as follows:

$$x_t=\mu+\phi_1x_{t-1}+...+\phi_{p}x_{t-p}+e_t-\theta_1e_{t-1}-...-\theta_qe_{t-q}$$

where:

$x_t$ is the time series value at time t;

$\mu$ is the constant term or intercept;

$\phi_1,...,\phi_p$ are the autoregressive coefficients;

$\theta_1,...,\theta_q$ are the moving average coefficients;

$e_t$ represents a random innovation, i.e. random noise, for which:

$E[e_t]=0$;

$var(e_t)=\sigma^2_e$, where $\sigma^2_e$ is the variance of the innovation.

The AR(p) process is a type of autoregressive model, where the value of the current observation is modeled as a linear combination of its p previous values (Cryer and Chan, 2008, p. 66). The parameter p represents the order of the autoregressive model, indicating the number of previous time lags included in the model. The AR(p) process is useful for identifying trends, seasonal patterns, and other patterns in time series data, and it can be used to make predictions about future values of the series. It is described by the following equation:

$$x_t=\phi_1x_{t-1}+...+\phi_{p}x_{t-p}+e_t$$

Similarly, the MA(q) process is a type of time series model where the value of the current observation is modeled as a linear combination of q previous errors (Cryer and Chan, 2008, p. 57-65). The MA(q) process is used to remove the effects of random fluctuations or noise in the time series data, and it is particularly useful for identifying short-term patterns in the data. By analyzing the residuals of an MA(q) model, it is possible to identify any systematic patterns or trends in the data that may be useful for forecasting future values of the series. It is described by the following equation:

$$x_t=e_t-\theta_1e_{t-1}-...-\theta_qe_{t-q}$$
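
To make the notation concrete, here is a minimal sketch of fitting an ARMA(1, 1) with statsmodels; synthetic heavy-tailed data stands in for real returns, so all inputs are assumptions for illustration.

```python
# Fitting an ARMA(1, 1) mean model; ARMA(p, q) is ARIMA(p, 0, q).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=1000) * 0.03  # heavy-tailed stand-in for returns

res = ARIMA(x, order=(1, 0, 1)).fit()
print(res.summary())  # estimates of the intercept, phi_1 and theta_1
```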

The above-mentioned models entail that the variance of innovations, also known as volatility, remains constant throughout time; this is referred to as homoscedastic variance. Nevertheless, this assumption is often too restrictive for real-world data, as it does not account for modeling of characteristics such as volatility clustering. To address this issue, Engle (1982) proposed the ARCH (Autoregressive Conditional Heteroskedasticity) model, which takes into consideration the heteroscedastic variance of a time series applied to an AR(p)-process. Later, Bollerslev (1986) introduced the GARCH (Generalized Autoregressive Conditional Heteroscedasticity) model, a generalization of the ARCH model.

The ARCH model is a type of time-series equation that extends the ARMA model by adding an extra component to capture the series' conditional variance. Specifically, the ARCH model estimates the conditional variance of the series using only the autoregressive (AR) component of the ARMA model. The ARCH model assumes that the conditional variance of the series at time t is a function of the AR model's past squared errors, or residuals.

Since Engle's original paper, the ARCH model has been frequently utilized in finance and economics to represent time-varying financial data volatility. Several extensions have been applied to the model, including the GARCH model. The GARCH model allows for time-varying volatility in the series, similar to the ARCH model, but it extends the ARCH model by capturing the persistence in volatility. Bollerslev's GARCH model models the conditional variance of a time series using both the autoregressive (AR) and moving average (MA) components of the ARMA model.

The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model can be derived by combining the ARMA(p, q) process defined above with a time-varying conditional variance $\sigma_t$:

$$x_t=\mu+\phi_1x_{t-1}+...+\phi_{p}x_{t-p}+e_t-\theta_1e_{t-1}-...-\theta_qe_{t-q}$$

assuming $e_t=\sigma_t\epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0,\,1)$ are standardized residuals.

A volatility-GARCH(p,q) process is defined as:

$$\sigma^2_t=w+\alpha_1e^2_{t-1}+...+\alpha_qe^2_{t-q}+\beta_1\sigma^2_{t-1}+...+\beta_p\sigma^2_{t-p}$$

where:

$\sigma^2_t$ is the conditional variance at time t;

$\omega$ is the constant term or intercept;

$\beta_1,...,\beta_p$ are the autoregressive coefficients;

$\alpha_1,...,\alpha_q$ are the moving average coefficients.

The residuals are generally assumed to follow a normal distribution. This is an important assumption, as it allows the use of standard statistical techniques, such as maximum likelihood estimation, to estimate the model's parameters. It is important to point out, however, that the assumption of normality may not hold in practice. In certain cases, alternative distributions, such as the Student's t-distribution or the Generalized Error Distribution (GED), may be preferred, as discussed in the second part of this article. These distributions are more flexible and can accommodate heavy tails or skewness in the residuals distribution.

GARCH (1,1)

The most commonly used GARCH model is the GARCH(1,1) model, which specifies that the conditional variance at time t is a function of the squared error term at time t-1, the conditional variance at time t-1, and a constant term. As described in the equation below, the GARCH(1,1) model has three parameters, which are estimated from the data using statistical techniques such as maximum likelihood estimation. Many studies find that the simple GARCH(1, 1) model provides a good approximation to the observed temporal dependencies in daily data; see Baillie and Bollerslev (1989), Bollerslev (1987), Engle and Bollerslev (1986), and Hsieh (1989) for some of the early evidence.

$$\sigma_t^2=\omega+\alpha_1 e_{t-1}^2+\beta_1 \sigma_{t-1}^2$$

assuming $e_t=\sigma_t\epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0,\,1)$ are standardized residuals.

where:

$\sigma^2_t$ is the conditional variance at time t;

$\omega$ is the constant term or intercept;

$\beta_1$ is the autoregressive coefficient;

$\alpha_1$ is the moving average coefficient.
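
In practice, these three parameters can be estimated by maximum likelihood in a few lines; a minimal sketch with the `arch` package, again on synthetic data standing in for percent returns.

```python
# Estimating GARCH(1,1) by maximum likelihood on synthetic percent returns.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000) * 3.0  # stand-in for percent returns

am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
res = am.fit(disp="off")
print(res.params)                       # mu, omega, alpha[1], beta[1]
print(res.conditional_volatility[-5:])  # fitted sigma_t for the last five days
```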

EGARCH

The EGARCH (Exponential GARCH) model is an extension of the GARCH model, which allows for asymmetric effects of positive and negative shocks on volatility. While both models incorporate the ARMA components to model the conditional variance of a time series, the EGARCH model allows for non-linear effects of the ARMA components. This non-linearity allows the EGARCH model to capture more complex patterns in the conditional variance, including asymmetry in the response of volatility to positive and negative shocks.

Furthermore, the EGARCH model is formulated in terms of the log of the conditional variance, so it does not require the non-negativity constraints on coefficients that the basic GARCH model does; stationarity instead requires the autoregressive (beta) terms to sum to less than one in absolute value. This restriction somewhat limits the model's flexibility, but it also ensures that the model is well-behaved and can be estimated reliably. The model is described by the following equation:

$$\log \sigma_t^2=\omega+\sum_{i=1}^q \left[\alpha_i\left(|e_{t-i}|-E|e_{t-i}|\right)+\gamma_i e_{t-i}\right]+\sum_{j=1}^p \beta_j \log \sigma_{t-j}^2$$

assuming $e_t=\sigma_t\epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0,\,1)$ are standardized residuals.

where:

$\sigma^2_t$ is the conditional variance at time t;

$\omega$ is the constant term or intercept;

$\beta_1,...,\beta_p$ are the autoregressive coefficients;

$\alpha_1,...,\alpha_q$ are the moving average coefficients;

$\gamma_1,...,\gamma_q$ are the leverage effect coefficients;

$E|e_{t}|$ is the expected absolute value of $e_t$.
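
The EGARCH variant is fitted the same way; continuing the previous sketch on the same synthetic `returns`, `o=1` adds the asymmetry term and `dist="skewt"` uses the skewed t-distribution examined in part 2 of this series.

```python
# EGARCH(1, 1) with a leverage term and skewed t errors, on the same `returns`.
from arch import arch_model

am = arch_model(returns, mean="Constant", vol="EGARCH",
                p=1, o=1, q=1, dist="skewt")
res = am.fit(disp="off")
print(res.params)        # gamma[1] is the leverage/asymmetry coefficient
print(res.aic, res.bic)  # the information criteria used to compare models
```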

The GARCH universe has evolved over time from the ARMA model to the ARCH and GARCH models, each of which are more complex. These models allow modeling of time-varying volatility and volatility persistence in financial data. As the financial markets continue to evolve, so do the GARCH models, with newer extensions such as EGARCH, TGARCH, and IGARCH being developed to capture different volatility features.

Conclusion

Volatility is a crucial factor in risk management, pricing, and portfolio construction in both traditional financial systems and the cryptocurrency industry. However, the unique features of the crypto market highlight the importance of studying a tailored approach to modeling and forecasting volatility in that context. To achieve this, this study focuses on the GARCH/E-GARCH models, which can account for various volatility features such as persistence, clustering and asymmetry. The GARCH family of models has proven to be a valuable tool for estimating volatility in financial markets, and this article examines its applicability to the cryptocurrency market.

In the next part of this article, we will provide a comprehensive assessment of the cryptocurrency market's volatility dynamics. This is achieved by estimating models that better account for volatility using various specifications, parameters, and lags. We will describe our methodology, discuss the results, provide recommendations, and outline future research directions.

Sources & References

Alexander, C., & Gholampour, V. (2018). Bitcoin: Medium of exchange or speculative assets? Journal of Financial Economics, 130(2), 367-378. https://doi.org/10.1016/j.jfineco.2018.05.005

Box, G. E. P., & Jenkins, G. M. (1970). Time series analysis: Forecasting and control. Holden-Day.

Chan, J. C. C., & Eisenstat, E. (2020). Time-varying price discovery in cryptocurrency markets. Journal of Empirical Finance, 56, 42-58.

Gherghina, Ş.C. & Simionescu, L.N. (2023). Exploring the asymmetric effect of COVID-19 pandemic news on the cryptocurrency market: Evidence from nonlinear autoregressive distributed lag approach and frequency domain causality. Financial Innovation 9, 21. https://doi.org/10.1186/s40854-022-00430-w

Gonçalves, S., & Meddahi, N. (2018). Multivariate GARCH models: A survey. Journal of Applied Econometrics, 33(1), 1-23. https://doi.org/10.1002/jae.2574

Hansen, P. R., & Lunde, A. (2005). A forecast comparison of volatility models: Does anything beat a GARCH(1,1)? Journal of Applied Econometrics, 20(7), 873-889. https://doi.org/10.1002/jae.837

Liew, V. K. S., & Baharumshah, A. Z. (2018). On the returns of Bitcoin investment. Economics Letters, 166, 23-27.

O'Connor, M. (2019). The efficiency of the Bitcoin market: An analysis of Bitcoin market efficiency and volatility using Google search data. Journal of Risk and Financial Management, 12(3), 114. https://doi.org/10.3390/jrfm12030114

Selmi, R., & Mensi, W. (2017). The impacts of terrorism on stock market volatility: Evidence from eight OECD countries. Finance Research Letters, 21, 36-42.

Tsyvinski, A., & Liu, Y. (2018). Cryptocurrencies as an asset class: An empirical assessment. National Bureau of Economic Research. https://doi.org/10.3386/w24877

Zivot, E., & Wang, J. (2006). Modelling financial time series with S-PLUS. Springer Science & Business Media. https://doi.org/10.1007/0-387-28663-0

Zhao, X., Chen, W., & Zhang, Y. (2022). The impact of economic policy uncertainty on cross-border mergers and acquisitions: Evidence from China. Journal of Finance and Economics, 11(1), 1-20.


On Tokenomics Series Part II - The Game of Supply and Demand: The Demand

by Carolina Goldstein and Joana Gomes • Tuesday, April 4 2023 • Published at Three Sigma

This is the second part of a two-part blog post about how supply and demand affect token design. The first part explained how to match the project's goals with your token and manage the supply side, while this second part provides a deeper understanding of the mechanisms that drive demand and how to balance both sides of the scale. Read the first part here.

Demand

Token demand is driven by the benefits the token provides and the expectation that these benefits will grow. It is a less objective topic than token supply, and the key lies in determining which mechanisms most drive demand for a token in its particular environment. For users to demand a token, they must find that it comes with some utility. Below, we explore some mechanisms you can use to give a token economic utility.

Store of value, unit of account, and medium of exchange

Tokens can generally serve as a store of value, unit of account, and medium of exchange. The ability to perform these basic functions is a fundamental characteristic that contributes to a token's overall value. A store of value refers to the token's ability to maintain its purchasing power over time: the token should hold its value and not be subject to significant volatility, making it a reliable asset to hold.

A unit of account refers to the ability of the token to be used as a measure of value for goods and services. This means that the token can be used to price and quantify the value of different assets within the ecosystem. This enables easy comparison and tracking of value within the ecosystem and is a key component in facilitating trade and commerce.

A medium of exchange refers to the ability of the token to be used as a means to facilitate transactions. This means that the token can be used as a form of payment for goods and services within the ecosystem. This is important for the token to be widely accepted and used within the ecosystem, which helps create a functional economy.

Overall, the ability of a token to serve as a store of value, unit of account, and medium of exchange is a key part of any ecosystem and adds value that grows as the ecosystem grows. As more people use the token and more goods and services are priced in it, the token becomes more valuable and useful, creating a positive feedback loop that encourages continued growth and development.

Revenue sharing

One of the most direct ways to give a token utility is to use it as a vehicle for revenue sharing. You can distribute part of the project's revenue back to token holders, who can have a right to, for example, a portion of the fees generated by the platform; in an AMM, this might be a small percentage of each trade that traders pay the platform for executing their trade. Such revenues can be distributed on-chain periodically, akin to on-chain dividends, or as they accrue. It is important that the form of this revenue distribution retains its value if it is to influence demand, as distribution through an illiquid token, for example, might not have the same impact. Revenues can also be distributed through a buyback-and-burn mechanism, in which the proceeds are used to remove tokens from the circulating supply to increase the perceived value of the remaining tokens. In that case, revenue sharing directly influences the supply rather than the demand of a token.

Legal experts consider that some tokens with revenue-sharing functions are likely to be classified as securities by regulators, which creates problems for operating in US markets. In the US, tokens that are sold as investments and provide a return on investment, such as through dividends or buybacks, are considered securities and are therefore subject to securities laws and regulations. The main problem is that, in order to legally sell securities, the issuer must either register the offering with the Securities and Exchange Commission (SEC) or qualify for an exemption. This can be a costly and time-consuming process, and many projects may not be able to meet the requirements. Even projects that do register or qualify for an exemption must comply with ongoing reporting requirements and other regulations. This can be a significant burden and may discourage projects from using the revenue-sharing token model.

Governance rights

Governance rights can help you bring a significant amount of utility to a token. Governance rights refer to the ability for token holders to participate in the decision-making process of the protocol or platform that the token is associated with. This can include things like voting on proposed changes to the protocol, participating in community decision-making, or even having the ability to propose new initiatives.

The utility of governance rights can be seen in a few key ways. First, governance rights give token holders a sense of ownership and control over the protocol or platform. This can create a sense of community and engagement that encourages long-term participation and investment.

Second, governance rights can be used to align the interests of token holders with the interests of the protocol or platform. When token holders are able to participate in decision-making, they are more likely to act in ways that benefit the protocol or platform, and therefore benefit the value of their own tokens.

Third, governance rights can be used to create a more efficient and responsive ecosystem. When token holders are able to participate in decision-making, they can provide valuable feedback and input that can help improve the protocol or platform. This can help create a more effective and sustainable ecosystem over time.

Overall, you can use governance rights to bring a significant amount of utility to a token by creating a sense of ownership, aligning interests, and fostering a more responsive and efficient ecosystem. The perceived utility will vary depending on the size and scope of addressable decisions, the amount of protocol resources being managed, and the ability to influence outcomes. The value of governance is therefore hard to quantify, and it may be appreciated far more in some DeFi categories than in others, e.g. when there is a direct link to monetary gains through directing liquidity rewards in decentralized exchanges. It should also be noted that governance powers may not be uniformly distributed, which can likewise affect the perceived value of governance. For example, granting governance rights to holders of tokens that are still vesting can be problematic, as their time horizon differs from that of the actual active participants, which can misalign their interests.

Reserves

Tokens can be backed by, and redeemable for, other assets, such as fiat currency, other cryptocurrencies, or even real-world assets, giving them inherent value. Stablecoins are one example, but not the only one. Some tokens are backed by assets in reserve even if they are not pegged to the value of those assets. Olympus, for example, introduced the token OHM: there was $1 worth of assets in reserve for every OHM, and if the price of OHM ever reached this value, the reserve would be used to buy back tokens. In this case, however, the assets were not redeemable until the mechanism of inverse bonds was introduced at the end of 2022. Inverse bonds gave Olympus a means to take OHM out of circulation, increasing the backing per OHM with every inverse bond executed: OHM is essentially given back to the treasury in return for treasury assets, but only when the token price is below its liquid backing.

By holding reserves of the underlying assets, you can provide stability for the token's value and reduce volatility, which can increase demand for the token by making it more attractive as a store of value. If a project can create the perception that the token is a store of value, the token's perceived value increases, and with it demand, as the token becomes more attractive as a means of preserving wealth. By holding redeemable reserves of the underlying assets, you can ensure that the token can be exchanged for the underlying assets at any time, which can increase demand by providing an easy way to convert the token into other assets, making it more attractive for investors and traders. The perception that the token is backed by tangible assets can also increase its perceived value and trust in the project, and therefore demand. This should, however, be considered alongside the other mechanisms for adding value to the token, not in isolation. It is also worth noting that the size, management, and transparency of the reserves are important factors that can affect demand.

Benefits, boosts, discounts and airdrops

You might reward token holders simply for holding or staking their tokens. This can come in the form of airdrops of more tokens or of a related project's token, boosted liquidity mining rewards, or discounted platform fees. You can also introduce gatekeeping mechanisms, whereby only holders of the token receive certain benefits, such as access to particular services or assets, e.g. the ability to mint an NFT. These mechanisms can raise token awareness, which can also increase demand. It is, however, important to remember that the design and management of these benefit, boost, discount, and airdrop programs can have a great impact.

Penalties

You can also retain demand by imposing penalties on negative behaviors, for example through staking withdrawal penalties or taxes on selling, encouraging holders not to part with their tokens. By imposing penalties for withdrawing tokens before the end of the staking period, you create an incentive for token holders to hold for the long term. On the other hand, this mechanism has to be well built so that it does not create selling pressure if a large quantity of tokens reaches the end of the staking period at the same time. A token tax is a mechanism where token holders are charged a percentage of their tokens when they sell them on the open market, which can also discourage selling.

Marketing and community building

Tokens can evoke certain emotional and behavioral contexts, such as a sense of community and the belief that others will pay more for the token in the future. Active marketing campaigns on Twitter, community engagement on Discord, and other forms of community building can be consciously explored to boost these effects and drive demand. The value of these approaches is hard to measure, but it is important to consider, as it can have a significant impact on the demand for a token.

Some emotional and behavioral situations, such as fear of missing out, can create a feeling of urgency that increases demand for the token. Tokens can also evoke a sense of community among holders, as they are invested in the same project and share a common goal. This creates a sense of belonging that can make holders more likely to keep their tokens. As the project and token gain more adoption and recognition, holders come to believe that others will pay more for the token in the future, creating a feeling of anticipation that can further increase demand.

It is also worth noting that this perception can change quickly with market conditions and that a sense of community may not be sustained during a bear market.

Remarks on the demand side

Given the factors discussed above, it is natural for demand to increase over time along with growing awareness and usage of the protocol, which is expected to increase token utility. It is also important to keep in mind that although some aspects of token demand are fundamental and straightforward, much of it is speculation. These speculative components can be very volatile and difficult to address directly. Being able to appropriately measure the pressure caused by supply and balance it by influencing demand through any of these mechanisms is key to maintaining a sustainable token. In turn, choosing which mechanisms are appropriate for each project, ecosystem, and global context is also very important.

Balancing it all

For us, tokenomics design is about integrating game theory with the economic forces that determine supply and demand. Game theory should help you design the key mechanisms that allow the token to further the project's goals. It should be used in conjunction with the supply and demand model to design better tokenomics by analyzing the strategic interactions between different stakeholders and how they may influence the overall token economy and ecosystem. For example, in the context of token issuance and distribution, game theory can be used to analyze the effects of different issuance models, such as a fixed supply or an inflationary supply, on the overall token economy. You could also employ it to analyze the effects of different distribution mechanisms, such as mining, staking, or airdrops, on the distribution of token ownership and the incentives of different stakeholders. By understanding how stakeholders may react to different issuance and distribution mechanisms, game theory helps you identify models that align incentives and promote sustainable tokenomics. On the demand side, game theory can be used to analyze the effects of different governance mechanisms on the decision-making power of stakeholders and the overall direction of the project, helping you identify governance models that align incentives and promote sustainable demand for the token.

Fine-tuning the equilibrium

This article has examined the primary dynamics that drive supply and demand in the context of blockchain projects, as well as game theory's role in ensuring that the interests of all stakeholders are aligned. Even after considering all of these essential components of token design, the rewards, fees, and other parameters still need to be fine-tuned or even optimized. The precise values are frequently determined later in a project's life through governance proposals, but it is essential that they be selected in a cautious and deliberate way. For example, when considering token emissions, it is crucial to employ forecasting techniques before issuing tokens in order to have a clear grasp of how the token value is expected to fluctuate and what that price movement implies for the project. The next paragraph elaborates on one such effect to illustrate the kinds of considerations involved in the fine-tuning of tokenomics.

When you increase the supply of a token, how much do you expect the token value to decrease, all else equal? It is unlikely that the token value drops exactly as much as the supply increases, in a 1:1 relation. Indeed, a recent study observed that an increase in supply might still grow a token's market cap. Market cap is the total supply times the price, so this would mean that an increase in supply is, on average, not fully cancelled out by the corresponding price drop. In the stock market this is known as the stock split premium: when stocks are split, the market cap often increases, as buyers' entry barriers are reduced, causing increased demand and a growing holder base that promote liquidity and, hence, a higher price. In crypto, this effect is stronger, as projects that use new token emissions for airdrops or as rewards to attract new users often grow the holder base directly, without going through a secondary market. This implies, for example, that it could make sense for a project to emit 10% more tokens and give some of them to existing holders, so that the price effect does not apply to them, while giving the rest to potential new users, enjoying the best of both worlds. Exactly what share should go to existing holders so that they do not experience a loss of value is an important calculation. One study concluded that, on average, across blockchain projects, when supply increases by 10%, the price drops by 5%. Following this figure, roughly half of the emitted tokens should be given to current holders. Naturally, this is an average, and it is relevant to consider other factors such as the general market context, the market cap, and the type of project.
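
As a rough worked version of this calculation: if existing holders receive a pro-rata share x of an emission of relative size s, their position value is unchanged after a price drop d when (1 + x * s)(1 - d) = 1. The sketch below plugs in the averages quoted above; the function name is ours, purely for illustration.

```python
# Fraction of newly emitted tokens to hand existing holders (pro rata) so that
# (1 + x * s) * (1 - d) = 1, i.e. their position value is unchanged.
def neutral_holder_share(supply_increase: float, price_drop: float) -> float:
    return (price_drop / (1 - price_drop)) / supply_increase

# A 10% emission with the study's average 5% price drop:
print(neutral_holder_share(0.10, 0.05))  # ~0.53, close to the "half" above
```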

Context also matters, as "all else equal" does not truly exist. It is generally agreed that supply increases have less impact on prices in bull markets, as higher demand helps absorb them. One article shows that a 10% supply increase leads to an almost 7% price drop on average in a bear market, but only a 3% drop in a bull market. Interestingly, this negative relation is most common for established tokens with a large market cap. For micro-cap tokens, i.e. tokens with a market cap below $1.5 million, an increase in supply may actually lead to price growth, as increasing the holder base increases liquidity and secondary-market demand. The effect also depends on the direction of the supply change: a decrease in supply seems to matter more than an increase. This is likely related to the fact that supply decreases due to token burns or buybacks are usually heavily marketed, while token issuance can largely go unnoticed.

Curve as an example

Curve is an automated market maker that launched in January 2020 and offers low-slippage trading on Ethereum and other supported chains. It incentivizes pool liquidity by paying out a yield to LPs from trading fees and CRV emissions. In August 2020, Curve launched the Curve DAO and its native CRV token, which allows holders to participate in governance by locking their CRV into non-transferable, vote-escrowed CRV (veCRV). Voting power is determined by the amount of CRV locked and the duration of the lock, ranging from one week to four years. By directing CRV emissions to the pools of their choice through a weekly gauge vote, veCRV holders can incentivize liquidity using CRV rather than their native token. Additionally, locking CRV allows holders to earn 50% of all trading fees on Curve and boosted LP rewards, with the boost increasing proportionally with the time-lock, up to 2.5x for a four-year lock. This vote-locking mechanism creates a flywheel where the more CRV one has, the more they can accumulate in the future. It also removes CRV from the circulating supply, supports its price, and allows those who control veCRV to direct liquidity. Curve's CRV token exemplifies the nuance and lack of absolutes in tokenomics design, with a current total supply of only ~52% of its maximum supply. However, the ability for veCRV holders to direct future CRV emissions through governance provides utility and creates strong demand for CRV, offsetting the incremental liquid supply created from future emissions. Overall, Curve seems to represent a well-calibrated balance of supply and demand to support its objective of deep liquidity within a sustainable economic model.
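
A minimal sketch of the vote-escrow weighting as commonly described, where voting power scales linearly with lock duration up to the four-year maximum; treat the exact formula as an assumption rather than Curve's on-chain implementation (the real LP boost, for instance, also depends on a holder's share of pool liquidity).

```python
MAX_LOCK_YEARS = 4

def vecrv_balance(crv_locked: float, lock_years: float) -> float:
    # Voting power grows linearly with lock duration, maxing at a 4-year lock.
    return crv_locked * min(lock_years, MAX_LOCK_YEARS) / MAX_LOCK_YEARS

print(vecrv_balance(1000, 4))  # 1000.0 -> full weight for a four-year lock
print(vecrv_balance(1000, 1))  # 250.0  -> a one-year lock carries a quarter
```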

Sources & References

Rosic, A. (2020). What is Cryptocurrency Game Theory: A Basic introduction. Retrieved from https://blockgeeks.com/guides/cryptocurrency-game-theory/

Fernando, J. (2023). Law of Supply and Demand in Economics: How It Works. Retrieved from https://www.investopedia.com/terms/l/law-of-supply-demand.asp

Supply and demand. Retrieved from https://www.britannica.com/topic/supply-and-demand

Revenue sharing token. Retrieved from https://smithandcrown.com/glossary/revenue-sharing-token/

OlympusDAO. (2022). Introducing Inverse Bonds. Retrieved from https://medium.com/@olympusdao/introducing-inverse-bonds-9199cc7bf089

Tascha. (2022). The Curious Effect of Token Supply on Price. Retrieved from https://taschalabs.com/the-curious-effect-of-token-supply-on-price/


On Tokenomics Series Part I - The Game of Supply and Demand: The Supply

by Carolina Goldstein and Joana Gomes • Tuesday, March 28 2023 • Published at Three Sigma

Introduction

Tokenomics, or the study of how tokens are created, distributed, and used within blockchain protocols, is a crucial aspect of the cryptocurrency and blockchain space. Tokens play a significant role in the success of many blockchain projects, and understanding the complex dynamics at play is essential for creating a sustainable and thriving ecosystem. In this article, we will explore the supply and demand framework for tokenomics, which allows stakeholders to make informed decisions about creating and distributing tokens in a way that maximizes value and utility while considering the incentives of all participants involved.

We will discuss how the supply of tokens is determined by the total number of tokens and the rate at which they are created or destroyed, while demand is determined by the number of people who want to use or hold the token and the value they assign to it. By examining the factors that influence the balance between supply and demand, we can design better, more useful, and efficient tokenomics. This post has been separated into two parts for easier reading. This first part explains how to match the project's goals with your token and manage the supply side, while the second part will provide an understanding of the mechanisms that drive demand and how to balance both sides of the scale.

If you’re unfamiliar, first read up on the concepts of game theory and supply and demand.

Token design

Designing a token involves outlining, validating, and optimizing it to meet the needs of the protocol and its stakeholders. First, determine whether a token is needed and whether it furthers the protocol's objectives. Then, identify the token's functions and how it captures the protocol's value. Tokenomics design focuses on incentives and mechanisms, and the design process is iterative, examining proposed mechanisms and analyzing their impact. Parameterizing the mechanisms is another important task, as competitive fees and expected rewards can make or break the design. Lastly, tokenomics is not only about creating and promoting value and utility but also about sustaining it, which safeguards against downward pressures and creates a virtuous cycle. Different frameworks help guide this process and divide tokenomics models into groups, such as deflationary, inflationary, dual-token, and asset-backed models. We, in turn, believe the key framework is the understanding of supply and demand. We will elaborate on this later, but first we explain how to bridge the project's goals and the token's goals.

Game theoretical goal setting

Starting with a clear goal is essential for any blockchain project, as it helps identify the key behaviors required for success. For example, a decentralized lending protocol needs participants to deposit liquidity to lend specific assets. Designing a token policy that incentivizes these behaviors can be achieved through game theory. This might influence decisions such as initial distribution, monetary policy, governance, and how it is marketed. The supply and demand framework is necessary after this to ensure a balance in these mechanisms and aid in achieving these goals.

To give a practical example, let's imagine a scenario where a project is launching a decentralized exchange (DEX) to enable trades between any tokens with the lowest slippage by ensuring deep liquidity. To ensure deep liquidity, there have to be participants depositing funds into the protocol. If the goal is to incentivize users to deposit, you can use game theory to build the reward system in such a way that providing liquidity is an optimal strategy for a certain type of user. This is most easily done through the token, by using it as a reward for those who deposit liquidity into the protocol, effectively incentivizing liquidity provision. Given this goal and the token's role in achieving it, it is clear that the token cannot be designed with a strict maximum supply that is all initially distributed or locked, because there would be no supply left to distribute over time. This distribution over time will, however, inflate the token as the supply increases. This is a very common practice nowadays, and some projects give out as much as $10m each month in token emissions. Some have been able to reduce this by using other projects' tokens as incentives: since all tokens benefit from having more available liquidity, it is rational for projects to want to incentivize liquidity for their own token in a popular DEX. But naturally, if the perceived value of a token fails to grow as emissions grow, the token will lose its value. You have to design it in such a way that demand continuously absorbs this selling pressure. This can be done through various mechanisms, for example by sharing part of the project's revenue with token holders, making the token a medium of exchange, or associating it with growing, real benefits. These are a few ways to counterbalance an inflating supply, as more value is attributed to them over time as the project grows. This is further explored in the upcoming section about demand.

Supply

There are many factors that affect the supply of a token, and these are often set up in advance and codified in the project's smart contracts. When talking about the supply of a token, one normally differentiates between the initial, maximum, total, and circulating supply. The initial supply refers to the amount of tokens released at the token's launch; depending on the nature and goal of the protocol, there are different ways to launch the first tokens. The maximum supply is a hard-coded limit on how many tokens will be created in total, and as such it can indicate how much inflation is left. However, projects are not required to set a maximum supply, in which case the token has no supply cap. The total supply is the number of tokens that have already been created minus those that have been burned; it includes tokens locked in escrow via staking and unvested tokens held by founders and investors. Naturally, if the token has a maximum supply, the total supply will never exceed it. The circulating supply, in contrast, includes only the tokens currently circulating and not locked; it can be seen as the number of tokens immediately available for trade. The circulating supply plus locked tokens sum to the total supply.
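
The relationships between these supply notions reduce to simple accounting; a minimal sketch with purely hypothetical numbers:

```python
# All figures are hypothetical, purely to illustrate the definitions above.
minted = 1_000_000_000      # tokens created so far
burned = 50_000_000         # tokens destroyed
locked = 300_000_000        # staked, escrowed, or unvested tokens
max_supply = 2_000_000_000  # hard cap, if the project sets one

total_supply = minted - burned
circulating_supply = total_supply - locked

assert circulating_supply + locked == total_supply
assert total_supply <= max_supply
print(total_supply, circulating_supply)
```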

Below we explore important factors to understand supply and how it is expected to evolve.

Allocation and distribution

In general, we can divide tokens into those that are initially allocated via a fair launch and those that are not. If you choose a fair launch, the token is created via mining or another community distribution method with no prior allocation, which can result in a strong community. However, it can also be difficult to bootstrap network effects and guarantee a uniform initial distribution. In contrast, the team behind a project can mint tokens before the public launch, retaining part for team members and investors. Selling tokens to investors allows teams to raise funds for development, although it also creates a large initial imbalance between token holders. The charts below illustrate common initial allocations through the examples of COMP and CRV.

A recent paper used agent-based modeling to simulate and analyze how concentrated governance tokens become after several types of fair launches. The study concluded that regardless of the initial allocation, tokens always ended up concentrated: the fair launch fell short as a mechanism to avoid concentration, as over time, independently of market conditions, the tokens still accumulated in few hands. This is an interesting finding to consider, although it does not mean that a careless allocation will have no negative impact on the token’s price in the short-to-medium term. There is naturally the risk that the team and investors may later sell large quantities and affect the price. This risk can partly be mitigated with mechanisms such as appropriate vesting schedules.

The token’s distribution refers to an analysis of who currently holds the token and in what quantity, with the goal of understanding the level of dispersion of the token. Regardless of the initial allocation, you should monitor this closely, as the presence of a few large holders brings the risk that they could sell and cause a decrease in price. To encourage widespread token distribution, you may employ various tactics, such as setting a per-person limit for an ICO or distributing the token through airdrops to ensure broad ownership. However, these tactics are not fail-proof either, and sybil attacks have been used extensively to take advantage of them. The charts below show the token distribution of CRV and UNI. A larger part of UNI (37%) is distributed among addresses that hold less than 0.5% of the total supply, compared to CRV (14%). However, most CRV is locked in voting or vesting escrow (61%), which decreases the risk of large sales.
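A dispersion check like the UNI/CRV comparison above can be reproduced from a list of holder balances. A minimal sketch with hypothetical balances:

```python
# What share of supply sits with addresses that each hold < 0.5% of it?
# Balances are hypothetical.

balances = [400, 120, 80, 60] + [1] * 340   # token balance per address (assumed)
total = sum(balances)

small_holder_share = sum(b for b in balances if b / total < 0.005) / total
print(f"Share held by <0.5% holders: {small_holder_share:.1%}")   # 34.0%
```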

Vesting

Vesting is the process of locking and distributing purchased tokens within a given timeframe, known as the vesting period. There is usually an initial lock called the cliff, typically lasting up to a year, before tokens begin to be distributed in accordance with the specified vesting schedule. You can choose between different types of vesting schedules. The distribution of tokens in equal parts over a certain period of time is known as linear vesting; the time period can be days, weeks, months, or even years. The distribution of tokens in uneven parts across a variety of time periods is known as twisted vesting. For example, let’s say 600,000 tokens need to be vested for advisors. Under linear vesting, 50,000 tokens can be released monthly, completing the entire vesting within a year; if a six-month cliff is added, an advisor receives 50,000 tokens every month for 12 consecutive months, starting from the seventh month. Under twisted vesting, 25% of the 600,000 tokens could instead be released over the first three months, with the remaining 75% released after a subsequent six-month cliff. Looking at current projects, the norm for vesting schedules is to have tokens locked for a few months, usually between six months and a year, and then unlocking linearly over a period between one and three years.
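The advisor example above translates into a simple schedule. A minimal sketch of linear vesting with a cliff:

```python
# 600,000 tokens vesting at 50,000 per month over 12 months, after a 6-month cliff.

def linear_vesting(total: int, months: int, cliff_months: int) -> list[int]:
    """Tokens unlocked per month; nothing unlocks until the cliff has passed."""
    per_month = total // months
    return [0] * cliff_months + [per_month] * months

schedule = linear_vesting(600_000, months=12, cliff_months=6)
assert sum(schedule) == 600_000
print(schedule)   # first unlock lands in month 7
```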

If team members and investors have a gradual vesting schedule, the risk of sudden price movements is mitigated. On the other hand, short vesting schedules with large cliffs may cause an abrupt increase in supply, increasing the amount of tokens that can be sold, which can lead to a consequent decrease in price. There are a few important choices to make when designing vesting, and they may be influenced by various factors, including the type of project, the maximum token supply, and the emission schedule planned for the token. After setting the cliff, you must decide what part of the tokens is unlocked and what the subsequent unlocks will look like: some projects choose constant vesting, while others go for batch-like unlocks at set intervals. While it is mostly intuitive how each kind of vesting can affect the price differently, other factors should be considered in conjunction with this. For instance, one study that compared schedules concluded that larger initial unlocks have less negative price impact than smaller initial unlocks, most likely because teams and VCs have less incentive to sell at the beginning of a project’s lifespan. The goal is naturally to find a vesting schedule that produces the least negative price impact on vesting dates and the lowest token volatility over the vesting period. The chart below illustrates how different vesting schedules can look, comparing the liquid supply increase of CRV to that of DYDX: while CRV employed linear vesting, DYDX uses batch unlocks.

After having thought about all these factors that you can control, keep in mind that every design will also be affected by exogenous factors. The way early investors deal with the token, market sentiment, and the latest narrative are a few examples of factors that are difficult to measure but important to consider. This leads us to the concepts of market capitalization (market cap) and fully diluted valuation (FDV). The market cap of a crypto asset is its price multiplied by the amount of tokens currently in circulation. FDV is another valuation metric, which multiplies the price by the total amount of tokens that will ever exist. This naturally means that the market cap of a token will always be smaller than or equal to its FDV. Market cap can be thought of as an indication of demand in dollar terms (or whichever currency the token is priced in), since changes in demand directly impact the price at which participants are willing to buy the token. FDV increases in the same proportion when demand increases, but ignores the amount of tokens that is locked: it is perfectly possible that some token holders would rather sell than hold, but cannot do so while their tokens are locked. FDV is therefore more of a measure of supply.

As a project gains traction and the token attracts more demand, the valuation can easily exceed investors’ initial entry by twenty, fifty, a hundred or more times. Take the example of a project that raises a round at a \$50m valuation by selling 1% of tokens at a price of \$0.01. Let’s say investors have their allocation locked for one year, and in the meantime an airdrop puts 1% of the total supply in circulation. With only 1% of the tokens circulating, a few months later the market cap is at \$5m, which in turn means that FDV is at \$500m. The valuation has risen 10 times, tokens are now worth \$0.10, and investors are up 10 times on their initial investment. As the project keeps growing and token demand rises, the price increases another 20 times, making the token price \$2, the market cap \$100m, and the FDV \$10b. Seed investors are up 200 times, and it is very likely that they would be happy selling at a much lower price, given their initial cost.

This begs the question: can investors really not capitalize on this growth while their tokens are locked? In fact they can, in over-the-counter (OTC) markets. Professional investors can trade locked tokens through legal contracts, which means they can sell their locked tokens at a discount and still make a large profit. Of course, the tokens remain locked and the new buyer cannot sell them. Locked tokens can change hands a few times, raising the price at which the last buyer can actually make a profit once they are vested. For this reason, by the time tokens are finally unlocked, investors’ margins may have decreased considerably, and investors might not sell upon an unlock. In turn, this lack of action at a supposedly negative event can be read as disproportionally positive: the general public will see it as “if even early investors do not want to get rid of their tokens, they must be worth more than the current price”. The event effectively just removed an unknown, possibly negative, price reaction, leaving holders more confident. This should illustrate how everything is nuanced, even in a seemingly direct economic relation between supply and price. Naturally, most unlocks do result in token selling, particularly when there is no OTC market or demand for the particular token.
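The arithmetic of the worked example above, made explicit:

```python
# A $50m raise selling 1% of tokens at $0.01 implies 5 billion tokens in total.

total_supply = 50_000_000 / 0.01        # 5,000,000,000 tokens
circulating = 0.01 * total_supply       # 1% unlocked via the airdrop

for price in (0.10, 2.00):
    print(f"price ${price:.2f}: market cap ${price * circulating:,.0f}, "
          f"FDV ${price * total_supply:,.0f}")
# price $0.10: market cap $5,000,000, FDV $500,000,000
# price $2.00: market cap $100,000,000, FDV $10,000,000,000 (seed entry up 200x)
```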
Before determining how the supply will look when designing a token, make sure that the size you expect the project to grow to and the demand you estimate to capture are enough to sustain the FDV once the total supply is unlocked. You don't want to be forced to chase after enough demand just to maintain the token's value.

Emissions schedule / Monetary policy

Many protocols have a built-in mechanism for increasing the circulating supply, often referred to as inflation or emissions. You can use this to incentivize and reward activity such as validation and liquidity providing. The emissions rate can vary depending on the design of the protocol and can be either fixed or variable. It is often expressed as the mint rate, the rate at which new tokens are created, which helps determine the future inflation of the token. If a token has no maximum supply, new tokens can be minted endlessly, while a token with a maximum supply will only have a mint rate for the period between its initial launch and the moment the maximum supply is reached.

Additionally, you may also employ mechanisms to counter emissions and reduce the circulating supply, such as token burning or staking. Staking requires locking tokens in a smart contract in exchange for benefits such as a share of protocol revenue or the ability to participate in community governance. Token burning permanently removes tokens from circulation and can be triggered by factors such as fees, price, or timing; the rate at which tokens are destroyed is referred to as the burn rate. Whenever a project finds it appropriate, it may choose to burn a portion of tokens to decrease the supply and benefit token holders through positive price movement. A protocol may also burn tokens or increase the burn rate to counter a previously inflationary token economy.
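As a minimal sketch, with hypothetical rates, mint and burn rates combine into a net supply path like this:

```python
# Hypothetical fixed mint and burn rates combining into net supply growth.

supply = 100_000_000
mint_rate = 0.02    # 2% of supply minted per period as emissions (assumed)
burn_rate = 0.005   # 0.5% of supply burned per period from fees (assumed)

for period in range(1, 4):
    supply *= 1 + mint_rate - burn_rate
    print(f"period {period}: circulating supply ~ {supply:,.0f}")
```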

While in a centralized ecosystem mint and burn rates are decided by the central authority, in a decentralized ecosystem these are often decided by on-chain governance votes utilizing governance tokens held by users.

For example, BNB adopts coin burning to remove coins from circulation and reduce the total supply of its token. With 200 million BNB pre-mined, BNB’s total supply was 157,903,320 as of January 2023. BNB will keep burning coins until 50% of the total supply is destroyed, which means BNB’s total supply will be reduced to 100 million BNB. Previously, these burns were periodic and based on the BNB trading volume on the Binance exchange, but these burn events were often front-run. A new burning mechanism was introduced through BEP-95 in November 2021, adding a real-time burning mechanism to the BNB Chain: the smart contract automatically burns a portion of the gas fees collected by validators from each block. As more people use the BNB Chain, more BNB is burned, effectively accelerating the burning process. As BEP-95 depends solely on BNB Chain activity, it will continue to burn BNB even after the 100 million target has been reached. Binance also introduced the Auto-Burn to replace the quarterly burn, a mechanism that automatically adjusts the amount of BNB to be burned based on the price and the number of blocks generated on the BNB Chain during the quarter. Once the total circulating supply of BNB falls below 100 million, the BNB Auto-Burn will stop. This illustrates how varied burn mechanisms can be and how you should choose them according to the project’s needs and ecosystem.

Remarks on the supply side

It will always come back to the behaviors you want to incentivize and the equilibrium between supply and demand. Even though a lower supply can generally make a token's value increase, supply-side tokenomics is not always about taking tokens out of circulation; it should focus on meeting demand in a way that supports and improves protocol goals. For example, for a protocol trying to grow activity, a token with a rapidly deflating supply could cause token holders to simply hold, anticipating a price increase, so the protocol would be incentivizing behavior detrimental to its goal. Protocols such as proof-of-stake blockchains intentionally inflate their currencies in part to encourage network validators to join and decentralize the network.

Therefore, the best supply-side tokenomics is the one that seeks to match its demand. Generally, a token that has a large percentage of its maximum supply already circulating, with steady and predictable inflation to encourage usage, a fair launch or pre-mine with a gradual and lengthy vesting schedule, a high allocation for community ownership, and a well-diversified distribution with no overly large holders, is well-positioned to steadily absorb demand as it grows over time.

Read part 2 of this article to understand the mechanisms that drive the demand for a token and how to balance both sides of the scale.

Sources & References

Rosic, A. (2020). What is Cryptocurrency Game Theory: A Basic Introduction. Retrieved from https://blockgeeks.com/guides/cryptocurrency-game-theory/

Fernando, J. (2023). Law of Supply and Demand in Economics: How It Works. Retrieved from https://www.investopedia.com/terms/l/law-of-supply-demand.asp

Supply and demand. Retrieved from https://www.britannica.com/topic/supply-and-demand

Fernandez, J., Barbereau, T., and Papageorgiou, O. (2022). Agent-based Model of Initial Token Allocations: Evaluating Wealth Concentration in Fair Launches. https://doi.org/10.48550/arXiv.2208.10271

Stephanian, L. (2022). Optimal Vesting Structure. Retrieved from https://unlockscalendar.substack.com/p/optimal-vesting-structure

Shahzad, I. (2022). Token Vesting: The Complete Guide to Creating Vesting in Tokenomics. Retrieved from https://medium.com/coinmonks/token-vesting-the-complete-guide-to-creating-vesting-in-tokenomics-bf211b999f2f

Cobie. (2021). On the Meme of Market Caps & Unlocks. Retrieved from https://cobie.substack.com/p/on-the-meme-of-market-caps-and-unlocks

What Is BNB Auto-Burn? (2022). Retrieved from https://academy.binance.com/en/articles/what-is-bnb-auto-burn


The SVB Collapse: A Wake Up Call for Better Risk Management

by Carolina Goldstein and Joana Gomes • Wednesday, March 15 2023 • Published at Three Sigma

Circle, the company behind the USDC stablecoin, acknowledged on Friday that it had a \$3.3 billion exposure to the now collapsed Silicon Valley Bank (SVB). USDC is one of the most widely used stablecoins, with a market share of 44% in February 2023, and it maintains its peg to the US dollar by backing each USDC with a dollar of assets held by US banks and custodians, including SVB. The startup-focused SVB fell in the biggest banking failure since the 2008 financial crisis, shaking up the markets and leaving billions of dollars' worth of retail, corporate, and investor assets stranded. With more than \$200 billion in assets before this event, the California-based corporation was the 16th largest bank in the United States, catering to the financial requirements of technology companies all over the world. But how did this happen?

SVB Financial Group, the parent company of SVB, disclosed on March 8 that it had recently sold \$21 billion in bonds, resulting in a quarterly loss of \$1.8 billion after taxes. Many of those bonds had an average yield of 1.79%, far below the 10-year Treasury yield of around 3.9% at the time. In an effort to bolster its finances, SVB announced at the same time that it was executing a \$2.25 billion stock sale. The announcement caused investors to panic, and by Thursday, within only around 44 hours, investors and depositors had attempted to withdraw \$42 billion from SVB, virtually leading the bank to fail.

Many have cited Twitter as a key contributing element in this decline, with venture capitalists (VCs) being blamed for spreading mistrust and inciting fear. It is true that with better communication, SVB might have been able to alleviate some of the anxiety felt: the market was clearly unable to absorb the bank's simultaneous disclosure of the bond sale loss and the enormous fundraising campaign. Claims that everything was "business as usual" quickly backfired, as they sounded all too similar to the events leading up to the 2008 collapse of Lehman Brothers. This was also exacerbated by bad timing, as the statement followed the demise of Silvergate Bank, a prominent financier in the crypto industry, which had already left investors feeling uneasy. There have also been inquiries as to why SVB waited to strengthen its balance sheet until it had taken the \$1.8 billion loss rather than doing so sooner.

However, the reasons behind SVB's failure go beyond the use of social media and inadequate communication. While these had immediate and noticeable impacts, the root cause of the problem is sustained poor risk management. Every bank or individual is continually exposed to risks, and while some are tougher to recover from than others, such severe errors might have been avoided with an effective risk management strategy. Moral hazard in the bank’s management and mislabeling in accounting have also been mentioned as culprits, but the focus of this commentary is on clear red flags in risk management that should be taken as lessons for the future.

For instance, in the months before the collapse, the bank reportedly lacked a chief risk officer. Furthermore, although deposits up to US$250,000 are insured by the Federal Deposit Insurance Corporation (FDIC), more than 90% of SVB's client deposits were larger. As a result, this share of SVB’s deposits was not insured, making the bank far more susceptible to the risks associated with withdrawals.

Another drawback was SVB's heavily overlapping client base across lending and deposits. A customer base diversification plan would have been essential to managing risk, especially since the absence of one led to a high concentration of its clients among VCs and struggling startups.

Furthermore, SVB had a large investment portfolio relative to its overall assets, with 57% invested (compared to 24% for the typical American bank) and 78% of that in mortgage-backed securities (as opposed to 30% for Citi or JPM). Banks everywhere have flooded their balance sheets with billions in bonds as a result of LCR regulation, which sets rules to ensure that banks hold a sufficient reserve of high-quality liquid assets (HQLA). While regulation enables this, holding so many bonds on the balance sheet certainly carries risks, particularly interest rate risk. Banks typically hedge the bulk of their interest rate risk, since it is costly to hold treasuries if rates increase. Although SVB held \$10 billion in interest rate swaps in December 2021, by December 2022 nearly all of these hedges had been taken down. As the duration of the bank's sizable portfolio remained the same before and after interest rate hedges, this indicates that the firm at that point did not hedge interest rate risk at all. Why is this such a big deal? Economically speaking, a portfolio of \$100 billion in bonds with an unhedged 5-year duration represents a loss to the bank of \$500 million for every 10 basis point increase in the 5-year interest rate. A 100 bps move would produce a loss of \$5 billion, and 200 bps a loss of \$10 billion. This indicates that SVB was exposing its investors and depositors to a tremendous amount of risk by failing to implement fundamental risk management procedures.
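The duration arithmetic above can be checked directly; this sketch reproduces the quoted figures using the standard first-order approximation, loss ≈ value × duration × rate change:

```python
# First-order duration approximation: loss ~ value * duration * rate move.

portfolio_value = 100_000_000_000   # $100 billion in bonds
duration_years = 5                  # unhedged 5-year duration

for bps in (10, 100, 200):
    loss = portfolio_value * duration_years * bps / 10_000
    print(f"+{bps} bps -> approx. ${loss:,.0f} loss")
# +10 bps -> $500,000,000; +100 bps -> $5,000,000,000; +200 bps -> $10,000,000,000
```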

Moreover, SVB should have anticipated that rising interest rates would have a significantly detrimental effect on deposits, given that the tech industry grew during the era of record-low interest rates, especially when it started to become clear in early 2022 that tech valuations were declining. The bank should have conducted a thorough scenario analysis to assess the risk of deposit withdrawals.

Finally, SVB's collapse serves as a reminder of the importance of risk management. The bank's concentrated client base, failure to hedge interest rate risk, largely uninsured deposits, and absence of a chief risk officer all contributed to its downfall. To avoid similar events in the future, banks must take a diversified approach to their lending and deposit customers, prioritize risk management, and engage in adequate scenario analysis to assess potential risks.

This should be a wake up call for stablecoin issuers and other crypto protocols that rely on banks to retain value: risk management goes beyond their own practices and must include extensive due diligence on the banks holding their funds.


The Hitchhiker's Guide to DeFi Insurance

by Carolina Goldstein, Catarina Urgueira and Tomás Palmeirim • Monday, October 31 2022 • Published at Three Sigma

Introduction

Since the early days, the DeFi market has been severely shaken by hacks, bugs, exploits, rug-pulls, flash loan attacks, and a long list of attack vectors, causing loss of confidence in its core value proposition. Insurance solutions that can mitigate the high risk inherent in this industry's innovations are one of the most important aspects for the widespread adoption of DeFi.

Yield and risk are positively correlated, with higher yields indicating market participants' greater exposure to risk. DeFi yields are significantly higher than the ones seen in traditional finance, indicating a greater level of risk. This risk is mainly attributable to the complexity, novelty, and immutability of DeFi, where bugs or smart contract errors can lead to exploits resulting in colossal losses, emphasizing the need for insurance solutions in the industry.

Since risk should ideally be measured automatically and in a decentralized manner using solely on-chain information, developing insurance mechanisms for the DeFi sector is extremely difficult and doesn't entirely fit with what we see in traditional capital markets. Decentralizing the insurance market has the potential to transform the claiming process into one that is unbiased, trustless, transparent, and automated using smart contracts while also providing coverage providers with a return on their capital and insurers with guarantees about the safety of their assets.

The insured cover types, premium pricing, risk management, and claims process vary according to each insurance protocol's implementation and strategy. This paper will examine the Ethereum DeFi insurance sector in depth, examining 12 different protocols, providing a historical review, and comparing their methodologies, business models, and tokenomics.

Insurance Market Overview

DeFi automates financial services via smart contracts and has \$53 billion in total value locked, down from an all-time high of \$170 billion in December 2021; the current TVL represents only 31% of the ATH (source: DeFi Llama). Rising TVL is positive for the industry, but it also increases the possible damage caused if that value is lost due to smart contract vulnerabilities.

The first wave of innovation in DeFi focused mainly on two fundamental financial primitives: decentralized exchanges and lending. These two domains account for the vast bulk of the value locked in DeFi protocols, totalling \$36.68 billion in TVL according to DeFi Llama. In contrast, DeFi insurance accounts for only \$457 million in TVL, less than 1% of total DeFi TVL, despite significant advances in this segment of the industry. Before investing large sums of money in this market, investors may desire a sense of security, and the entire Web3 economy is currently underinsured.

Nexus Mutual, the industry pioneer, has dominated the DeFi insurance market since its launch, accounting for over 68% of the sector's TVL, yet it covers only 0.25% of the TVL in DeFi. The rest of the insurance market is still fragmented, with the three protocols listed after Nexus by TVL accounting for roughly 13% of the market.

What would happen if insurance coverage grew to 10% or 15%? If 10% of DeFi TVL were insured, the total assets covered would exceed \$5 billion, while the current TVL in insurance is nowhere near one billion dollars. A significant increase in DeFi insurance TVL would be required to cover 10% of DeFi TVL. Developing a decentralized insurance protocol is substantially complex, and solutions require further work to increase the covered value in DeFi.

How does DeFi Insurance work?

Insurance represents a contract or policy where an individual or entity receives financial protection or payment from an insurer in the event of a loss.

Insurance companies’ business strategies rely on diversifying risk, and these businesses usually generate revenue in two ways: by charging premiums and by reinvesting them. Each policy has a premium based on its risk, and after it is sold, the insurance firm traditionally invests the proceeds in safe, short-term, interest-bearing assets to avoid insolvency.

The global traditional insurance market was valued at more than \$5.3 trillion in 2021. It is expected to grow by approximately 10.4% to \$5.9 trillion in 2022 and \$8.3 trillion in 2026 at a compound annual growth rate (CAGR) of 9.1%. (source: PR Newswire) DeFi insurance represents a significant growth opportunity in the blockchain industry, as its ATH in November 2021 was $1.82 billion, accounting for only 0.03% of the total traditional global market for 2021.

This global insurance market forecast can predict a reasonable coverable value in DeFi. If only 5% of the traditional global insurance market becomes the coverable value in DeFi insurance, this equals \$265 billion. Assuming that 15% of the coverable value is insured, we have $39.75 billion in active premium coverage, significantly more than the current TVL in DeFi insurance and even more than the entire insured value in DeFi.

In the same way as in traditional insurance companies, DeFi insurance protocols can also carefully invest their users' capital in other DeFi products to generate more revenue. Generally, if a company efficiently prices its risk, it should generate more income in premiums than it spends on conditional payouts.

Instead of purchasing coverage from a centralized entity, DeFi insurance protocols allow users and companies to purchase coverage from a decentralized pool of coverage providers. Anyone can be a coverage provider by locking capital in a capital pool and exposing themselves to risk, just as liquidity providers do in lending protocols. Cover providers invest their funds in pools with higher returns relative to the protocol's risk, which means that individuals trade the outcomes of events based on their estimations of the probability of the underlying risk event. If a protocol covered by the insurer suffers an adverse event such as a hack, the funds in the capital pool that covers that protocol will be used to compensate users who purchased coverage against that specific event on that protocol. Coverage providers are incentivized to provide liquidity and are rewarded for assuming risk by earning a return on their capital. The yield is a percentage of premiums paid, presenting a correlation between the premium paid and the risk for the protocol under consideration. However, DeFi insurers often include their own liquidity mining incentives in their yield calculation, which are used to bootstrap liquidity for the pools.
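A minimal sketch of these pooled-cover mechanics, with hypothetical names and amounts rather than any specific protocol's implementation: providers stake into a pool, and an approved claim is paid out of it, reducing stakes pro rata.

```python
# Hypothetical coverage pool: an approved claim haircuts provider stakes pro rata.

stakes = {"provider_a": 60_000, "provider_b": 40_000}   # staked capital (assumed)

def pay_claim(stakes: dict[str, float], claim: float) -> None:
    pool = sum(stakes.values())
    assert claim <= pool, "claim exceeds pooled capital"
    for provider in stakes:
        stakes[provider] -= claim * stakes[provider] / pool

pay_claim(stakes, claim=10_000)
print(stakes)   # {'provider_a': 54000.0, 'provider_b': 36000.0}
```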

Our DeFi Insurance thesis is that as the total value locked in DeFi grows, so does the need to secure that value. With the TVL growing, users must have access to solutions that protect their capital. This is especially true as institutional players enter the market, since insurance is already a big part of traditional financial markets.

Nexus Mutual was the first insurance protocol in the DeFi industry. Following it, many protocols were launched in an attempt to solve some of the ongoing challenges in this space. In the next sections, we will describe how 12 protocols are attempting to solve some existing challenges in decentralized insurance, as well as provide our inputs on some of the approaches used.

Nexus Mutual

Nexus Mutual launched on Ethereum on May 30th, 2019 as a combination of smart contract code and a fully compliant legal entity based in the UK operating under a discretionary mutual structure, meaning that all claims are paid depending on a decision made by the Board, in this case, the Nexus Mutual members.

A discretionary mutual is not an insurance provider but a legal structure that allows members to trade under the umbrella of a single legal personality. This enables Nexus to disregard all regulatory and legal requirements that exist for insurance companies. This discretionary mutual allows legal trade in the UK, but coverage is available globally, with some countries restricted due to local laws. Anyone who wants to join the mutual in any capacity must go through KYC to ensure compliance, and the membership rights are represented by their native token NXM. This KYC procedure can give institutional users greater regulatory confidence.

Nexus Mutual's first product was Smart Contract Cover, the first insurance product that let users protect themselves from smart contract risks on major DeFi protocols.

In January 2021, Nexus Mutual expanded cover protection to other chains such as BNB, Polkadot, and Cosmos, as well as added protection for centralized platforms such as Coinbase and Binance and lending services such as BlockFi and Hodlnaut.

In April 2021, Nexus Mutual added Protocol Cover, given the ever-evolving scope of DeFi attacks. This broad and versatile protection protects members from smart contract hacks, oracle attacks, severe economic attacks, governance attacks, layer two components, and protocols on any chain.

In July 2021, Nexus Mutual added Yield Token Cover, which provides coverage against the full range of risks to which an LP position in a protocol, or combination of protocols, is exposed. This covers smart contract risk, oracle failure or manipulation, stablecoin de-pegs, governance attacks, and any other threat that leads to the protocol losing value, provided there is an LP token representing user deposits.

The vast majority of Nexus covers protect users against protocol risk, accounting for more than 80% of total covers, followed by custodian protection (a little more than 10%) and yield token coverage.

Nexus Mutual found market fit early, attracting a large amount of TVL in its first months. It is still the largest insurance protocol in terms of TVL, but since mid-2021 Nexus Mutual's written premium, denominated in US dollars, has declined. This could be because newer insurance protocols, such as Unslashed and InsurAce, are taking market share from existing ones, since they can provide more economic incentives to users by distributing governance tokens and do not require a KYC process. Other external macro conditions could also have influenced this outcome, which will be further analyzed when other insurance protocols are presented.

Nexus Participants

Nexus Mutual members can buy insurance coverage using NXM, provide liquidity to the capital pool as Cover Providers and/or vote in the claiming process as Claim Assessors. A small membership fee of 0.002 ETH is charged to all members.

Cover Providers are Nexus Mutual members who stake NXM against protocols or centralized exchanges to underwrite insurance and earn 50% of insurance premiums in newly minted NXM. Minting NXM requires the addition of ETH to the Capital Pool, which is currently funded by premiums pouring into the pool; this mechanism exists due to the bonding curve, which was once the primary trading venue for NXM. As a result, the circulating supply of NXM increases, but so does the value of the Capital Pool, so Cover Providers are only exposed to protocol-specific risks. The rewards are proportional to the amount of capital the cover provider has locked into the pool. Staking does not generate rewards on its own; covers must be purchased for stakers to receive rewards (50% of the premiums) and for the protocol to generate revenue.

On the other hand, Claim Assessors are members who stake NXM to evaluate claims submitted by other members and receive rewards for voting in conformity with the consensus.

Claim Assessment

Nexus Mutual implements a three-step, governance-based approach to claims assessment: a member submits a claim, token-holding claim assessors vote on it, and an approved claim is paid out. To submit a claim, the member must stake 5% of the cover amount in NXM tokens. This deposit is returned to the member if the claim is approved; otherwise, the tokens are destroyed. After a claim is submitted, assessors vote to approve or deny it based on the submitted proof of cover. If the claim is approved, cover providers on that pool have their stakes reduced proportionally to the claim amount; if the stakes are insufficient to cover the claim amount, the Mutual assumes the loss by drawing down its pooled capital. Claim assessors must lock their tokens for fourteen days before voting on any claim, which encourages a fair voting procedure because members cannot vote on their own request immediately after submitting a claim. For a claim to be approved, over 70% of the votes cast must be in favor, and the total vote weight must exceed five times the amount of coverage.

All claims are accessible through the Nexus Mutual application and at the smart contract level. If the insurer denies valid claims, it is unlikely that new members will join, and existing customers will not purchase new coverage products.

Such a claim evaluation mechanism has disadvantages: the process requires manual voting, members can vote to reject a claim to avoid losing their capital, and assessors are incentivized to vote with the majority rather than using their own judgment. As seen in the governance of other DeFi protocols, few members actively participate in the voting process, so the 70% of favorable votes necessary for a claim to be approved can be challenging to achieve.

The claim payouts in 2022 were mainly caused by the Rari Capital Fuse market exploit, due to a reentrancy vulnerability, and the Perpetual Protocol v1 economic design failure. As shown in the graph below, claims related to Rari Capital paid out 20 ETH and 5,008,000 DAI in April, representing a massive decline in monthly surplus. Nexus Mutual did not pay a single claim related to the UST de-peg and Anchor Protocol, because the coverage provided was limited to smart contract issues and did not include the UST de-peg.

DeFi incidents require expertise and on-chain data analysis to determine if the insurance policy covers the incident and if the member's wallet submitting the claim was affected. It can be tough for regular users to vote wisely on this. The Advisory Board of Nexus Mutual comprises insurance experts with the necessary expertise to conduct this investigation, which is shared with the community before voting in the form of an investigation summary.

Premium Pricing

Nexus Mutual uses a market-based risk pricing mechanism. Risk is determined by combining a base risk calculation, which is computed using actuarial math, with the total value staked. Essentially, cover providers stake NXM against insurance taken out on a specific protocol to demonstrate their confidence in the protocol's safety. A more significant amount of staked NXM indicates that after risk assessment, cover providers feel comfortable depositing funds in that pool, resulting in a lower risk cost and lower premium for that pool.

In that sense, the premium is entirely driven by the amount of NXM staked by Risk Assessors against each protocol and custodian. More specifically, the pricing formula for each cover is calculated as follows:

$$Cover\ price\ = \ risk\ cost\ *\ (1 + surplus\ margin)\ *\ \frac{cover\ period}{365.25}\ *\ cover\ amount$$

where the risk cost is calculated automatically based on the value staked against the protocol or custodian, such that the more value staked, the lower the annual cost of coverage. The surplus margin is a parameter introduced to cover costs (i.e., claim assessor and cover provider rewards) and generate protocol revenue; it is currently set at 30%. A strong assumption is made here, which is the basis for the whole pricing system: cover providers stake more money in protocols they consider safer and believe they will not have to pay out for. From this it follows that pools with more value staked charge a smaller premium. However, the incentives for capital providers to stake in a certain pool are tightly associated with the APY they expect to receive, which could cloud their judgment regarding risk assessment. Hence, the question arises whether the value staked against a certain protocol is, as the sole metric, sufficient for measuring risk.

The inputs for calculating the risk cost are: the net staked NXM, defined as the amount of NXM staked minus 50% of pending staking withdrawals; a maximum risk cost, set at 100%; a minimum risk cost, set at 2%; and the low risk cost limit, the amount of stake required to reach the lowest risk cost, set at 50,000 NXM. Given these inputs, the risk cost is calculated as follows:

$$Risk\ cost\ = \ 1\ - \ (\frac{\text{net staked NXM}}{\text{low risk cost limit}})^{1/7}$$

subject to the risk cost being greater than or equal to the minimum risk cost (2%) and less than or equal to the maximum risk cost (100%).
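Putting the documented formulas together, here is a minimal sketch of the pricing calculation; the clamping of the risk cost follows the bounds stated above:

```python
# Nexus Mutual cover pricing, per the formulas above.

def risk_cost(net_staked_nxm: float, low_risk_limit: float = 50_000) -> float:
    raw = 1 - (net_staked_nxm / low_risk_limit) ** (1 / 7)
    return min(max(raw, 0.02), 1.0)   # floor at 2%, cap at 100%

def cover_price(net_staked_nxm: float, cover_amount: float, cover_days: float,
                surplus_margin: float = 0.30) -> float:
    return (risk_cost(net_staked_nxm) * (1 + surplus_margin)
            * cover_days / 365.25 * cover_amount)

# e.g. a 90-day cover of 100 ETH against a pool with 30,000 NXM net staked:
print(f"{cover_price(30_000, cover_amount=100, cover_days=90):.2f} ETH")
```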

It is important to note that there are capacity limits on the amount of cover offered for specific risks, protecting the protocol from being too exposed. There is a Specific Risk Limit that varies with the amount staked on a particular risk, and a Global Capacity Limit based on the total resources of the mutual. The Specific Risk Limit is calculated as the capacity factor times the net staked NXM (defined above); these capacity factors can be updated by governance and, at the time of writing, are equal to 4 for all covered protocols. The Global Capacity Limit is calculated as 20% of the Minimum Capital Requirement (in ETH terms). A further explanation of how these values were derived could not be found.

Minimum Capital Requirement

The Minimum Capital Requirement (MCR) is an important component of the Nexus Mutual system, as it is used directly in the NXM price formula. It represents the minimum amount of funds the mutual needs to be very confident it can pay all claims and is calculated as follows:

$$MCR = \max(\text{MCR Floor},\ f(\text{Cover Amount}))$$

The idea behind this formula is that f(Cover Amount) determines the MCR; however, especially in the beginning, the mutual set an MCR Floor to ensure there was capital to enable cover growth. This was set at 12,000 ETH at launch (May 2019), meaning that the protocol had to gather this amount of ETH before cover purchases were enabled for the first time. Despite this, the team decided to lower it to 7,000 ETH one month later to be able to start selling cover earlier. A few months later, governance voted to implement a dynamic MCR Floor to better meet concentrated demand on a smaller number of systems. The incremental rates were tweaked until, in October 2020, it was decided to switch the increase off; the floor currently sits at 162,424.73 ETH. In May 2021 the capital model floor value was decentralized, and MCR calculations are now fully on-chain: instead of the MCR being updated manually by the team, the existing MCR value is moved towards the target each time someone buys or sells NXM or has a successful claim. The actual MCR is smoothed to avoid large one-off shocks, restricted to move a maximum of 1% in any one update and a maximum of 5% per day. The capital model is currently implemented by assuming a fixed gearing factor applied to the active cover in ETH terms:

$$f(\text{Cover Amount}) = \frac{\text{Active Cover in ETH}}{\text{Gearing Factor}}$$

If the full Capital Model (off-chain) produces results that are very different, the gearing factor is updated via governance. The Gearing Factor is currently set at 4.8.

It is the capital model that determines the minimum amount of funds the mutual needs to hold. The MCR is set using methodology developed by the European Insurance and Occupational Pensions Authority (EIOPA). The two main considerations that make up the MCR are the Best Estimate Liability (BEL), which represents the expected loss on each individual cover, and a Buffer, which refers to the funds the pool would require to survive a black swan event. The BEL for each cover currently corresponds to the entire Risk Cost to get a more prudent estimation, but should later take into consideration the remaining duration of the cover.

The Smart Contract Cover Module is based on the exposure Nexus Mutual has to the covers it has written and is a component of the Buffer. It takes into account the total cover amounts for each individual protocol and custodian ($CA(i)$), the correlations between each pair of contracts ($Corr(i, j)$) and a scaling factor (SC) calibrated to make the capital result more comparable to a full Solvency II calculation. It is calculated as follows:

$$CR_{scc} = SC \cdot \sqrt{\sum_{i,j} Corr(i,j) \cdot CA(i) \cdot CA(j)}$$

Nexus Mutual holds and invests a Capital Pool of assets in excess of the MCR to back its covers. The coverage ratio (abbreviated to MCR%) is the ratio between the Capital Pool and the MCR.

Like traditional insurance companies, Nexus Mutual can invest in DeFi protocols using a conservative investment strategy, such as staking ETH to generate PoS rewards or lending assets on decentralized collateralised protocols. Nexus Investment posts a proposal for an investment strategy on the forum, and after community discussion, the proposal is put to a vote.

However, when the Minimum Capital Requirement is reached, capital providers cannot withdraw their liquidity, which can be a drawback and a reason for them to be more wary of providing capital to the protocol.

NXM Pricing and Tokenomics

The NXM token can only be purchased on the Nexus Mutual app, as it isn’t listed on exchanges. It uses a bonding curve (or continuous token model), meaning that tokens can be purchased at any time at variable prices. The price correlates with the amount of capital available to the mutual and the capital required to pay out all claims with a certain probability. The main driver of short-term price movement is the funding level, which encourages users to buy when funding levels are low. In the long term the capital required to support covers will rise, reflecting the adoption of the platform. The price (in ETH) is calculated as follows:

$$TP = A + \frac{MCR}{C} \times MCR\%^{4}$$

where A and C are constant values that were calibrated at launch (A = 0.01028, C = 5,800,000).
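A minimal sketch of the price calculation using these constants; MCR% is the ratio of the Capital Pool to the MCR, as defined earlier:

```python
# NXM bonding-curve price, per the formula above.

A = 0.01028
C = 5_800_000

def nxm_price_eth(mcr_eth: float, capital_pool_eth: float) -> float:
    mcr_pct = capital_pool_eth / mcr_eth   # MCR% = Capital Pool / MCR
    return A + (mcr_eth / C) * mcr_pct ** 4

# e.g. an MCR of 162,425 ETH with the pool exactly at 100% funding:
print(f"{nxm_price_eth(162_425, 162_425):.4f} ETH")   # ~0.0383 ETH
```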

These tokens can be used to purchase cover and to participate in claims assessment, risk assessment, and governance. The model encourages an inflow of funds when required, raising capital as necessary. Since MCR% is the ratio between the Capital Pool and the MCR, when the Capital Pool (the Mutual’s current funding level) decreases, e.g. because a claim was paid, so does the token price, recapitalizing the fund. In the long term, the price is linked to the adoption of the protocol and not only to speculation. Note that NXM can only be redeemed at 2.5% below the purchase price.

When cover is purchased, 90% of the NXM member tokens are burned and 10% are kept to be used as deposit when submitting claims or returned to the cover purchaser if no claim is made.

NXM represents ownership of Nexus Mutual’s Capital Pool. Only members of the mutual can buy and sell NXM on the bonding curve, and to become a member, users need to complete a KYC process. More recently, a wrapped version of the token that does not require KYC was introduced: wNXM. This can increase the total number of holders, but can also decrease the number of members. Members remain the only ones who can maintain price parity by taking advantage of arbitrage opportunities. wNXM is backed 1:1 with NXM, but as it is traded on exchanges, it is subject to market forces. However, since December 2020 the MCR% has been under 100%, which means that redemptions are impossible. While redemptions are not possible, the only way to sell NXM is to wrap it into wNXM and sell that on the market. At the time of writing, wNXM is trading at one-third of the price of NXM. For someone to participate in Nexus Mutual, they have to buy NXM, so to avoid losing a lot of money when selling it, the only rational option is to buy wNXM on the market and unwrap it in the platform. Hence the bonding curve is effectively not being used at all; this was confirmed with the team. wNXM would only be pegged to NXM if MCR% > 100%.

There are three sources of value accrual to NXM: cover premiums, redemption fees, and investment earnings. When someone buys coverage, 40% of the premium goes to the Capital Pool without minting new NXM, benefiting all NXM holders through the increase of the Capital Pool (and hence MCR%), which increases the NXM price if MCR% > 100%. Another 50% also goes to the Capital Pool, but the corresponding NXM is minted and distributed to stakers, whose stake is partially or totally burnt if there are valid claims on the contract they staked on. The remaining 10% is kept by the cover holder: the corresponding NXM is minted and locked so that half is burnt if they decide to submit a claim, and if the claim is denied and they wish to re-submit it, the other half is burnt. If users buy coverage in NXM, 50% goes to stakers directly as NXM and the 40% accruing to the capital pool is burnt, so that there is less NXM in circulation, producing the same net effect.

When NXM is sold on the platform, a redemption fee of 2.5% goes into the Capital Pool in the form of ETH. However, as redemptions have not been available for a long period, this fee is also irrelevant.

It would be in the protocol’s best interest to keep MCR% above 100%. However, this has not been the case almost since the protocol's inception, which raises the question of whether there should be other incentives in place to increase the amount deposited in the Capital Pool. Investment earnings would also go directly into the Capital Pool, so perhaps there is room for improvement there.

Nexus tokenomics create a positive loop: more insurance policies bought means more demand for NXM and more revenue for cover providers, incentivizing more NXM staking; more Mutual members means more demand for NXM; and a more decentralized mutual leads to more NXM staked by claim assessors.

The MCR determined by the Capital Model is calibrated to achieve a 99.5% probability of solvency over 1 year.

The Advisory Board is a central point in the Nexus Mutual protocol and comprises only five members. It arguably has too much power: it has access to an emergency pause function that stops all transactions, can burn claim assessors' staked NXM if it finds them fraudulent, and can influence claim decisions.

Adoption and TVL

Nexus Mutual's capital pool (TVL) grows whenever a new insurance policy is purchased, investment pools generate positive income, or NXM is purchased; it shrinks whenever a payout is made, the investment fund incurs a loss, or NXM is sold and burned. The Total Value Locked (TVL) of Nexus Mutual grew from \$1.59 million at the start of 2020 to a peak of \$780 million on November 9, 2021, an increase of 490x. Since then, however, the broader crypto markets have descended into a bear market drawdown; Nexus Mutual is no exception, having experienced an approximate 76.5% drawdown to a TVL of \$183 million in October 2022. The value locked in Nexus Mutual represents a negligible portion of the total value in the DeFi market, which leaves a massive amount of risky value unprotected.

When the crypto market is up and at all-time highs, DeFi protocols have significant daily volume, are exposed to more risk, and protection demand may increase. However, if there is less demand for DeFi, there will be less demand for insurance coverage, resulting in less revenue for insurance providers. With less demand in the space, TVLs are also affected, and the lower the TVL, the lower the capacity limit to cover policies. During bear markets, when capital pools generate less revenue, cover providers have fewer reasons to invest their funds.

TVC

Nexus is the insurance protocol with the highest TVL; however, it only accounts for a small portion of DeFi's total value covered (TVC). Even during a bull market with plenty of liquidity, Nexus' TVC ATH represented less than 2% of the total DeFi market. These figures indicate considerable growth potential for the decentralized insurance market.

Revenue

Currently, the premium is fully paid when the policy is purchased, and it's a fixed-term amount that the cover buyer selects. When a user pays for the cover cost, 50% goes to stakers, 10% is held for the person's cover deposit, and 40% is kept in the capital pool.

These graphs depict similar behavior but on quite different scales. Nexus' cover price formula is based on the cover amount, cover duration, and risk cost, which explains the similar behavior: there is a direct relationship between the cover amount and cover pricing. As the total value covered in DeFi rises, so will the annualized premiums in-force.

The Active Cover Amount is always more than an order of magnitude larger than the Annualized Premiums In-Force. This is natural, as users only pay a small percentage of the coverage requested.

A larger capital pool (TVL) allows more insurance policies to be sold, increasing revenue for stakers and the Capital Pool. With V2, users can purchase a monthly policy and extend it as long as there is capacity.

Nexus Mutual started earning revenue one year after its launch, in May 2020, with just over \$2,000 in monthly revenue. Monthly revenue peaked at \$3.16 million in February 2021, during the bull market, and averaged \$1.2 million per month during 2021. However, the past three months have seen protocol revenue decline steeply, averaging just over \$210,000 monthly, due to market conditions.

This chart only considers the fees charged to Mutual members, not the investment earnings, which we will investigate later. It indicates Nexus’ monthly activity, such as the number of new members paying membership fees or the number of cover policies purchased, since the value is paid in advance.

Tracking the growth and daily activity of Mutual members will be a key indicator of future economic activity on Nexus, as they are the only users who can buy coverage and generate revenue outside of investment income. In 2022, the number of unique addresses is still increasing, but at a slower rate, which could be due to macroeconomic factors.

Membership fees and cover costs are the primary revenue for the Mutual, offset by claim payments. It is essential to note that investment earnings can fluctuate based on the time period used and market sentiment, with a massive negative amount currently appearing in the financials due to current macro effects. Insurers are expected to generate greater revenue when more insurance policies are sold.

Final Thoughts

Nexus Mutual pioneered the Staker-as-Underwriter model, the most common DeFi insurance business model.

With this model, the underwriter (capital provider) controls the claims process, which creates a conflict of interest that enables legitimate claims to be denied. In addition, Nexus Mutual compels capital providers to speculate on risk instead of relying on data.

Token holders assume the inherent risk by providing capital in separate underwriting pools for covered protocols. However, this requires users to perform due diligence on each protocol, while most capital providers simply seek higher APYs, which can distort the risk cost.

It performs well when no claims are submitted, but when cover providers want to withdraw their funds, this model begins to fall apart.

InsurAce

InsurAce was launched in November 2020 with a “0” Premium pricing (ultra-low premiums close to 0% powered by its dynamic pricing model), no-KYC wallet-based accessibility, cross-chain coverage and a first-of-its-kind portfolio-based design, which allowed users to cover a basket of protocols. It launched on Ethereum's mainnet in April 2021 and subsequently expanded to chains like BNB Chain, Polygon, and Avalanche, among others, granting users access to a multi-chain world.

InsurAce provides insurance cover, including smart contract vulnerability, stablecoin de-peg, IDO risk, and custodian risk with its unique portfolio-based coverage and customized bundled covers.

At launch, InsurAce provided two services: an insurance module and an investment module. To achieve its “ultra-low premiums”, the insurance module allows funds from the capital pool to be placed in the investment pool to earn a higher yield; the investment module’s yield, in turn, helps lower insurance premiums and reduce coverage costs for users.

InsurAce Participants

There are three types of roles in InsurAce: the Investor, the Insurer and the Insured.

The investment arm is still under development. The Insurer stakes ETH, DAI, and other assets into an aggregated pool and earns investment income, cover premiums, and INSUR rewards. In V1, insurers are exclusively rewarded with INSUR tokens; the plan is to share cover premiums in V2.

The Insured purchases insurance products and earns INSUR rewards and claim rights.

Cover Pricing

The InsurAce Protocol team argues that a staking-driven price structure, like the one Nexus Mutual uses, fails to properly assess a protocol’s real risks, causing cover providers to charge too much for covers when fewer funds are staked. This led them to use a dynamic pricing model to determine premiums, introducing a minimum and a maximum price. The premium varies between these values, where the minimum price is a base premium and the maximum is three times this base premium. The more cover sold, the higher the premium, and vice-versa.

For each product, the premium for the first 65% of total capacity remains unchanged, equal to the base premium; the premium for the remaining capacity increases following the dynamic pricing model. The base premium is calculated taking into account the aggregate loss distribution model and the protocol's risk factors. The aggregate loss distribution model is an actuarial model that combines frequency and severity (based on the number of claims and exposures in a given time period for a protocol) and is used to calculate the expected loss at the portfolio level.
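A minimal sketch of this pricing shape follows; the text fixes the base premium floor, the 3x ceiling, and the 65% threshold, while the linear ramp in between is an assumption for illustration:

```python
# Dynamic premium: flat at the base rate until 65% of capacity is sold, then
# rising toward 3x the base at full capacity (linear ramp assumed).

def dynamic_premium(base: float, utilization: float) -> float:
    if utilization <= 0.65:
        return base
    ramp = (utilization - 0.65) / 0.35   # 0 at 65% sold, 1 at 100% sold
    return base * (1 + 2 * ramp)         # tops out at 3x the base premium

for sold in (0.50, 0.65, 0.80, 1.00):
    print(f"{sold:.0%} sold -> annual premium {dynamic_premium(0.02, sold):.4f}")
```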

The main inputs are the number of claims and exposures in a given time period. These are used to select and train two separate models: the frequency model, which calibrates the probability of a given number of losses occurring during a specific period, and the severity model, which produces the distribution of loss amounts and sets the deductible level and the limit of the coverage amount. Both models are combined to determine the aggregate loss, which is incorporated into protocol risk factors, and from this the base price of each protocol is formulated.
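As an illustration of such a frequency-times-severity model, here is a Monte Carlo sketch assuming Poisson claim counts and lognormal loss sizes; the distributions and parameters are illustrative, not InsurAce's:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_aggregate_loss(freq_lambda: float, sev_mu: float,
                            sev_sigma: float, n_sims: int = 50_000) -> float:
    """Mean portfolio loss per period: Poisson frequency x lognormal severity."""
    n_claims = rng.poisson(freq_lambda, n_sims)           # frequency model
    totals = [rng.lognormal(sev_mu, sev_sigma, n).sum()   # severity model
              for n in n_claims]
    return float(np.mean(totals))

# e.g. 0.3 expected incidents per period, median loss e^13 ~ $440k:
print(f"expected aggregate loss: ${expected_aggregate_loss(0.3, 13, 1.5):,.0f}")
```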

The models’ parameters are based on historical data, which can be difficult to find in the DeFi landscape. More often than not, when an exploit or hack serious enough to trigger insurance happens, it is the end of that particular protocol, so the retrieved data would not be directly useful in the future. Collecting such data for machine learning algorithms seems like it could work in aggregate if many data points become available in the future, but it is possibly dangerous to rely on when the sample is small.

The pricing structure is not on-chain, which is common among DeFi insurance protocols but clearly an important improvement point for the sector. While pricing is off-chain, users cannot understand why and when pricing changes, and it requires trusting the team, as there is the possibility of price manipulation.

Capital Model

InsurAce's capital model refers to EIOPA's Solvency II, the prudential regime for undertakings in the EU, in line with Nexus Mutual. There are different tiers of capital requirements under this regime, namely the Solvency Capital Requirement (SCR) and the Minimum Capital Requirement (MCR). While the first refers to the capital required to ensure the fund will be able to meet its obligations over the next 12 months with a probability of at least 99.5%, the MCR takes lighter restrictions and refers to the capital required to meet the obligations over the same period with a probability of at least 85%.

InsurAce uses SCR, as opposed to the MCR used by Nexus Mutual, as the capital standard to calculate the minimum amount of funds to reserve to potentially pay claims. It is calculated by taking into account all active covers, all the outstanding claims, the potential incurred but not reported claims, the market currency shock risk, the non-life premium and reserve, lapse and catastrophe risks, and the potential operational risk. The calculation of the SCR is performed daily off-chain. The team reviews and updates this information on-chain in the case that there is a noticeable difference.

The capital pool is built from funds pooled together by the mining pools, cover payments, and the investment pool (all governed by INSUR token holders). In line with the MCR% used in Nexus Mutual, InsurAce uses the SCR%, the ratio of capital it has available to support its SCR. It is also known as the Capital to Risk Assets Ratio and is calculated as the capital pool size divided by the SCR. The lowest acceptable ratio is 100%, which occurs when there are exactly enough funds to cover the SCR.

The Capital Efficiency Ratio (CER%) is used to measure the short-term success in deploying capital and corresponds to the ratio of output per amount of capital deployed. InsurAce calculates it as the active cover amount divided by the capital pool size. The desired ratio for InsurAce is between 100% and 300%, which is considered to signal high productivity and moderate risk exposure.
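Both ratios are simple quotients, as the sketch below shows with hypothetical figures chosen purely for illustration:

```python
def scr_ratio(capital_pool: float, scr: float) -> float:
    """SCR% (Capital to Risk Assets Ratio): capital pool divided by the SCR."""
    return capital_pool / scr

def capital_efficiency_ratio(active_cover: float, capital_pool: float) -> float:
    """CER%: active cover amount divided by the capital pool size."""
    return active_cover / capital_pool

# Hypothetical figures, for illustration only.
capital_pool, scr, active_cover = 20_000_000, 8_400_000, 35_000_000
print(f"SCR% = {scr_ratio(capital_pool, scr):.0%}")                           # 238%
print(f"CER% = {capital_efficiency_ratio(active_cover, capital_pool):.0%}")   # 175%
```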

At the time of writing, Nexus Mutual has an MCR% of 94%, while InsurAce presents an SCR% of 238%. Although the SCR and MCR are very similar metrics, with the SCR appearing to be the stricter, i.e., safer, of the two, it is worth noting that the way they are calculated can differ. Both InsurAce and Nexus Mutual run these calculations off-chain, so it is difficult to check whether the same standards are upheld.

Tokenomics

INSUR tokens are used as a representation of voting rights in governance votes such as claim assessment, as mining incentives for capital providers in both the mining pool and investment products, to earn fees generated by the InsurAce.io protocol through governance participation, and for other ecosystem incentives. More use cases are expected to be introduced as the protocol develops.

Users who stake tokens in the platform earn INSUR token rewards. The InsurAce Protocol refers to this process as mining. Mining by staking in either InsurAce’s Cover or Investment arms is governed by the following equation:

$$Speed(Investment) + Speed(Cover) = C$$

where $C$ is determined by the token economy over time, ensuring a balance between the $Cover$ and $Investment$ arms.

For the capital pools in the Cover Arm, mining speed is determined by the InsurAce protocol's SCR ratio. When funds are insufficient to meet the SCR, the mining speed for the Cover Arm increases to attract more capital, helping InsurAce lower its prices and reduce insolvency risks. The pool with less capital staked has its mining speed adjusted upward to attract more capital. This reverts back to normal once the SCR is met, and the Investment Arm’s mining speed increases to attract more funds.

More formally, the mining speed for pool $i$ is determined as follows:

$$Speed_i(t) = S_{min} \cdot \left(\frac{S_{max}}{S_i}\right)^{\lambda}$$

where $S_i$ is the number of tokens staked in a cover capital pool at time $t$, $S_{max}$ is the number of tokens staked in the largest pool at $t - 1$, whose mining speed is $S_{min}$, and $\lambda$ is the speed scale.

INSUR tokens can be bought on centralized and decentralized exchanges and bridged to and from any of the networks the protocol operates in.

While INSUR is a governance token without direct utility, reward emissions create sell pressure that can lead to a decrease in token value. This can change if more use cases are introduced.

Claim Assessment

The InsurAce claims process is similar to Nexus. A user may submit a claim within 30 days, and no later than 15 days after the coverage has expired. As soon as the claim is submitted, the Advisory Board initiates an investigation based on the proof of loss and other publicly available information, and shares a Claim Report with its findings and conclusion with the community. Once this is done, there is a voting process that requires more than 75% of claim assessors (INSUR stakers) to be valid. In invalid voting processes, the Advisory Board evaluates the situation and makes its own decision. The user may contest rejected claims for 1% of the rejected claim amount, but the appeal is handled by the Advisory Board, which has sole authority to make a final determination.

This brings up the same issues mentioned previously when analyzing Nexus: the Advisory Board has too much power, is centralized in a small number of individuals, can influence claim assessors with their report, and there is a clear conflict of interest because stakers are the ones deciding whether or not to pay out a claim, despite the fact that they are the ones who will be penalized for the payment.

Adoption and TVL

The Total Value Locked of InsurAce grew from \$14 million in June 2021 to a peak of \$55.8 million on April 29, 2022, an increase of almost 4x. The TVL decrease between 7 and 13 May could suggest losses in UST or Anchor, but the InsurAce investment arm was still in development and the team had not made any investments. Since the Terra collapse occurred in early May, the drop was more likely driven by capital providers' fear of the impact on claim payouts. The claims were submitted and approved in May, but the payments were only processed on June 11; hence LPs withdrew their funds to avoid being slashed by these payouts. However, they were then subject to a 15-day unlocking period, which exacerbated the negative impact on the TVL after June 11. The InsurAce TVL began a steep decline after that, falling from \$48 million to \$20 million within a week, and has been on a slow decline ever since.

Stakers were unable to withdraw funds from the pools while the InsurAce team assessed the value of accepted claims to determine whether there was sufficient capital in the pools. InsurAce attempted to persuade LPs to keep their funds in the pools by announcing a compensation plan for stakers who remained after all claim payouts were settled, but after the funds had been locked for an undisclosed period of time, that incentive was insufficient to keep capital in the pool.

InsurAce covers 140 protocols and has already paid out \$11.6 million in claims. Of a total of 215 claim requests, 161 were approved; 177 of those requests were submitted in May alone, of which 154 were approved. In June, the UST Depeg event caused a significant decline in InsurAce's TVL. Most of the claim value was paid out in June, totaling \$11.5M out of \$11.6M. The chart indicates that payments were made in May, but this is an input error the team is already working to fix: the payout date is currently shown as the claim date, despite the fact that the actual payment date was June 11.

Furthermore, the vast majority of these claim payouts were due to UST Depeg or UST Depeg-related bundle coverages, as shown in the chart presented above.

TVC

InsurAce is currently covering \$15.6M in assets, totaling \$348M in total value covered since its launch.

The largest amount is currently covered on Binance Chain, while Ethereum is surprisingly in last place, with Polygon demonstrating the demand for L2 solutions. The protocol with the highest cover amount, totaling \$1.8M, is GMX, followed by Benqi with \$1M, while the majority have less than \$0.2M each. Ethereum being the chain with the least covered amount may indicate that InsurAce is less attractive where more insurance alternatives exist, as most other insurance protocols operate only on Ethereum.

InsurAce and UST Depeg

The InsurAce.io UST De-Peg cover was officially triggered on May 13, 2022, after the 10-day Time Weighted Average Price (TWAP) of UST fell below \$0.88, as specified in their UST De-Peg Cover Wording. The cover amount was paid to those who held active UST De-Peg Cover on May 13, 2022, and who held UST, or any representation of UST supplied directly as liquidity, in their wallets or accounts with any custodians both at the time of the cover's purchase and on May 13, 2022. InsurAce was overexposed to UST, with roughly \$21M of exposure. The event had a huge impact on the capital pool, which led to InsurAce protecting over 155 UST-related investors.

On June 11, out of a total of \$12.2M in claim requests, \$11.5M were paid. The Terra collapse had a significant impact on InsurAce's TVL and, consequently, its SCR, but the team has been working on its risk modeling and capital efficiency models to recover from this event. A reduced SCR entails reduced capacity for cover amounts, and the team has also severely constrained capacity compared to before the UST payouts.

Revenue

InsurAce’s goal is to generate revenue from insurance premiums and carry from investment returns. Currently, since the investment arm is still in development, insurance premiums are the primary source of revenue.

The revenues are intended to be used in operation and development costs, token buybacks, community incentives, ecosystem collaborations, and more.

Unlike the case in some protocols that are analyzed below, once purchased, InsurAce coverage cannot be sold or modified.

The premium is paid in advance but is only counted as "Earned" on a monthly basis, as some policies may be canceled prior to the expiration date, in which case the protocol refunds the remaining value to the user. The values referred to as "Earned" therefore represent premiums recognized over the duration of the policy, while the values referred to as "Received" represent revenues collected upfront, not accounting for policy cancellations but including additional revenues from other sources, such as grants from chains. The revenue value was steadily increasing until the collapse of Terra forced InsurAce to pay nearly \$12 million in claims.

Prior to the UST Depeg event, the "Received" amount was increasing as a result of new policy sales, and the "Earned" amount was also increasing as a result of new monthly payments.

After the event, the “Earned” amount was impacted because monthly premium recognition from UST covers ceased; nevertheless, the protocol continues to earn premiums from long-term coverages. The “Received” amount was also impacted because, in general, people stopped purchasing coverage on InsurAce after the incident and were partly unable to do so due to the low SCR%.

Notably, the chart does not include operational costs, which include the amount spent on INSUR rewards for capital providers. The team reserved 45% of the total supply for mining rewards from the beginning, and roughly two years of emissions likely remain on that supply. The team intends to share revenue and profits from the investment arm with capital providers in the future, but the percentage has not yet been made public.

Final Thoughts

InsurAce's underwriting model is inspired by the DeFi-summer liquidity incentives concept. To accelerate underwriting, InsurAce issued mining incentives that offer capital providers an APY paid in INSUR tokens. These APYs respond to supply and demand, incentivizing capital providers to rebalance their stakes so that underwriting capital stays evenly distributed and sufficient for modeled payouts. This model provides a simple way to bootstrap liquidity quickly, but LPs who seek higher APYs will leave the pool as soon as they find a protocol with a higher APY.

Regarding cover pricing, it is interesting that InsurAce uses machine learning models to estimate parameters typically used in traditional insurance. However, the data that is available for the DeFi space seems to still be far from the necessary amount to employ these models.

The UST depeg event proved that insurance in DeFi serves its purpose, and in the case of InsurAce, claimants were indeed reimbursed. This is a great step towards adoption, although it took a great toll on the SCR, and the protocol seems to be having difficulty recovering from it. Having mechanisms in place to quickly recover from, or be protected against, such situations is clearly very important.

Armor.Fi/Ease.org

Armor was introduced in January 2021 with the intention of solving fragmented liquidity and limited coverage capacity in the majority of protocols by extending the Nexus Mutual insurance model but removing the Know Your Customer (KYC) requirements using the arNXM vault. Despite successfully making Nexus' coverage products DeFi-compatible in 2022, the core Team felt that the premiums model was not optimal for DeFi. Armor introduced the Uninsurance (Reciprocally-Covered Assets - RCA Coverage) model and changed its name to Ease.org in May 2022.

The arNXM vault allows users to provide collateral to Nexus Mutual without a KYC check by acting as a custodian on their behalf. In addition, the Armor team actively monitors yield and risk factors and designs staking strategies accordingly. The yield-bearing nature of arNXM allows all rewards generated by underwriting Nexus protocols to be distributed directly to arNXM holders. This vault currently provides over 30% of all underwriting funds to Nexus Mutual but has provided 45% in the past.

Armor also introduced a new product to the DeFi space, arCore, based on a pay-as-you-go (PAYG) model with customizable duration and coverage limits. The product charged the insurance policy by block and offered customized duration by allowing users to purchase coverage from a pool of staked arNFTs without locking funds into a fixed contract. Despite being an innovative insurance product, gas costs on the Ethereum mainnet directly inhibited the flexibility this solution sought to provide, as users with smaller wallets faced unaffordable block-level fees. The product was discontinued at midnight on May 31, 2022 (UTC) as part of the rebranding strategy, which is explained in greater detail later.

The arNFTs are yet another product created by the Armor team, and offer users a new way to interact with Nexus Mutual and their coverage policies. Users can mint arNFTs for any protocols for which Nexus Mutual coverage is available, and they will receive an ERC-721 token that they can hold, sell, transfer, or stake to receive fees in ETH and rewards in $ARMOR. The arNFTs will continue to be developed by the Ease team, with new features on the horizon, but they will no longer be able to be staked in the discontinued arCore product.

To meet the increased demand for coverage, Armor developed a second product called arShield, which streamlined and aggregated coverage via Shield Vaults, where users could deposit assets and receive passive coverage for as long as they remained in the vault. The premium cost was deducted from the asset yield, eliminating the need for upfront payments and lowering the premium cost. This concept gave rise to the shared risk ecosystem for which Ease protocol is known today. Since Ease is now live, the arShield vaults have been discontinued.

Reciprocally-Covered Assets (RCAs) were first introduced by the Ease team and are a DeFi-native coverage method in which covered assets simultaneously underwrite the other assets in the ecosystem. This new model enables users to store tokens in Uninsurance vaults with a one-time, vault-wide fee in the event of a hack. These premium-free Uninsurance vaults are possible due to the fact that RCAs are a method for collecting underwriting capital directly from deployed capital within DeFi yield strategies and deducting the premiums directly from the generated yield. In the event that one of the strategies is exploited, Ease liquidates a proportional amount of funds from all vaults to compensate investors. From there, future premium payments replenish the payout liquidation's capital. Since the cost is only incurred in the event of a hack and is spread across all participants, a larger number of participants results in a lower individual fee.

The benefit of this system is that the risk is distributed across the entire ecosystem rather than carried by a single vault or protocol, and users are not required to pay premiums unless there is an exploit. Since risk is proportionally distributed among users, a larger hack results in larger payouts to users but never leads to complete insolvency, making for a much more resilient coverage model. Additionally, a user's funds are never fully covered, as there is a capacity restriction on the vaults in order to maintain solvency. If 25% of the RCA ecosystem is hacked simultaneously, only 75% of the stolen value will be reimbursed, as impacted vaults are only compensated by an amount equal to what the other vaults can absorb. If the hacked value is greater than the total RCA value, the system fails (imagine a DeFi-wide hack affecting many protocols at the same time). The Ease team attempts to prevent this by being selective about which protocols are added to the ecosystem, auditing them, and performing due diligence on protocols the team intends to add. With increasing protocol diversity, this type of system becomes more secure.
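Mechanically, the payout reduces to a proportional liquidation across the surviving vaults. The sketch below illustrates the idea with simple pro-rata weights; this weighting is an assumption, as Ease also adjusts slashing by each vault's community-assessed safeness, discussed below:

```python
def rca_liquidations(vault_tvls: dict[str, float], hacked_vault: str,
                     loss: float) -> dict[str, float]:
    """Sketch of an RCA-style payout: spread a hacked vault's loss
    pro-rata across the remaining vaults.

    Pro-rata weighting is a simplifying assumption; in Ease the slash
    weights also reflect the community-assessed safeness of each vault.
    """
    others = {v: tvl for v, tvl in vault_tvls.items() if v != hacked_vault}
    backing = sum(others.values())
    # If the loss exceeds what the other vaults hold, only a partial
    # reimbursement is possible.
    payable = min(loss, backing)
    return {v: payable * tvl / backing for v, tvl in others.items()}

vaults = {"A": 10e6, "B": 6e6, "C": 4e6}
print(rca_liquidations(vaults, hacked_vault="C", loss=2e6))
# Vault A is slashed $1.25M and vault B $0.75M, proportional to their TVL.
```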

Claim Assessment

Armor’s claim assessment is identical to Nexus, but with Armor governance replacing the Nexus Claim Assessors in the first instance of a claim. In Ease, the DAO will have final say over the contents of each vault's coverage. In RCA's system, all losses are incurred directly from the vault of assets rather than from individuals, thus eliminating the need for proof-of-loss and claim procedures. The DAO votes on the amounts that must be returned to each affected vault and allows the liquidation of tokens from other vaults to complete the payout. Claims payouts will be made by sending affected vaults ETH or a stablecoin, after which users may withdraw payouts proportional to their vault holdings.

When an exploit event occurs, the protocols that are deemed safer will get slashed less, whereas the least secure protocols will be slashed the most. The safeness of each protocol is determined by the broad community itself through Ease token delegation.

A conflict of interest is the primary issue with this stakers-as-insurers approach. Because the DAO votes on the amounts that must be returned to each affected vault and allows the liquidation of tokens from other vaults to complete the payout, there is an incentive to accumulate votes to avoid being slashed. Protocols with higher TVL in the vaults will hold more EASE, so the DAO will tend to vote to return more funds to larger protocols. It is smaller-to-larger protocol insurance, not all-to-all insurance.

Risk Assessment

Technically, reciprocally-covered assets do not require a detailed risk assessment to function. Since no premiums are charged for coverage, Ease is able to cover protocols without a specific risk assessment, with the Armor DAO's initial approval or denial of the protocol following a rigorous investigation by the entire community serving as the figurative risk assessment.

It ultimately relies on the same premise as Nexus protocol, namely that the community is accountable for performing due diligence on projects and assessing their risk. Since the bulk of DeFi communities are made up of average users and not security experts, it would be imprudent to base the entire Ease protocol on the community's diligence.

Adoption and TVL

DeFi Llama's Armor and Ease metrics are ambiguous. DeFi Llama incorporated Nexus into its TVL figure for Armor. The team discovered this and contacted DeFi Llama immediately, but they claimed to be unsure as to why it was occurring, and it was never fixed.

Since the Ease launch in May, the only viable way to track Armor metrics is through their Dune Analytics Dashboard, and based on the above chart, there are no longer any active covers in the protocol, since it was discontinued.

DeFi Llama's metrics for Ease TVL are also invalid because they exclude the legacy product, arNXM, which DeFi Llama incorrectly counts as nearly \$10 million of Armor TVL.

Since Ease was launched before there was a DeFi Llama metrics page for it, the TVL has displayed \$491k from day one. Ease does not offer any official or community data dashboards. The Ease team had the challenging task of launching a new and unique product during a bear market, which may explain why they are having some trouble attracting liquidity.

TVC

In Ease, users deposit tokens in vaults to be covered and to provide coverage to other users. We can say that Ease's TVL is equivalent to its TVC because all deposited funds are protected by other vaults. The issue with this strategy is that if all protocols, or even just the vaults holding the majority of value, are compromised simultaneously, the remaining vaults will not have sufficient funds to cover the defaulted vaults. This relies on the same assumption Sherlock uses, namely that the probability of multiple high-payout events occurring within a short time span is very low. It would be interesting to see a deeper analysis of this assumption and understand under which conditions it breaks down. The way DeFi operates as intertwined lego pieces making up different protocols could strain this assumption, in the sense that exploits in particular protocols could cause losses in others.

Revenue

RCA products are currently not generating any revenue. Revenue from prior Armor products such as arNFT and arNXM is currently enough to cover expenses. Ease.org does not currently charge any fees, but the DAO will have the ability to impose a maintenance fee based on a percentage of the yields created by users. This feature is not currently available. Ease is also working on a Zapper integration, which will allow clients to zap assets such as ETH, USDC, and others into Ease's vaults rather than having to provide the exact underlying asset. This feature will be released from testing soon, and a small fee will be associated with it.

Final Thoughts

Ease's value proposition is based on the assumption that, on average, hacking losses are significantly less costly than the premiums paid. We will be able to confirm this hypothesis once the project is tested using actual exploits.

With this RCA business model, if a hack occurs in one vault, instead of the user paying a contract premium, a small portion of the other vaults is liquidated to cover the loss, proportionally distributing it throughout the ecosystem. The largest, most secure, and most robust protocols, and the users of those protocols, have little incentive to participate in such a system, because under this vault-shared architecture they are more likely to pay for hacks in other protocols than to be hacked and receive funds from other vaults. Even if the safest protocols are slashed less, they will still be slashed multiple times as other protocols are hacked. This risk diversification seems very beneficial for the system as a whole, as a large hack will never result in insolvency. However, proper risk diversification only happens if many different protocols and participants are covered. One variation that could mitigate this would be to create groups of vaults with different risk categories. Riskier protocols could be grouped to share the same risk, or individual users could be better rewarded if they chose to provide the equivalent of their covered amount as cover for a riskier protocol.

Also, relying on community decisions assumes that token holders can conduct extensive due diligence at the smart contract level, which is beyond the knowledge of regular users. The safeness of each protocol is determined by the community through Ease token delegation, which could, in turn, be a point of failure if incentives are misaligned, i.e., if a large portion of voting power is gathered by a protocol or user that could benefit significantly from deeming a protocol safer than it truly is.

Finally, assets in the ecosystem are the collateral for the ecosystem, meaning that the available coverage increases as the ecosystem expands. Given that the risk is shared by all users and all vaults, users are not genuinely insured in the conventional sense. Rather, they do not lose all of their capital in the event of an exploit, only a portion.

Unslashed

Unslashed was launched on January 6, 2021, offering coverage for smart contract hacks, CeFi exchange hacks, stablecoin depegs, and oracle failures, and allowing users to create Capital Pools identical to those of the previously described protocols, in which capital providers deposit ETH and their risk exposure is limited to a single insurance policy. Capital Buckets, structured insurance products that diversify risk across numerous insurance policies, are also available.

Anyone may become a capital provider and provide risk coverage by allocating funds, which generates a return and provides insurance coverage for the ecosystem as a whole. The return comprises three streams: premium policies, the interest generated via Enzyme Finance, and the USF Capital Mining Program, which enables the protocol to reward early adopters and users of Unslashed with the governance token via the USF/ETH Uniswap pool.

Enzyme Finance is an asset management protocol that allows earning yield efficiently on the Capital Supplied and can help increase the available Buckets Capital, therefore, increasing the amount of provided coverage allowed.

Capital Suppliers receive premium payments live as they are directly streamed to them. They are not locked in a specific policy for any amount of time, as they can leave a pool or bucket whenever they desire and have access to liquidity to close the position.

Both capital providers and coverage seekers can trade their underlying tokens on external platforms, as both positions are tokenized as ERC-20 tokens, improving their composability with other DeFi protocols.

Capital Buckets

A Capital Bucket is a collection of properly designed, analyzed, priced, and assembled insurance policies for insurers to underwrite, diversifying their risk exposure.

The Spartan Bucket was the first structured capital bucket available on Unslashed. It protects users in six centralized exchanges (loss of funds policy), two wallets, eight DApps (Smart Contract Protection Policy), Chainlink oracle protection (oracle failure policy), Lido Finance protection (slashing protection policy), three custodians, and four peg loss-related protections. The DAO can increase the default maximum exposure by 5% per policy’s insurance capacity.

Cover Pricing

Unslashed has a pay-as-you-go policy: users can stop the policy at any time, with payments calculated live. Pricing depends on several factors. Besides a fair-pricing methodology applied to each policy or policy type, Unslashed considers the correlations between policies that belong to the same Capital Bucket. The pricing also takes loss distributions into account, as is done in traditional actuarial pricing. The most recent policies include a supply-and-demand curve, allowing the premium to vary with the utilization ratio.
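The pay-as-you-go mechanics can be pictured as a per-second accrual against the cover amount. The sketch below illustrates this; the class and the rate input are assumptions for illustration, not Unslashed's actual interface:

```python
import time

class PayAsYouGoPolicy:
    """Sketch of a pay-as-you-go cover: the premium accrues per second and
    the policy can be stopped at any time, paying only for elapsed time.

    The annual_rate input is assumed to come from the (off-chain) pricing
    model; all names here are illustrative, not Unslashed's actual API.
    """
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def __init__(self, cover_amount: float, annual_rate: float):
        self.cover_amount = cover_amount
        self.annual_rate = annual_rate
        self.start = time.time()

    def premium_owed(self, now: float | None = None) -> float:
        elapsed = (now if now is not None else time.time()) - self.start
        return self.cover_amount * self.annual_rate * elapsed / self.SECONDS_PER_YEAR

policy = PayAsYouGoPolicy(cover_amount=100_000, annual_rate=0.026)
# After 30 days the user owes 100_000 * 2.6% * 30/365, roughly $213.70.
print(policy.premium_owed(policy.start + 30 * 24 * 3600))
```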

The team states they have on-boarded quants from traditional finance and managed to build and calibrate models that allow Unslashed to fairly price risk and structure insurance products. However, none of these models are public and as such they bear natural intrinsic risks, i.e. trust is required.

Beyond the factors considered in premium calculation, there is no information on how the calculation is done, how weights are assigned to each factor, or whether this is a closed-door process run by the team or one that accepts input from governance. Considering that Unslashed uses a pay-as-you-go model, this is most likely run off-chain. Another insurance protocol, Armor, implemented an on-chain pay-as-you-go policy, but it had to be discontinued as Ethereum fees rendered it unsustainable.

Risk Cost

The minimum capital required corresponds to the maximum available cover. This is calculated by a predefined formula that is not publicly disclosed. The design of the Capital Pools prevents withdrawing capital or getting more cover if the corresponding action would result in the maximum payout exceeding the maximum cover. Because the deposited Premium flows into the Capital Pool slowly over time, the Maximum Available Cover does not change, but space can free up to either withdraw some of the capital supplied or purchase additional coverage.

Unslashed considers that diversification across multiple smart contracts is not enough, as similar design patterns may lead to similar attack vectors. For this reason the team chose to diversify the Underwriters'/Capital Suppliers' risk across as many verticals as possible (smart contract risk, validator slashing, exchange hacks, etc.).

No more information could be found on the determination of the minimum capital required, nor on the risk vectors integrated in cover pricing.

Tokenomics

USF is a governance token. Holders can vote on decisions regarding the direction of the protocol and updates to the protocol parameters. The team will initially manage the protocol parameters and gradually transition it to the Unslashed DAO.

Capital suppliers supply assets (e.g., ETH) to Individual Capital Pools and receive yield from the paid premiums. These premiums are paid by cover buyers in the same asset (ETH). When someone instead deposits into Capital Buckets, they earn more types of yield: premiums, asset management yield, and USF capital mining rewards. Since USF is rewarded for supplying capital and has no further utility, sell pressure is created, resulting in a constant decline of the token's value.

Claim Assessment

A DAO-based claim assessment presents the challenge of choosing between the DAO's need to preserve the capital of their mutuals and their conflicting obligation to spend the same money to pay valid claim requests. Unslashed was one of the first decentralized insurance protocols to identify this issue and adopt Kleros to arbitrate claims in a fair, transparent, and efficient manner.

In the case of a claimable incident, a user may submit a claim for reimbursement under the terms of the policy. The claim request is followed by a time during which any user can contest the claim if they believe it violates the claim policy. If no one contests the claim, it is approved and the payment is made. If there is a dispute, a decentralized court case is launched in Kleros and Kleros jurors determine whether the claim is valid or not. A claim can only be contested once, although it can be appealed several times.
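As a compact summary, the sketch below models the terminal claim states this flow can reach; the repeatable appeal loop is omitted for brevity:

```python
from enum import Enum, auto

class ClaimState(Enum):
    SUBMITTED = auto()   # claim filed under the policy terms
    CONTESTED = auto()   # challenged, escalated to a Kleros court case
    APPROVED = auto()
    REJECTED = auto()

def resolve_claim(contested: bool, kleros_rules_valid: bool = False) -> ClaimState:
    """Follow the flow described above: an uncontested claim is approved once
    the challenge window closes; a contested one follows the Kleros jurors'
    ruling. Appeals, which can repeat several times, are omitted."""
    if not contested:
        # No dispute during the challenge window: approve and pay out.
        return ClaimState.APPROVED
    # Disputed: a decentralized Kleros court case decides the outcome.
    return ClaimState.APPROVED if kleros_rules_valid else ClaimState.REJECTED

print(resolve_claim(contested=False))                           # APPROVED
print(resolve_claim(contested=True, kleros_rules_valid=False))  # REJECTED
```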

Before the UST Depeg event, only two claim requests had been submitted to Unslashed. After the event, more than eighty claims were filed. Unslashed's largest claim to date, a 742 ETH loss event caused by the UST depeg, was rejected multiple times by the Kleros court arbitrating the case due to a 51% attack.

Adoption and TVL

Unslashed's Total Value Locked (TVL) began at \$130 million on 14 March 2021 and peaked at approximately \$169 million on 12 May 2021. Since then, Unslashed's TVL has been declining, reaching just under \$23 million at the beginning of October.

At the time of Terra's collapse, Unslashed provided Stablecoins Depeg for UST users. As specified in their UST De-Peg Cover Wording, the Unslashed UST De-Peg coverage was available for claim requests after a 14-day Time Weighted Average Price (TWAP) of UST below $0.87. Unslashed paid more than 1000 ETH in June, and the payments were made in multiple batches; therefore, the chart does not depict a sudden decline in value, but rather a gradual decline throughout June.

A total of 102 claims were ever submitted, and a total of 1,018.391 ETH was saved as a result of 7 claim requests that were disputed and refused on Kleros, all of them linked to the UST Depeg.

TVC

Unslashed launched its product during a bull market fueled by the DeFi summer, which attracted a significant amount of capital and cover purchases, since customers were willing to pay an additional price to protect their assets. Close to 100 claims were submitted as a result of the UST depeg, and once paid or denied, they expired, reflecting the subsequent drop in active coverage. Like other insurance protocols, Unslashed has struggled to return to its glory days following this catastrophe.

Revenue

There is currently no publicly available information regarding the Unslashed protocol's revenue stream or similar statistics, so no conclusion can be drawn.

Final Thoughts

Unslashed seems to prioritize partnerships with DeFi protocols, protecting them against some of their risks, instead of targeting users. Giving both capital providers and coverage seekers ERC-20 tokens that represent their positions allows other protocols to build on top of Unslashed and potentially create added value. Another protocol could, for instance, issue risk-free tokens that combine a position and the corresponding insurance. Users can also speculate, for example by selling their premium tokens at a higher price when there is a lack of capacity to offer more insurance.

NSure

After a Polygon beta, NSure launched on Ethereum in April 2021. NSure is conceptually similar to Nexus Mutual in that it has a capital pool of multiple accepted assets and a surplus pool that accrues capital through paid premiums. Unlike Nexus Mutual, however, it uses a Dynamic Price Model to determine premiums, which vary across products in the marketplace based on real-time supply and demand. This pricing model includes a Risk Parameter based on the rating assigned to each project by NSure. Their current business model does not necessarily require KYC.

NSure Participants

Cover Providers can stake NSURE tokens against protocols or custodians to underwrite insurance and earn 50% of premiums. Another 40% of premiums goes to the surplus pool, and 10% is kept locked until the end of the coverage to incentivize users to participate in the voting process if there is a claim request. Rewards are proportional to the amount of capital the cover provider has locked into the pool.

On the other hand, Claim Assessors are members who stake NSure tokens to evaluate claims submitted by other members and receive rewards for voting in conformity with the consensus.

Cover Pricing

Nsure employs a dynamic pricing model based on supply and demand to determine policy premiums.

The model employs the 95th percentile of a beta distribution (Beta(α, β)), and the shape parameters are capital demand and supply. The premium is also influenced by a risk factor that accounts for the project's level of security and a cost loading that accounts for claim settlement costs and other internal expenses.

$$\text{annual premium} = \max\left[\text{95th percentile of } \operatorname{Beta}(\alpha, \beta),\ \text{min prem factor}\right] \times \text{risk factor}$$

$$\alpha = \text{outstanding policy limit (in USD)} \times \text{demand scale factor}$$

$$\beta = \text{staking pool (in Nsure tokens)} \times \text{staking scale factor}$$

$$\text{policy premium} = \frac{\text{policy duration}}{365} \times \text{annual premium} \times \left[1 + \left(\frac{365}{\text{policy duration}} - 1\right) \times \text{avg. claim cost \%}\right]$$
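Since the formula is fully specified up to its parameters, it translates directly into code. The sketch below uses scipy for the Beta quantile; the scale factors, the minimum premium factor, and all input numbers are illustrative stand-ins rather than Nsure's published parameters:

```python
from scipy.stats import beta as beta_dist

def annual_premium(outstanding_limit_usd: float, staked_nsure: float,
                   demand_scale: float, staking_scale: float,
                   risk_factor: float, min_prem_factor: float) -> float:
    """Annual premium rate per the formulas above: the 95th percentile of
    Beta(alpha, beta), floored at a minimum, then scaled by the risk factor.

    The scale factors and the minimum are protocol parameters; the values
    passed below are illustrative, not Nsure's published settings.
    """
    a = outstanding_limit_usd * demand_scale   # alpha: demand side
    b = staked_nsure * staking_scale           # beta: supply side
    rate = max(beta_dist.ppf(0.95, a, b), min_prem_factor)
    return rate * risk_factor

def policy_premium(annual: float, duration_days: float, avg_claim_cost: float) -> float:
    """Pro-rated premium with a loading for per-policy claim settlement costs."""
    return (duration_days / 365) * annual * (1 + (365 / duration_days - 1) * avg_claim_cost)

ap = annual_premium(2_000_000, 20_000_000, demand_scale=1e-5,
                    staking_scale=1e-5, risk_factor=1.2, min_prem_factor=0.02)
print(policy_premium(ap, duration_days=90, avg_claim_cost=0.01))
```

Note how the mechanism works: more cover demand raises the Beta distribution's alpha and pushes the 95th percentile (and hence the premium) up, while more staked capital raises beta and pushes it down.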

The team recognizes that, due to the lack of historical data on smart contract exploits, it is difficult to apply traditional actuarial pricing to Nsure products. They argue that, for transparency's sake, it is beneficial to use a supply-and-demand model that is easily verifiable.

Using a dynamic pricing model based on supply and demand means that if the capital supply is high, the premium rate will be lower; if the policy cover demand is high, the premium rate will rise. Premiums are susceptible to supply and demand forces; consequently, the weaker the supply and demand forces, the more variable the premiums. This means that the more insured value there is in DeFi and in particular in Nsure, the less sensitive premium pricing will be to demand and supply changes, which increases the robustness of the insurance landscape. However, in the case of Nsure, the less the price is driven by supply and demand, the more it would be influenced by a risk cost that is currently determined by the team in a non-transparent way, which could be problematic.

The risk factor should account for the riskiness embedded in each project. Without this factor the premium rate of two projects would be the same if their capital demand and supply were the same, which is not ideal. However, finding a decentralized way to assess this risk factor would be an improvement.

Risk Cost

Nsure developed the Nsure Smart Contract Overall Security Score (N-SCOSS), a 0 to 100 rating system for determining the risk cost for every project.

N-SCOSS is based on five major characteristics that, according to Nsure, make up the possibility for a protocol to suffer an exploit or bug in the code. These are the following: History and Team, Exposure (aka TVL, Industry Segment), Audit, Code Quality, and Developer Community. The team assigns a weight to each category and performs due diligence on each project by rating each category.

The formula used to calculate the N-SCOSS is as follows:

$$\text{N-SCOSS} = \sum_{i=1}^{5} w_i \cdot N_i$$

$$N_i = \sum_{j=1}^{k_i} w_{i,j} \cdot N_{i,j}, \quad 0 \leq N_{i,j} \leq 100$$

where $N_i$ ($i = 1, \ldots, 5$) are the five pillars of N-SCOSS and $w_i$ is the weight attributed to each. The pillars are further subdivided into several separately analyzed rating factors, denoted $N_{i,j}$. Weights are assigned to each pillar and each rating factor to quantify its relevance to the code's security.

To develop this system, factor groups that logically impact the code security were selected. Then historical hack events data were mapped to those selected rating factors, and the team analyzed whether they are correlated. The significantly correlated factors were included in the final calculation of N-SCOSS.

The pillar of History & Team considers the following sub-factors: project age, past exploits (if any), team anonymity and team experience in programming. The Exposure factor entails: total value locked, industry segment and infrastructure. The Audit factor is measured by audit transparency and scope, audit findings, audit firm trust score and other credits. Code Quality is assessed through documentation and testing. Finally, the Developer Community factor takes into account bug bounty programs and issues raised on Github.
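As a concrete illustration of the two-level weighted sum, the sketch below scores a hypothetical project. The weights and scores are invented for the example, since Nsure's actual weights are not public:

```python
# Hypothetical pillar scores (0-100) and weights, for illustration only;
# Nsure's actual weights are not public.
pillars = {
    "history_and_team":    (80, 0.25),
    "exposure":            (60, 0.20),
    "audit":               (90, 0.30),
    "code_quality":        (70, 0.15),
    "developer_community": (85, 0.10),
}

def n_scoss(pillars: dict[str, tuple[float, float]]) -> float:
    """Weighted sum of pillar scores; each pillar score N_i would itself be
    a weighted sum of its sub-factor ratings, omitted here for brevity."""
    assert abs(sum(w for _, w in pillars.values()) - 1.0) < 1e-9
    return sum(score * w for score, w in pillars.values())

print(n_scoss(pillars))  # 78.0
```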

The team points to some improvements that could be made to the system, such as introducing an adjustment variable to credit or penalize something not captured within the 5-pillar structure. Another future improvement mentioned by the team concerns the data sources. Nsure has been using data from sources such as SlowMist Hack Zone, DeBank, and DefiPulse, but wants to set up an automatic data feed into the rating model via external data aggregation, minimizing manual interference. This could minimize centralized judgment and, in the future, make N-SCOSS an auto-generated indicator for users' reference. This effort to make Nsure's risk assessment more transparent, unbiased, and available to all is definitely a step in the right direction. Another potential improvement would be for new factors, and the corresponding weights, to be added through governance.

Minimum Capital Requirement

Naturally, the safest way for an insurance company to guarantee it can always pay out all claims would be to hold 100% cash against total obligations. However, the low probability of these events and the possibility of diversifying risk allow insurers to use the capital provided more efficiently. Nevertheless, the primary concern of the insurance capital model, as also seen in Nexus Mutual and InsurAce, should be to calculate the capital required to guarantee solvency of the risk pool at a high confidence level, such as the 99.5% of EIOPA's Solvency II framework. The Capital Model is used to determine the Minimum Capital Requirement (MCR), which in turn sets the minimum capital required to be locked in the Capital Pool and the Staking Power Used in the Underwriting Module.

The Minimum Capital Requirement, i.e., the minimum amount of capital Nsure needs in order to guarantee payouts for all claims at a high confidence level, is calculated as follows:

$$MCR = \sqrt{\sum_{i,j} \operatorname{Corr}(i,j) \cdot RF(i) \cdot EX(i) \cdot RF(j) \cdot EX(j)}$$

where $RF(i)$ is the risk factor of product $i$, $EX(i)$ is the total exposure of product $i$, and $\operatorname{Corr}(i,j)$ is the correlation between products $i$ and $j$.

Reflecting correlated risks in the MCR is something not all insurance protocols do, and it seems sensible. A few factors could indicate risk correlation between DeFi projects, for example: projects that fork or reuse existing projects' code, structural similarity (projects of the same business type tend to be vulnerable to the same hack methods), projects that share oracles, and naturally the lego structure of DeFi.
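In code, the MCR is the square root of a correlation-weighted quadratic form over risk-weighted exposures. The sketch below, with invented numbers, shows how the correlation assumption changes the requirement: perfectly correlated products add linearly, while independent ones earn a diversification benefit:

```python
import numpy as np

def mcr(corr: np.ndarray, risk_factors: np.ndarray, exposures: np.ndarray) -> float:
    """MCR = sqrt(sum_ij Corr(i,j) * RF(i)*EX(i) * RF(j)*EX(j)), i.e. the
    square root of a correlation-weighted quadratic form."""
    v = risk_factors * exposures  # per-product risk-weighted exposure
    return float(np.sqrt(v @ corr @ v))

# Two products with illustrative numbers: fully correlated vs. independent.
rf = np.array([0.05, 0.08])
ex = np.array([1_000_000, 500_000])
print(mcr(np.array([[1.0, 1.0], [1.0, 1.0]]), rf, ex))  # 90000.0, a simple sum
print(mcr(np.array([[1.0, 0.0], [0.0, 1.0]]), rf, ex))  # ~64031, diversification
```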

Tokenomics

NSURE token is a utility token used by Nsure participants and can only be used on Nsure Network. NSURE fuels platform operations such as voting on claims and governance-related functions. Additionally, the token is used for staking and signaling the perceived risk of the different platforms covered by Nsure.

NSURE tokens will be issued as an incentive for capital providers participating in the Capital Pool with their assets. The rewarded NSURE can be used to stake on the insurance contracts, acting as an underwriter within the platform, to provide further capital and share part of the premiums collected. 40% of all premiums are distributed among participants in the underwriting pools. This mechanism was expected to act as a natural balance, attracting new participants to match demand, providing the capital and capacity needed to attract even more users. However, it is worth noting that rewarding underwriters with 40% of premiums is on the low end of what can be seen in other insurance protocols, where underwriters receive 50% or more. If liquidity incentives are not enough to outweigh the risks of underwriting, the total value locked in the capital pool may not be enough to cover claims. This can take a turn for the worse, as insufficient capital in the capital pool disables withdrawals, which can in turn discourage new deposits, making it difficult to escape the situation.

Claim Assessment

The assessment is carried out through a decentralized decision-making process where 5 claim assessors, from those who have staked a sufficient number of tokens, are randomly assigned for each claim. This prevents people from abusing their power or manipulating the system. During the claim evaluation process, the staked tokens will be locked and destroyed if the assessor comes to a different conclusion about the claim than the majority. A challenge procedure and a subsequent public vote after a successful challenge contribute to the fairness of the claim evaluation procedure.

Each user can submit one free first claim on their policy. If the claim is declined and the user wants to file another claim on the same policy, they have to pay a fee worth 10% of the policy premium. After a claim is submitted, the 5 claim assessors are randomly chosen, and to avoid potential conflicts of interest, the policy premium is unknown to them. Both users and NSURE holders can dispute the final decision. A disputed case with sufficient stakes ends in a public vote, the ultimate verdict for the claim, with no further disputes allowed.

Adoption and TVL

Adoption has been difficult for Nsure. Its TVL peaked quickly after launch, reaching a maximum of roughly \$15m, and currently sits at around \$360k.

TVC

The protocol's active value is around \$50.8k, around 14% of current TVL. 92% of the active coverage (around \$46.8k) is to protect users against a Compound V2 exploit, while the remaining value is to protect users against a KeeperDAO exploit.

This data was gathered from the protocol's analytics website; however, it may need updating. According to the analytics page, only two of the 27 available pools are being used to protect users. If the information is accurate, the protocol's Capital Efficiency Ratio is quite poor, as only \$50.8k of a total \$360k TVL is used to provide user coverage. It was not possible to investigate this further, as the team is not active on Discord and does not seem available to answer questions.

Revenue

The protocol is anticipated to generate \$1,600 a year in premiums. While Nsure offers a metrics page, the revenue table appears not to be working properly at the time of writing. It is also important to note that the last policy was purchased on December 17, 2021, suggesting that either the website charts need to be revised or there is a general lack of acceptance of Nsure as an insurance provider in the DeFi space, which would explain its extremely low TVL.

Final Thoughts

Despite having a dynamic pricing mechanism that should have helped align supply and demand, it is clear that Nsure has not been able to obtain a significant and steady market share. As liquidity incentives dried up, market players' willingness to deposit likely decreased, as the risks vastly outweigh rewards paid in inflated tokens. If there is insufficient capital to cover claims, users' tokens could be locked in the capital pool indefinitely (or for an extended duration). Lastly, it is unknown whether the capital, price, and risk models are run on-chain or off-chain, as well as how certain parameters are weighted.

However, the randomly selected claim assessors and the non-disclosure of the claim amount are excellent concepts for preventing the conflict of interest inherent in stakers-as-underwriters systems such as Nsure.

Risk Harbor

Risk Harbor, launched in May 2021, defines itself as a risk management marketplace that protects liquidity providers and stakers from smart contract risks, hacks, and attacks via a fully automated, transparent, and unbiased invariant detection method. In other words, it offers parametric protection over on-chain verifiable metrics, thus excluding off-chain attack vectors such as frontend attacks. As implied by its name, parametric insurance establishes parameters that determine payouts based on specific metrics. Underwriters establish risk management pools with predetermined parameters, and users choose which pool to purchase coverage from.

Risk Harbor Core and Risk Harbor Ozone are its two major parts. The Core module is a native EVM implementation of Risk Harbor, compatible with chains such as Ethereum, Avalanche, and Arbitrum, among others. The Ozone module was created on Terra and operates in the Cosmos ecosystem.

One of the problems faced in insurance is the fragmentation of capital, where underwriters need to actively manage their capital and select which protocols and products they'd like to underwrite. Risk Harbor Core attempts to tackle this by creating underwriting vaults in which many protocols can be covered. The funds deposited in the pools are locked until expiration, which can be a barrier to attracting capital.

Deposits in DeFi systems are frequently represented by claim tokens that are minted when deposits are made and burned when the underlying funds are withdrawn. Risk Harbor's automated claims evaluation method checks whether such claim tokens remain redeemable against the protocol that issued them, analyzing important protocol-specific invariants.

Risk Harbor Participants

Underwriters supply capital to cover a potential user's loss in the event of a protocol vulnerability in exchange for upfront premiums and the compromised token in the event of a claim. Anyone can become an underwriter by supplying capital in one of the pools, if they are willing to assume the risk. When providing coverage, underwriters determine the Price Point at which they are willing to accept risk and deposit capital into the pool. They may remove their unutilized capital at any moment. If underwriters are unable to completely withdraw their position, it is because someone has purchased protection against it.

After deciding to withdraw their assets from the pool, underwriters must wait 12 hours due to the withdrawal cooldown, implemented as a safeguard against MEV and front-running. After the cooldown period, users have 12 hours to complete the withdrawal; otherwise, they must begin the process again.

Similarly, users who are willing to pay a premium can purchase a policy to protect themselves against vulnerabilities in DeFi protocols.

Cover pricing

The cover pricing is determined by the AMM that takes into account market conditions and protocol risk to calculate protection pricing automatically. When underwriters deposit funds to the pool, they pick a Price Point at which they are willing to assume risk. The Price Point is the proportion of the overall underwriting amount a potential user will pay in advance when buying protection from the protocol. These premiums would flow to the underwriters who had deposited funds at the chosen Price Point.

Users searching for coverage monitor the available price points and purchase at any Price Point with sufficient unused underwriting capital. If a buyer desires more coverage than is available at a single price point, they can split the order across multiple price points.
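Mechanically, this resembles filling an order against a book of underwriting offers. The sketch below shows a buyer splitting a purchase across ascending price points; it is an illustrative model, not Risk Harbor's implementation:

```python
def fill_cover_order(amount: float,
                     price_points: dict[float, float]) -> list[tuple[float, float]]:
    """Fill a coverage purchase across ascending price points.

    price_points maps a Price Point (the upfront premium as a fraction of
    the underwriting amount) to the unused underwriting capital offered
    there. Returns (price_point, amount_filled) legs; a buyer naturally
    takes the cheapest capital first. Illustrative sketch only.
    """
    fills = []
    for price in sorted(price_points):
        if amount <= 0:
            break
        take = min(amount, price_points[price])
        if take > 0:
            fills.append((price, take))
            amount -= take
    if amount > 0:
        raise ValueError("not enough unused underwriting capital")
    return fills

# $150k of cover: $100k available at a 2% price point, the rest at 3%.
print(fill_cover_order(150_000, {0.03: 200_000, 0.02: 100_000}))
# [(0.02, 100000), (0.03, 50000)]
```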

The price depends on a variety of factors. First among these are the assessed hazards of the protocol for which protection is being sold. Risk Harbor's team decides how to weigh those hazards before feeding them to the AMM. The second factor is the amount of outstanding protection that has been sold. Being risk-averse, the protocol prefers to spread its liabilities. This means that if protection on a certain pool is in great demand, the AMM will quote a higher price for protection on that pool, similar to the dynamic demand-and-supply pricing seen in various insurance protocols. Likewise, if the protocol considers that it bears commitments correlated with the protection a user is attempting to purchase, the price will be higher, because the protocol is risk-averse.

Risk aversion is a vault-level parameter that helps the AMM price protection. Higher risk-aversion parameters mean protection costs increase more rapidly, while lower risk-aversion parameters keep protection costs closer to actuarially reasonable rates.

A risk-on vault, for instance, indicates that the vault is not particularly risk-averse. Risk-on vaults are appropriate for underwriters with a high risk tolerance, such as large, diversified hedge funds and deep-pocketed DeFi power users. A risk-off or conservative vault is preferable for underwriters with a lower risk tolerance, such as DAOs and pension funds.

Risk cost

The risk model is one of the inputs to cover pricing. The risk cost is expected to follow the probability distribution of default occurrences, informing the AMM of the likelihood of a default event occurring in each of the vault's pools. The risk model also includes the correlation between different occurrences, as is the case for some insurance protocols like Nsure.

There is no information as to how these probability distributions are derived, nor whether this is done on-chain or off-chain.

Tokenomics

There is no Risk Harbor token (as of 26 October 2022).

Claim assessment

Risk Harbor's claim assessment is reasonably straightforward and independent of community voting. The user confirms a claim token transfer, provides credit tokens (e.g., cUSDC) to the Underwriter Contract, and the code verifies the validity of the claim before sending the claim tokens to the underwriters and the payout from the underwriting funds to the user. Before assuming that a claim is legitimate, it waits at least one block (to prevent flashloan attacks) and then verifies its validity.

The automated claim evaluation procedure monitors the evolution of public system state variables directly on-chain to evaluate whether or not a claim should be paid out. These variables vary between protocols; hence, they must also vary between Policies. For example, the ETH in Compound Policy tracks the ratio of outstanding claim tokens (cETH) to USDC. However, the same would not make sense for a protocol covering USDC in AAVE, therefore the system would track distinct state variables.
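A stylized version of such an invariant check is sketched below: the policy tracks one state variable and flags a claim as valid when that variable departs from its healthy level. The names, numbers, and tolerance are illustrative assumptions, not Risk Harbor's actual parameters:

```python
def claim_is_valid(claim_tokens_outstanding: float, underlying_reserves: float,
                   healthy_ratio: float, tolerance: float = 0.02) -> bool:
    """Sketch of an automated, parametric claim check: compare an on-chain
    state variable (here, underlying reserves per outstanding claim token)
    against its healthy level. The variable choice and tolerance are
    illustrative; each policy tracks its own protocol-specific invariants.
    """
    current_ratio = underlying_reserves / claim_tokens_outstanding
    return current_ratio < healthy_ratio * (1 - tolerance)

# Healthy state: each claim token is redeemable for 0.02 of the underlying.
print(claim_is_valid(1_000_000, 19_000, healthy_ratio=0.02))  # True:  pay out
print(claim_is_valid(1_000_000, 20_000, healthy_ratio=0.02))  # False: no default
```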

Automated claims assessment is impartial, scalable, and faster than governance-based processes; however, it is currently achievable only for parametric insurance.

The UST Depeg

Compared to InsurAce and Unslashed, Risk Harbor's coverage protection for UST depeg events was superior. With InsurAce, customers had to wait for the 10-day Time-Weighted Average Price (TWAP) of UST to fall below \$0.88, whereas Unslashed required the 14-day TWAP to go below \$0.87. With Risk Harbor, reimbursement occurred as soon as the UST price on Chainlink fell below \$0.95, allowing holders to automatically exchange their wrapped aUST for USDC.
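The difference between these trigger designs is easy to see in code. The sketch below contrasts a trailing-average (TWAP) trigger with a spot-oracle trigger on an invented price path:

```python
import numpy as np

def twap_trigger(prices: np.ndarray, window_days: int, threshold: float) -> bool:
    """TWAP-style trigger (InsurAce/Unslashed): the average price over the
    trailing window must fall below the threshold, which delays payouts."""
    return prices[-window_days:].mean() < threshold

def spot_trigger(price: float, threshold: float) -> bool:
    """Spot-style trigger (Risk Harbor): a single oracle print below the
    threshold opens the payout immediately."""
    return price < threshold

# Invented daily UST prices during a depeg, for illustration only.
ust = np.array([1.00, 0.99, 0.95, 0.80, 0.60, 0.30, 0.15, 0.10, 0.08, 0.07])
print(spot_trigger(ust[3], 0.95))   # True on day 4 already
print(twap_trigger(ust, 10, 0.88))  # only True once the 10-day mean has sunk
```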

The protocol worked as expected: it was able to automatically detect the UST depeg, and claims were also paid automatically whenever there was unused liquidity in the pool.

Revenue

As of 26 October 2022, there are no fees incorporated in the protocol.

Adoption and TVL

Despite launching on EVM-compatible chains, its adoption has lagged. Of the current \$14.5m in TVL, \$14m is on Terra2 and \$410k on Arbitrum.

Risk Harbor was fairly popular in the Cosmos ecosystem before the UST Depeg event. As can be seen in the chart, after the Terra/Luna collapse the TVL took a big hit, mainly as LUNA and Terra-native tokens spiraled out of control to ~\$0.

The UST depeg vault on Risk Harbor held \$2.5m of coverage before the collapse. As soon as the UST price went under \$0.95, policyholders were allowed to swap their distressed assets (UST) for USDC.

It is important to note that only some UST pools on Risk Harbor covered stablecoin-depeg risk.

TVC

Risk Harbor doesn't have an analytics dashboard yet, and there is currently no publicly available information regarding the protocol's TVC or similar statistics, so no conclusion can be drawn.

Final thoughts

Parametric insurance is a double-edged sword. On one hand, it provides quick payouts based on predefined parameters. On the other hand, it lacks the flexibility to cover complex events or situations where moral hazard exists.

Risk Harbor doesn’t fragment the liquidity of policy covers, rather liquidity is unified under a single pool. This is great at protocol level as new products/protocols can be covered without needing to bootstrap additional liquidity. However, this implies that liquidity providers need to fully trust the decisions taken by the protocol.

Risk Harbor implemented a rather innovative automated claims assessment that is impartial, scalable, and faster than governance-based processes.

The cover pricing mechanism is very innovative and could be an interesting new alternative. However, no information could be found regarding how the default occurrence probability needed for the risk cost is obtained, nor how the risk cost is integrated into cover pricing, so a more in-depth analysis is not possible.

Bridge Mutual

With the stablecoin market cap just over \$23 billion, Bridge Mutual announced its protocol in November 2020 and launched on July 9, 2021 with no KYC, permissionless creation of coverage pools, portfolio-based insurance coverage, and the ability to underwrite policies with stablecoins in exchange for an attractive yield. In August 2021, fourteen days after the Popsicle Finance hack, it paid out its first claim.

In February 2022, Bridge Mutual released V2 with capital efficiency improvements; leveraged portfolios, which allow users willing to assume higher risk for a higher APY to underwrite insurance for multiple projects simultaneously; and Shield Mining, a novel feature that allows projects and individuals to contribute X tokens to the Project X Coverage Pool in order to increase the pool's APY and attract more liquidity. It also introduced the Capital Pool, an investment arm of Bridge Mutual that invests unused capital into third-party DeFi protocols and generates revenue for the vault and token holders.

Covers

On Bridge Mutual, anyone can create a coverage pool for any smart contract, exchange, or listed service in exchange for yield. To do so, a user just has to choose the appropriate network, enter the corresponding contract ID for the project's token, and deposit an initial amount of capital in USDT. Projects that are confident in their security can incentivize Coverage Providers by supplying protocol tokens as additional rewards that get distributed to providers. This is known as Shield Mining, and it is a good way for projects to increase the amount of coverage available in their Coverage Pool.

Users who want to buy coverage, the Policy Holders, pay for it using USDT. This differs from other insurance protocols like Nexus Mutual, where all payments are made in ETH and even the value of NXM is strongly influenced by ETH. Bridge Mutual's approach seems more market-neutral and may be less volatile in bad market conditions. It is, however, interesting that only USDT is accepted and not other stablecoins, such as USDC.

Bridge Mutual also provides coverage for stablecoins as a different product within the platform. This protects against loss of value caused by a de-pegging event.

Pools

There are three types of pools in Bridge Mutual: The Coverage Pools, the Capital Pool, and the Reinsurance Pool. Both Capital and Reinsurance Pools are internal pools, which means that users cannot directly interact with them. Their goal is to enhance the protocol’s usability and capital efficiency.

For each covered project there is a corresponding Coverage Pool. As described before, USDT must be deposited into the pool by its creator, and the project can choose to provide additional incentives. USDT deposited in these pools flows into the Capital Pool, where it is used to earn passive income for BMI stakers and the protocol. The Capital Pool sends USDT to low-risk yield generation platforms. It is responsible for coverage liquidity withdrawals, policy payouts, and investments, and is rebalanced daily to guarantee operations and payouts.

The Reinsurance Pool is a protocol-owned vault that acts as an internal coverage provider to de-risk the protocol. It acts as a de facto Leveraged Portfolio with key differences: it uses only protocol-owned funds, has a lower risk profile, and receives a lower APY from Coverage Pools (an APY comparable to that of a regular Coverage Provider, while being exposed to risk similar to that of a leveraged portfolio). The Reinsurance Pool accumulates the yield generated by third-party protocols and re-introduces it to the Capital Pool. It effectively increases the supply of cheaper coverage on selected pools and increases capital efficiency.

Tokenomics

Members stake USDT against protocols or custodians and get back bmixCover. As in Nexus Mutual, a stake against a protocol is seen as a vote of confidence, showing that stakers think the protocol is secure. Stakers earn 80% of premiums paid, while the remaining 20% goes to the Reinsurance Pool as a protocol fee. This share of premiums going to stakers is larger than in other insurance protocols, which give only 50% of premiums to cover providers.

Coverage providers can also stake bmixCover in the staking contract pool in order to receive additional BMI rewards. They are issued a BMI NFT Bond that represents the amount of USDT staked. These are interest- and risk-bearing assets representing the USDT deposited in a coverage pool. They are tradeable and can be sold on any NFT marketplace. This potentially adds value to cover provision, since the provided assets are not locked and can be used in a more capital-efficient manner.

Users can also do what Bridge Mutual calls “Native BMI staking”. In this case a user stakes BMI in the BMI Staking Contract, and BMI rewards are compounded automatically onto the principal. When a user wants to withdraw these tokens from the contract, they must submit a request and wait 8 days. After these 8 days the user has 48 hours to withdraw their tokens. If the user still hasn’t withdrawn after those 48 hours, another unstake request must be submitted and the 8-day waiting period resets. As proof of their staking position the user receives stkBMI, which are in turn tradable tokens. Native BMI staking currently distributes rewards at a rate of 1 BMI per block, so the APY naturally depends on the total amount of BMI staked in the pool.
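As a minimal sketch of the unstake timing rules just described (the 8-day wait, the 48-hour window, and the reset), assuming timestamps in seconds and purely illustrative class and method names:

```python
# Sketch of the native BMI unstake flow described above: an 8-day waiting
# period followed by a 48-hour withdrawal window, after which the request
# resets. Illustrative only; not Bridge Mutual's actual contract logic.

WAIT = 8 * 24 * 3600   # 8-day waiting period, in seconds
WINDOW = 48 * 3600     # 48-hour withdrawal window, in seconds

class UnstakeRequest:
    def __init__(self, requested_at: int):
        self.requested_at = requested_at

    def can_withdraw(self, now: int) -> bool:
        start = self.requested_at + WAIT
        return start <= now < start + WINDOW

    def expired(self, now: int) -> bool:
        # Past the window: a fresh request (and a new 8-day wait) is needed.
        return now >= self.requested_at + WAIT + WINDOW

req = UnstakeRequest(requested_at=0)
print(req.can_withdraw(7 * 24 * 3600))   # False: still in the waiting period
print(req.can_withdraw(9 * 24 * 3600))   # True: inside the 48-hour window
print(req.expired(11 * 24 * 3600))       # True: must submit a new request
```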

stkBMI can also be used to vote on claims by locking it in the voting contract.

Withdrawal periods are usually seen as a drawback by users. However, voting with the majority also yields rewards in the form of reputation (which in turn increases future rewards), BMI tokens, and USDT. Since natively staking BMI is the only way to participate in voting, meaningful rewards can make the withdrawal period seem negligible. This ties the value of the BMI token to the willingness to participate in the protocol.

Incentivization of capital provision doesn’t only come from BMI, but can also come from the protocols’ own tokens, through Shield Mining.

Investments

The Capital Pool only makes investments in the most well-known, tested, and liquid protocols. Even so, it naturally adds some risk to the protocol. Coverage providers do not directly get a share of the yield; instead, the yield is entirely deposited in the Reinsurance Pool, thereby decreasing the risk exposure of coverage providers and reducing the price for policyholders, effectively creating a win-win situation. Later, the DAO will be able to decide on other uses for this yield, such as BMI buybacks from exchanges.

Premium Pricing

Like InsurAce, Bridge Mutual uses a dynamic price model based on the utilization ratio, i.e., the supply and demand of a cover. The variables considered are the utilization ratio of the pool, the duration of the cover, and the amount covered. As each of these goes up, the price of coverage also goes up.

While both InsurAce and Bridge Mutual use dynamic pricing models, they differ in implementation. InsurAce uses aggregate loss distribution models to calculate a base premium, which applies while the utilization ratio is below 65%, and switches to dynamic pricing above that. Bridge Mutual establishes a minimum (1.8%) and maximum (30%) premium. A utilization ratio above 85% is considered risky for the protocol, and as such the premium increases more rapidly beyond that point.
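To make the shape of such a curve concrete, here is a minimal sketch assuming a piecewise-linear curve between the quoted 1.8% floor and 30% cap, with a steeper slope past the 85% utilization kink. The kink premium and the linear shape are assumptions, since Bridge Mutual's exact formula is not given here.

```python
# Illustrative utilization-based premium curve with the parameters quoted
# above: a 1.8% floor, a 30% cap, and a steeper slope once utilization
# exceeds 85%. The piecewise-linear shape and the kink premium are assumed.

MIN_PREMIUM, MAX_PREMIUM = 0.018, 0.30
KINK = 0.85              # utilization beyond which pricing accelerates
PREMIUM_AT_KINK = 0.05   # assumed premium at the kink point (illustrative)

def annual_premium_rate(utilization: float) -> float:
    u = max(0.0, min(1.0, utilization))
    if u <= KINK:
        # Gentle slope from the floor up to the kink.
        return MIN_PREMIUM + (PREMIUM_AT_KINK - MIN_PREMIUM) * u / KINK
    # Steep slope from the kink up to the cap at 100% utilization.
    return PREMIUM_AT_KINK + (MAX_PREMIUM - PREMIUM_AT_KINK) * (u - KINK) / (1 - KINK)

for u in (0.0, 0.5, 0.85, 0.95, 1.0):
    print(f"utilization {u:.0%} -> premium {annual_premium_rate(u):.2%}")
```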

The risk cost for Bridge Mutual is the utilization ratio. A high utilization ratio implies that many users are willing to take insurance against the project while few are ready to provide coverage, hence the project is considered risky. However, these pools charge higher premiums and hence offer a higher APY, which can drive the utilization ratio back down. There is no other direct evaluation of risk beyond the utilization ratio. However, funds from the Reinsurance Pool are used to decrease the price of coverage by padding the utilization ratio, using algorithms based on the pool’s risk profile as determined by the DAO.

Minimum Capital Required

To ensure there is enough liquidity in a pool to pay all outstanding covers, coverage providers must wait 4 days after a withdrawal request before withdrawing their USDT. They can only withdraw up to the amount that pushes the utilization ratio of the particular coverage pool to 100%, and withdrawals are only possible when there are no active claims against the pool. This can create a poor user experience for projects with small coverage pools.
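A minimal sketch of that withdrawal cap, assuming utilization is active coverage divided by pool liquidity and using illustrative names:

```python
# Sketch of the withdrawal constraint described above: a provider can only
# withdraw up to the amount that would push the pool's utilization ratio to
# 100%, and only when no claims are pending. Illustrative names and logic.

def max_withdrawable(pool_liquidity: float, active_coverage: float,
                     provider_balance: float, active_claims: int) -> float:
    if active_claims > 0:
        return 0.0  # withdrawals blocked while claims are open
    # Utilization = active_coverage / pool_liquidity must stay <= 1.0,
    # so at most (pool_liquidity - active_coverage) can leave the pool.
    headroom = max(0.0, pool_liquidity - active_coverage)
    return min(provider_balance, headroom)

# Pool holds 100k USDT, 70k of coverage is outstanding, provider staked 50k:
print(max_withdrawable(100_000, 70_000, 50_000, active_claims=0))  # -> 30000.0
```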

Claim Assessment

For stablecoin covers, claims are settled automatically, without requiring voting. For the remaining claims, Bridge Mutual's claim assessment is a three-step procedure. The initial phase lasts seven days, during which users can vote to accept or reject a claim based on their own research and the evidence of loss. Voting is only considered valid if at least 10% of all staked stkBMI participates in the voting process. In the second step, users must confirm their votes within seven days; those who fail to do so incur a 100% penalty on their staked BMI position. Claims are only accepted if at least 66% vote in favor of acceptance; otherwise, they are rejected. The final step occurs two weeks later, when the user who submitted the claim has four days to disclose the result of the vote.

Every user's Reputation Score begins at 1.0 and can range between 0.1 and 3.0. Underwriters voting with the majority are rewarded, while those voting with the minority suffer reputation loss, and those voting with the extreme minority get slashed by 10%. The reputation score is calculated based on the stkBMI amount used for voting and is updated with each claim voted on.
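A minimal sketch of these tallying and reputation rules: only the 10% quorum, the 66% acceptance threshold, and the 0.1-3.0 reputation range come from the text; the size of the reputation update is an assumption.

```python
# Sketch of the vote-tallying rules quoted above: a quorum of 10% of all
# staked stkBMI and a 66% acceptance threshold, plus a clamped reputation
# update. The update magnitude is assumed; the thresholds and range are not.

QUORUM, APPROVAL = 0.10, 0.66
REP_MIN, REP_MAX = 0.1, 3.0

def claim_outcome(votes_for: float, votes_against: float, total_staked: float) -> str:
    turnout = (votes_for + votes_against) / total_staked
    if turnout < QUORUM:
        return "invalid"  # quorum not reached
    return "accepted" if votes_for / (votes_for + votes_against) >= APPROVAL else "rejected"

def update_reputation(rep: float, voted_with_majority: bool, delta: float = 0.1) -> float:
    # Majority voters gain, minority voters lose; always clamped to [0.1, 3.0].
    rep += delta if voted_with_majority else -delta
    return max(REP_MIN, min(REP_MAX, rep))

print(claim_outcome(700_000, 300_000, 5_000_000))        # 20% turnout, 70% for -> accepted
print(update_reputation(1.0, voted_with_majority=True))  # -> 1.1
```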

This process, like all other Stakers-as-Insurers models, presents a conflict of interest; it requires on-chain analysis, smart contract security, and exploit expertise that regular users lack; it is very slow; and, in the end, it does not even provide a means for a user to dispute the decision.

Adoption and TVL

The Total Value Locked of Bridge Mutual grew from \$12.6 million in November 2021 to a peak of \$18.7 million on December 4, 2021, an increase of almost 50%. Since then, however, TVL has declined by 95%, dropping to just over \$800,000 by the start of October 2022. V2 was released in February, and the sharp decline and increase visible on the chart were due to a migration of funds to the new contracts; no one was forced to unstake their funds.

Bridge Mutual faced the challenging task of launching a new version during a bear market, when pool liquidity is low and TVL declines across all protocols.

It is intriguing to note that during the initial days of Bridge Mutual, Nexus's pool value decreased by over \$18M in TVL, which may indicate that Bridge Mutual gained market share at Nexus's expense. However, because the Nexus pool is composed of ETH rather than stable assets like Bridge Mutual's, it is difficult to draw conclusions about the cause of the decline, since it might just be due to ETH price volatility.

During the UST depeg event, Bridge Mutual did not offer stablecoin depeg insurance coverage. No user held a policy that would reasonably cover any of the events at the time, hence no claims were made. However, Bridge Mutual was offering Anchor insurance, which was the second most purchased coverage pool on the platform, accounting for 25% of all active coverage. People withdrew money during this period out of fear of being slashed, and the protocol lost a significant amount of TVL, dropping from \$3.8M to \$1.3M.

TVC

There is currently no publicly available information regarding the Bridge Mutual protocol's TVC or similar statistics.

Since there is no current information on the TVC, no conclusion can be drawn.

Revenue

There is currently no publicly available information regarding the Bridge Mutual protocol's revenue stream or similar statistics.

Since there is no current information on the revenue stream, no conclusion can be drawn.

Final Thoughts

The Reinsurance Pool is an interesting feature of Bridge Mutual: it accumulates the yield generated by investments in the Capital Pool and acts as an internal coverage provider, de-risking the protocol and increasing capital efficiency. However, it is advertised as bringing no additional expense for regular coverage providers, even though in other insurance protocols these investment returns would at least partly go to coverage providers. Effectively, coverage providers pay for the extra safety with the yield they don't receive.

The ability to trade and sell BMI NFT Bonds increases composability with other DeFi protocols, which strengthens the value proposition of providing coverage and increases overall capital efficiency.

The potentially poor user experience caused by a cover provider being unable to withdraw their capital from a protocol's pool could perhaps be mitigated by an incentive structure focused on small coverage pools. Without proper incentivization here, it is difficult for users to take advantage of their ability to create new coverage pools for uncovered protocols.

Regarding risk assessment for premium pricing, there is no direct evaluation of risk other than the utilization ratio, which is not always a correct measure of risk.

Bright Union

Bright Union is accelerated by Outlier Ventures and is often referred to as the "1inch for Insurance." It was introduced in September 2021 as a DeFi insurance aggregator that pools coverage from multiple markets, enabling users to compare offers, find the best option, and purchase coverage from one of the underlying trusted protocols without leaving the app. Bright Union is currently connected to Nexus Mutual, Solace, Unslashed, InsurAce, Ease, and Bridge Mutual, among others. Bright Union only offers coverage and premium services to DAO members.

To address liquidity fragmentation, the Bright Union team is developing a Bright Risk Index, which they hope will become the industry standard for insurance solutions in DeFi. Bright Union's goal is to create a centralized point where investors can provide liquidity, which the team can then distribute across multiple protocols and insurance pools as needed.

The protocol also developed an SDK that enables third-party DeFi applications to easily integrate into the DeFi insurance world in order to provide these services to their users.

Claim Assessment

Bright Union does not assess claims; the insurance provider is responsible for this process.

Tokenomics

BRIGHT is the utility token of Bright Union. BRIGHT tokens allow holders to share in protocol revenue, as part of the sales proceeds will be used to buy back BRIGHT tokens from the market. Users who stake tokens gain voting power and membership access, which grants priority access to products and eligibility for Bright Union’s premium services (coming soon). Staked tokens accumulate rewards while locked in the protocol, and there is a 7-day period to unstake them. The value proposition of BRIGHT seems limited at the moment and sell pressure is expected, as it is not clear how voting power or membership access will directly benefit the staker, and BRIGHT has no direct use, e.g., to buy cover.

Adoption and TVL

The Total Value Locked (TVL) of Bright Union started at \$76,000 on February 10, 2022 and peaked at almost \$208,000 on the 6th of June 2022. Since then, Bright Union’s TVL has been in steady decline, sitting at approximately \$112,000 at the start of October 2022. The low TVL reflects the protocol's aggregator nature: it does not need to own the underlying assets to pay out claims; only the underlying insurance protocols do.

TVC

There is currently no publicly available information regarding the Bright Union protocol's TVC or similar statistics.

Since there is no current information on the TVC, no conclusion can be drawn.

Revenue

There is currently no publicly available information regarding the Bright Union protocol's revenue stream or similar statistics.

Since there is no current information on the revenue stream, no conclusion can be drawn.

Final Thoughts

The rapid increase in the number of parties offering these new, complex, decentralized insurance products presents an opportunity for a single platform to aggregate and match supply and demand. As an aggregator, Bright Union is uniquely positioned to provide less crypto-savvy individuals with more varied investment choices via structured products. There seems to be no activity on Discord and we were not able to get answers from the team, so a deeper analysis was not possible.

Sherlock

Sherlock was released in September 2021 and offers code audits in addition to coverage. The goal of Sherlock is not to protect users from protocol hacks, but rather to protect the protocols themselves. With this approach, Sherlock can improve UX by eliminating the need for users to manage their own coverage for every DeFi protocol they interact with; instead, users simply use a DeFi protocol covered by Sherlock and are automatically covered. Sherlock has a team of blockchain security engineers who provide code audits for protocols, and any smart contract reviewed as part of an audit is protected against hacking. In order for a protocol to be covered by Sherlock, it must first pass a code audit and effectively address all vulnerabilities. Protocols desiring coverage pay monthly premiums to Sherlock, and in exchange, Sherlock will use its staking pool to refund hacks of up to \$10 million at covered protocols. When a protocol's coverage expires, it has 7 days to submit claims for exploits that may have occurred while the coverage was still active. However, once a protocol's coverage expires, Sherlock is no longer liable for new exploits that occur.

The pricing for code audits corresponds to an initial fixed payment based on nSLOC (number of Solidity lines of code) plus a prize pool to encourage audit contestants to compete. Moreover, an nSLOC above 6,000 indicates a technically complex codebase, so Sherlock has the final say on whether or not to include such smart contracts in its audits. Usually, 50% of the audit cost is paid in advance to reserve the audit slot, and the remaining amount is paid at the end of the audit in order to receive the audit report.
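A hypothetical sketch of this payment schedule, where the per-line rate and the prize pool amount are invented for illustration; only the 50%-upfront convention and the 6,000-nSLOC review threshold come from the description above.

```python
# Hypothetical sketch of the audit payment schedule described above: a fixed
# fee driven by nSLOC plus a contest prize pool, with 50% paid upfront to
# reserve the slot and the rest due on delivery of the report. The per-line
# rate and pool amount are invented; the 50%/6,000 figures are from the text.

NSLOC_REVIEW_THRESHOLD = 6_000  # above this, Sherlock decides case by case

def audit_quote(nsloc: int, rate_per_line: float, prize_pool: float) -> dict:
    if nsloc > NSLOC_REVIEW_THRESHOLD:
        print("note: codebase exceeds 6,000 nSLOC; inclusion is at Sherlock's discretion")
    fixed_fee = nsloc * rate_per_line
    total = fixed_fee + prize_pool
    return {"total": total, "upfront": 0.5 * total, "on_delivery": 0.5 * total}

print(audit_quote(4_000, rate_per_line=10.0, prize_pool=20_000))
# {'total': 60000.0, 'upfront': 30000.0, 'on_delivery': 30000.0}
```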

Sherlock Participants

The Sherlock ecosystem is composed of three components: Watsons, Protocols, and Capital Providers.

Watsons are security experts who evaluate the protocol's risk based on in-depth fundamental analysis. Other DeFi Insurance protocols, such as Nexus Mutual, base their risk cost on the capital deposited in the corresponding protocol’s pool, meaning that the risk is lower when there is more capital in the pools, under the assumption that LPs conduct due diligence on the protocols prior to staking in the pool. This method requires that LPs have in-depth knowledge of smart contract security in order to assess risk, which regular DeFi users do not possess, and causes prices to fluctuate based on the demand for coverage, which can result in mispriced policies.

Protocols are the ones requiring protection against exploits.

Cover Providers deposit USDC into staking pools for a fixed term of either six or twelve months, in exchange for the risk that up to 50% of their funds could be used to pay out for an exploit at a covered protocol. The staking position is represented by an NFT that can be redeemed once the lockup period expires, either to unstake or to restake the position. Cover Providers are rewarded with premiums from protocol customers, interest earned from investment strategies such as depositing stakers’ funds into yield strategies, and additional incentive rewards paid in SHER, Sherlock’s governance token. The amount of SHER distributed will be set by governance. The APY is currently 14.5%, and the team has informed us that 100% of all APY sources currently go to capital providers.

If an LP decides to unstake their position, the SHER rewards, USDC principal, and staking rewards are sent to the NFT owner's wallet. A further nice feature for these NFTs would be the ability to sell staking positions on secondary markets, so that users' capital is always available rather than locked up for 6 or 12 months, as well as the capacity to integrate with other NFT-based DeFi protocols.

Cover premiums

The cover premium for each protocol that completes a public audit contest equals 2% of the TVL being covered, capped by the maximum amount of coverage that Sherlock can offer (\$10M). The cover premium for each protocol that completes a private audit contest is 2.25%. To ensure that a protocol does not overpay for coverage, the monthly premium is updated based on an off-chain script that tracks the TVL being covered that month. A one-month upfront payment is required to activate coverage, but it is the protocol's responsibility to manage its payments using the Protocol Portal or by sending funds to Sherlock's multisignature wallet. Payments are made in USDC, and protocols are able to withdraw funds from their active balance as long as they maintain a minimum amount, currently 500 USDC. If the balance falls below that threshold, a bot will automatically, and for a fee, remove coverage for that protocol. There is always an amount equal to the last seven days of payment that the protocol cannot withdraw, so that Sherlock can respond if a protocol decides to cancel coverage.
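A minimal sketch of this premium rule, treating the 2%/2.25% figures as annual rates billed monthly (the annualization is an assumption) and the \$10M cap as quoted:

```python
# Sketch of the premium rule quoted above: 2% of covered TVL for public
# audit contests (2.25% for private), with coverage capped at Sherlock's
# $10M per-protocol maximum. Treating the rate as annual and prorating it
# monthly is an assumption; names are illustrative.

MAX_COVERAGE = 10_000_000  # Sherlock's per-protocol coverage cap, in USDC

def monthly_premium(tvl: float, private_contest: bool = False) -> float:
    rate = 0.0225 if private_contest else 0.02
    covered = min(tvl, MAX_COVERAGE)  # coverage is capped at $10M
    return covered * rate / 12        # annual rate charged month by month

print(monthly_premium(50_000_000))        # TVL above cap: 10M * 2% / 12 ~= 16,666.67
print(monthly_premium(4_000_000, True))   # 4M * 2.25% / 12 = 7,500.0
```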

Tokenomics

SHER is the governance token for the Sherlock protocol. Governance functions are planned to increase as the protocol matures, and will include the management of which Watsons are assigned to which protocols, among other important parameters. Currently, the token is used as an incentive for stakers and protocols, as well as compensation for the security team. Without any utility, this creates significant sell pressure, so the value of the token is expected to decrease. This is not expected to improve as governance responsibilities grow, since there seem to be no plans to attribute real utility to the token.

Claim Assessment

The claim assessment process is triggered when a protocol covered by Sherlock believes it has been exploited and submits proof, such as the block range of the exploit and the amount to be reimbursed. While most DeFi insurance protocols rely on token holders to decide whether claims should be paid, Sherlock uses UMA's Data Verification Mechanism (DVM) as the final step in determining claim payouts, reassuring coverage purchasers that they have access to an impartial party's decision regarding a claim.

Claim assessment in Sherlock is thus a two-step process based on committee votes and the UMA DVM. After a protocol submits a claim, the Sherlock Protocol Claims Committee (SPCC), composed of Sherlock core team members and security advisors, evaluates the nature of the potential exploit and maps it to the coverage terms agreed upon with that protocol to determine whether or not it should be approved. There is no economic incentive tied to payouts, so decisions made solely by parties associated with Sherlock are susceptible to bias. The second step allows the protocol to contest the SPCC's decision by staking a minimum dollar amount, escalating the claim to the UMA Optimistic Oracle for an impartial assessment. The DVM is a game-theoretic decision-making process among UMA token holders, who use the information provided by the protocol, the claims committee, and security experts unaffiliated with Sherlock to determine whether the claim should be paid. The decision is still made by humans (UMA token holders), but outsourcing this step to an impartial third party reduces bias. This UMA integration went live on mainnet in October 2021, allowing for a decentralized, public, quick, and fair claim process. You can read more about UMA DVM here.

Adoption and TVL

Sherlock's \$30 million guarded launch was bootstrapped through a whitelisted round and a pre-seed fundraise, ensuring liquidity from day one. TVL was relatively stable, remaining at around \$30 million until March 7, 2022. This means that Sherlock did not rely on stealing market share from other DeFi insurance protocols to bootstrap its liquidity at launch. Since then, Sherlock’s TVL has dropped significantly, to a low of \$9.48 million on 29 March 2022, before recovering to a range of \$20 to \$21 million from April to the start of October 2022. Staking is locked for 6- or 12-month periods, so capital providers can only unlock or re-stake their deposits every 6/12 months, hence the volatility in TVL seen in the chart.

Sherlock was launched in September 2021 but only started covering protocols in April 2022. Sherlock’s Total Value Covered (TVC) peaked at approximately \$34.9 million on the 25th of August 2022. Since then, Sherlock’s TVC has been relatively stable and is currently valued at \$25 million, with a small decrease this month. In general, the rule for the staking pool is that Sherlock cannot offer more than fifty percent of its TVL to a single protocol. The TVC decreased because protocols were exceeding the 50% capital limit as the staking pool shrank.

Sherlock is currently covering six protocols: Squeeth by Opyn (\$7M), Euler (\$7M), Lyra (\$7M), LiquiFi (\$2.5M), Sentiment (\$500K), and Hook (\$250K). Squeeth by Opyn, Euler, and Lyra comprise more than 81% of the current TVC and have less than 20 days of coverage remaining; the total value covered will therefore experience a significant decline, as these are the three most valuable protocols covered by Sherlock.

Nexus Mutual and Sherlock launched Sherlock Excess Cover on October 20, 2022, providing Sherlock coverage for an additional 25% of their underlying coverage, for a total of 75% coverage. This collaboration will assist Sherlock in expanding the amount of coverage it can provide to each protocol in the future. The team is currently not able to cover $10M for each protocol with the current TVL, but expects to be able to do so again with this partnership and by working to add more TVL to the staking pool.

Revenue

The protocol will charge fees on the premiums paid by protocol teams, but not in the near future: the protocol is backed by venture capital, and the team believes it can focus on profitability once the protocol grows. Currently, the revenue goes directly to capital providers. Claims can have a negative impact on revenue and TVL, but the protocol has had no claims to date.

Since there is no revenue stream, no conclusion can be drawn.

Final Thoughts

Given that code audits require significant time, expertise, resources, and manpower, one of Sherlock's challenges has been scalability: Sherlock can only expand as more protocols are covered, which requires more code audits prior to providing that coverage. To combat this, Sherlock recently announced a new code audit contests initiative, through which code auditors (the Watsons) can compete to provide audits for DApps that Sherlock wishes to underwrite.

Sherlock's theoretical foundation is the low probability that multiple maximum-payout events will occur within a short time span and drain the capital pool, leaving protocols without coverage. An objective, quantitative risk analysis could lend more security to this foundation. If a large payout reduces the capital pool by 50%, there will still be sufficient capital in the pool to cover the same amount of coverage for another protocol. Even knowing that the likelihood of the capital pool being drained by other protocols is extremely low, Sherlock's clients still find the coverage valuable. While this skin-in-the-game approach signals confidence in the audits performed, a large exploit could put Sherlock's entire value proposition at risk: its code audits could by proxy lose trustworthiness, which could cause staker funds to be removed from the capital pool, lowering the TVL and effectively diminishing Sherlock's ability to cover more protocols in the future due to a lack of funds.

Solace

Solace launched on Ethereum on October 19th, 2021 with an interface-first approach focused on ease of use. Since then, it has also launched on Aurora, Fantom, and Polygon.

Solace Portfolio Coverage (SPC) allows users to insure all their DeFi positions across multiple protocols with a single coverage. The concept behind portfolio insurance is that by aggregating risk by protocol category rather than measuring risk for each protocol, Solace can diversify risk and the total premium to cover a wallet ends up being less expensive than purchasing cover for each portfolio position.

Even if a user's portfolio positions change, Solace monitors the changes and dynamically adjusts the risk rate for the portfolio coverage to prevent overpayments and complex policy administration. It provides cover against re-entrancy attacks, minting vulnerabilities, trojan fake tokens, flash loan attacks, math errors, and proxy manipulation.

Solace is built on Protocol-Owned Liquidity (POL), a DeFi model directly influenced by the OlympusDAO model, aiming to remove the conflict of interest that exists in Stakers-as-Underwriters insurance models, like Nexus Mutual, during the claiming process. Using the POL model, Solace acquires its own underwriting capital to increase capital loyalty and remove underwriting risk from token holders.

The bonds program enables users to exchange assets for the SOLACE native token, which can be staked to earn rewards. Users can participate in underwriting by providing capital but without the risk of financial loss in the event of an exploit, and earn returns from policy sales and token emissions. Solace, unlike its competitors who leverage stakers' liquidity for policy sales, places the assets from the bond program in the Underwriting Pool to sell policies against. This pool is used to payout claims, and because the protocol manages the underwriting pool, stakers do not lose their locked $SOLACE if a hack occurs.

Cover Pricing

SPC uses a pay-as-you-go model that charges users based on the risk score of their portfolio. The premium can be calculated on a daily, weekly, or annual basis and is proportional to the risk and positions of the user's portfolio, ensuring that users do not overpay for insurance and only pay for the cover they really use.

Frequent payments are an appealing feature on L2s, given their near-zero gas fees. Users purchasing coverage on mainnet should be prepared for Ethereum's high fees once transaction volume increases again, so annual payments may make more sense there.
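A minimal sketch of how one annual risk rate can be prorated into the daily, weekly, or annual charges described above; the portfolio value and 4% rate are illustrative:

```python
# Sketch of the pay-as-you-go model: the charge is proportional to the
# portfolio's risk rate and the period covered, so daily, weekly, and
# annual payments all derive from one annual rate. Illustrative only.

def premium_due(portfolio_value: float, annual_risk_rate: float, days: int) -> float:
    """Charge for `days` of cover at the portfolio's current risk rate."""
    return portfolio_value * annual_risk_rate * days / 365

value, rate = 25_000.0, 0.04  # $25k portfolio at an assumed 4% annual risk rate
print(premium_due(value, rate, 1))    # daily charge   ~= 2.74
print(premium_due(value, rate, 7))    # weekly charge  ~= 19.18
print(premium_due(value, rate, 365))  # annual charge   = 1000.0
```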

Risk Assessment

The protocols covered are limited to the list of protocols in Zapper's API since the Risk Rating Engine utilizes Zapper's API to obtain protocol information and a wallet's protocol positions.

Solace's risk cost is based on four risk levels. The fee for a position is proportional to its inherent risk.

Solace was initially relying on the professional judgment of its risk management team, but currently each protocol is evaluated based on an algorithm that utilizes data from the Zapper API relating to current hacks/exploits and public information on protocols. Solace calculates the Risk Rate for the User Portfolio based on the following data for each protocol: Total Value Locked, Blockchain Network, Number of Users, Transaction Activity, Time Since Launch and Number of Audits.

This data is currently retrieved from DeFi Llama, Defiyield, Rekt News, and CryptoSec. Each attribute has its own weight coefficient in the estimation of total risk. Currently, weights are determined by the team, but governance will take over as more reliable data is aggregated. The algorithm generates a score based on the information available on the protocol, but the risk management team can modify it if it does not agree with the output. This occurred with Aave V3, for instance: the smart contracts were brand-new, so the algorithm assigned it a high risk rating. This override ability lets the team correct scores it disagrees with, but it also introduces a centralization point that requires trusting the risk management team not to manipulate the result when convenient.
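A minimal sketch of such a weighted score with a manual-override hook, where the weights and the feature normalization are assumptions; only the attribute list and the override behavior come from the text.

```python
# Sketch of a weighted risk score over the attributes listed above (TVL,
# network, users, activity, age, audits), with a manual-override hook for
# the risk management team. Weights and normalization are assumed.

WEIGHTS = {  # assumed weights; in Solace these are set by the team
    "tvl": 0.25, "network": 0.10, "users": 0.15,
    "activity": 0.15, "age": 0.15, "audits": 0.20,
}

def risk_score(features, override=None):
    """Each feature is pre-normalized to [0, 1], where 1 = riskiest."""
    if override is not None:
        return override  # team can replace the algorithmic output
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

aave_v3 = {"tvl": 0.1, "network": 0.2, "users": 0.1,
           "activity": 0.1, "age": 0.9, "audits": 0.2}  # brand-new contracts -> high age risk
print(round(risk_score(aave_v3), 3))       # algorithmic score
print(risk_score(aave_v3, override=0.3))   # manually corrected, as with Aave V3
```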

Nonetheless, in addition to evaluating each protocol, it is essential to comprehend the impact of DeFi category differences. DeFi projects may interact with each other, and hacking one project may have a significant impact on the others. Solace calculates the Inter-Category and Category Internal Correlation Tables based on statistical approaches that account for possible explicit and implicit risk connections between various DeFi categories (like lending, AMM, DEX, Derivative) and protocols.

The table presented above represents the Inter-Category relationships and is populated by experts based on their experience and research. The greater the value in this table, the greater the correlation between the categories.

Although this risk framework seems to present a transparent and thorough review of a portfolio's risk, some assumptions heavily influence the rating. The category in which a protocol is placed, for instance, has a big influence, especially through the Inter-Category relationships. While in many cases the categorization is obvious, in others it is not. Because Inter-Category relationships are analyzed so broadly, the correlations are necessarily averaged. For example, a lending protocol may globally have little to do with AMMs (a correlation of 0.1), yet a portfolio may contain two particular protocols from these categories with something crucial in common that makes them influence each other, e.g., an LP token from a pool in the AMM being accepted as collateral in the lending protocol. There could perhaps be other tables like this one that evaluate correlation along metrics other than category membership. Another example would be protocols run by the same team, where a team member is revealed to be a bad actor.

To mitigate this, there is a Category Internal Correlation Table with a similar output to the previous table but within the same category, also populated by experts within the Solace team. This does not cover the possible cross-category correlations mentioned above, but it is definitely a step in the right direction. The table shows the probability that there could be a negative impact on product B if product A is hacked. Currently the team is attributing low correlation values to all product pairs. The team recognizes that this is an assumption and that this coefficient should eventually be calculated by their rating engine.

The Solace team estimates that by aggregating risk loads by category, they can diversify the risk load so that the total premium ends up 10 to 20% cheaper. Deriving these values is not trivial, and a transparent calculation of this estimate would be interesting to see. It is also plausible that the isolated risk calculations behind each individual premium would have to be more conservative, as there would be no other risks to balance out the need for a payout.
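One way to see why aggregation can produce such a discount is a variance-style combination of risk loads under pairwise correlations below 1. The square-root aggregation rule and the correlation values below are assumptions, not Solace's disclosed model:

```python
# Sketch of why category-level aggregation can discount the total premium:
# with pairwise correlations below 1, a variance-style combination of risk
# loads is smaller than their straight sum. Correlations and the square-root
# rule are assumed; only the 10-20% discount range comes from the text.

import math

def aggregate_risk_load(loads, corr):
    """Combine per-position risk loads using a correlation matrix."""
    total = 0.0
    for i, li in enumerate(loads):
        for j, lj in enumerate(loads):
            total += corr[i][j] * li * lj
    return math.sqrt(total)

loads = [100.0, 80.0, 60.0]        # standalone annual risk loads per position
corr = [[1.0, 0.70, 0.60],         # assumed fairly correlated DeFi categories
        [0.70, 1.0, 0.65],
        [0.60, 0.65, 1.0]]

standalone = sum(loads)            # 240.0 if each position is priced in isolation
combined = aggregate_risk_load(loads, corr)
print(f"{combined:.1f} aggregated vs {standalone:.1f} standalone "
      f"-> {1 - combined / standalone:.0%} discount")  # ~12%, in the quoted range
```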

The risk rates are not disclosed on-chain, but they can be accessed at https://risk-data.solace.fi/series. Each week, the risk management team updates the series data to reflect the most recent Zapper integrations.

Tokenomics

To pay out claims, Solace uses an underwriting pool, from which it takes money to cover a hacked protocol. As described above, this pool is funded with SOLACE bonds from users who want to provide their assets in return for staking yield. In general, the motivation for a user to purchase a bond by sending assets into the Underwriting Pool is to get SOLACE at a discount, currently 20%. However, SOLACE has no practical utility for now, so it will face sell pressure regardless; buying at a discount is not particularly useful if the value of the token is expected to decrease as users sell their rewards.

Claim Assessment

Currently, an exploit is recognized via a DAO vote to pay out policyholders with a position that experienced a hack. Solace does not want the DAO to undertake claim assessment, because the team is aware of the inherent conflict of interest. It had intended to implement a Parametric Automated Claims Assessment System (PACLAS) that would quantify a loss event using on-chain data and invariants, but it is now transitioning to a Kleros-based claim assessment. The team will provide additional information on this topic in the coming months.

Adoption and TVL

TVL dropped dramatically, from \$4 million to values below \$1 million. This sharp reduction was primarily due to the DeFi Llama integration, as the team was asked to remove some asset sources. The TVL is composed solely of the SOLACE/USDC pool and staking. There are also macroeconomic conditions to consider, as April was a month in which a significant amount of liquidity left the crypto economy.

Ethereum has the largest underwriting pool, at \$253K, followed by Aurora, Polygon, and Fantom. No claims have ever been paid, because no user has experienced a hack on the covered protocols, so claim payouts have had no negative impact on the TVL.

TVC

Currently, there are 875 active covers. The chart shows that most policies are purchased for protocols deployed on Polygon, followed by protocols on Ethereum. Only two policies were sold in the last 30 days, and seven in the last 60 days. Solace is still building and improving its system, for example by decentralizing its claim assessment to avoid conflicts of interest, so its growth is still extremely slow.

There is currently a safety mechanism ensuring that the total amount of coverage is always less than the capital in the Underwriting Pool, to avoid insolvency. As the probability of all positions being exploited decreases with increased underwriting capital, Solace intends to relax this as it expands.

The current underwriting pool value is \$312K, and the current Cover Limit is \$310K. This is the security mechanism mentioned above at work: if the amount of coverage approaches the reserve's capacity, the protocol prohibits the sale of new policies.
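A minimal sketch of that solvency guard, using figures close to the ones quoted above:

```python
# Sketch of the solvency guard described above: refuse to sell a new policy
# whenever it would push the total cover limit past the underwriting pool's
# capital. Figures mirror the ~$312K pool / ~$310K cover limit in the text.

def can_sell_policy(new_cover: float, active_cover_limit: float, pool_capital: float) -> bool:
    return active_cover_limit + new_cover <= pool_capital

print(can_sell_policy(1_000, 310_000, 312_000))  # True: 311k <= 312k
print(can_sell_policy(5_000, 310_000, 312_000))  # False: sale blocked
```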

Revenue

Currently, the revenue from underwriting activity flows mostly to staked SOLACE, with a small fee distributed to risk strategists and risk managers. The protocol takes 5% of all bonds for the DAO, to pay back contributors and the core team. Premium prices range from 2-8% of the investment per year. As Solace scales up the architecture, a small fee will be distributed to the DAO treasury.

As a staking incentive, Solace was previously rewarding 10M SOLACE per chain; however, the incentive has been changed to 10M SOLACE per year for all four chains. Since Solace is heavily dependent on the concept of SOLACE rewards to incentivize staking, this inflates the token supply without generating intrinsic value, and the Solace team must be careful not to spend more on rewards than the insurance policies are generating in revenue.

There is no public information on Revenue values, so no conclusion can be drawn.

Final Thoughts

The inflationary mechanisms of SOLACE are a disadvantage of this model. There is a growing consensus that staking alone is a poor token design: it inflates the token supply without generating intrinsic value, and, if left unchecked, the token price may fall to compensate for the new supply. As investors in DeFi 2.0 may recall from the “(3,3) season”, this model was not particularly effective. Plans are already in place to increase utility by accepting SOLACE as a method of coverage payment.

However, the idea of using bonds to acquire protocol-owned liquidity, effectively taking risk away from users, is very interesting. In terms of risk management for Solace, this has the great advantage that users cannot withdraw value from the underwriting pool: the pool size doesn't change as users deposit and withdraw, but is ever-growing unless claims must be paid out. Naturally, the total value in the underwriting pool is still volatile, depending on the assets held by the protocol, but this makes it simpler to guarantee that the necessary funds always exist to pay all obligations.

Steady State

As of October 2022, Steady State is not yet live. Currently in development, the Steady State protocol will be ruled by in-depth quantitative data analysis and complex risk modeling, delivered via automated smart contracts and supported by a governance DAO and a fully liquid secondary market. Using smart contracts to implement this solution will remove bias, increase efficiency and speed, and ensure immutable claims processing.

Coverage pools represent the insurance collateral for any given protocol or platform, allowing DeFi protocols and centralized finance (CeFi) platforms to tailor an insurance policy to their specific needs. Multiple protocols can join forces to create index pools in addition to the standard coverage pools. Index pools will provide greater collateralization and lower policy costs for protocols, while reducing the risk for capital providers.

Steady State hopes to automate and make transparent their claims process by integrating with Chainlink Automation, which enables the conditional execution of smart contract functions that evaluate transaction data, relevant addresses, and oracle price feeds to determine when a covered event has occurred.

The team has been developing the Risk Analysis Database (RAD) to preserve crypto data transparency standards and generate machine-learning-based ratings for DeFi protocols. The primary function of the RAD is to collect information on attacks against DeFi protocols, and it will be available to all parties, including other DeFi insurance platforms. The collected data is segmented and partitioned across datasets that identify the type of risk event, the date, the USD value lost, the protocol type, and the duration of the protocol's operation. This data can be processed by machine learning algorithms to identify risk factors and generate more precise risk ratings; the same idea underlies InsurAce's risk models. Their last announced collaboration will allow Steady State to explore Flourishing Capital's proprietary AI technology in developing the RAD.

The Steady State insurance product is not live, it has not even been deployed on testnet, and the results of their sophisticated and automated risk model have not been disclosed. The product attempts to address the current bottlenecks in decentralized insurance, but it is difficult to predict its success without seeing the market's reaction and with so little information available.

An opinion on the current DeFi Insurance Landscape

There are few insurance protocols in the DeFi ecosystem, and much more of DeFi's locked TVL needs to be insured to increase the secured value in the space.

DeFi valid claims are relatively rare but extremely severe in terms of value. According to Chainalysis, at least \$718 million had been stolen in October alone across 11 different hacks, bringing the annual value to over $3 billion across 125 hacks. This puts 2022 on track to set a record for the overall amount of value stolen in the crypto space.

It's ironic, but some insurance protocols were also hacked in the past, like the Cover protocol in December 2020. Cover experienced an exploit in one of their smart contracts that contained an infinite mint vulnerability, causing the total supply of tokens to grow by 48 quadrillion percent. The project chose to shut down almost a year later, in September 2021, because the TVL plunged after the attack, and the protocol never restored LPs' faith. TVL is critical for an insurance protocol because it determines the capacity limit to sell new cover policies. Thus, with limited TVL, protocols can hardly fulfill their value proposition and become useless.

At the time of the hack, Cover had \$45 million in TVL and was the second largest insurance protocol by TVL, following only Nexus, which had $100 million. At the time, insurers accounted for approximately 0.6% of the TVL in DeFi, highlighting the enormous possibility of securing digital assets.

As previously described, existing insurance protocols have also failed to attract liquidity following the Terra collapse and amid the current macro situation.

Nexus launched in 2019 with a Stakers-as-Underwriters business model, KYC requirements, and single-protocol smart contract coverage. It is still the most significant player in terms of TVL. Since then, many protocols have attempted to innovate and address specific DeFi insurance challenges, such as risk assessment, cover pricing, fragmented liquidity, asset management, and claim assessment.

The first approach to risk assessment was to associate risk with the value supplied by capital providers to each pool (each corresponding to a protocol). This idea assumes that more value staked represents less risk and relies on stakers conducting their due diligence before providing capital to the pools, which requires a level of security expertise, and an appetite for financial risk, that most DeFi users lack. Bridge Mutual proposed a novel approach of determining the risk cost from utilization ratios: a high utilization ratio indicates that many users are willing to purchase insurance for a project while few are willing to provide coverage, implying that the project is risky. However, because these pools charge higher premiums and thus offer a higher APY, the utilization ratio may fall, at which point the metric no longer reflects perceived risk but rather a high-yield opportunity. Later, Ease proposed a different approach in which users share risk among themselves at the cost of not being fully reimbursed during an exploit; in this approach, the protocol team performs due diligence on a protocol before adding a vault, which is a centralized action.

Risk assessment is extremely difficult to decentralize and should ideally become automated solely based on data. It is not easy to achieve this; Steady State is attempting to develop an algorithm, but the lack of information on-chain remains a barrier to training precise machine learning models to predict the correct risk cost per asset class. InsurAce also uses machine learning models to calculate traditional actuarial loss functions, but these calculations are kept off-chain and are not verifiable.

In terms of coverage pricing, Nexus began with a basic version: pricing coverage proportional to the risk cost of the protocol, the coverage amount, and the coverage duration. Pools with higher value staked must charge a lower premium because they are considered safer. However, the incentives for capital providers to invest in a specific pool are tightly linked to the APY they expect to receive, which may cloud their risk assessment judgment. As a result, the question arises as to whether the value staked against a specific protocol is sufficient as the sole metric for measuring risk. Later, InsurAce proposed dynamically pricing coverage based on supply and demand, using machine learning models to estimate parameters typically used in traditional insurance; however, the available data seems very limited for employing such models. Armor and Solace both implemented a pay-as-you-go model. Armor received premium payments by block, but the team decided to discontinue this feature due to Ethereum's high fees for its users. On Solace, users can choose their payment period (daily, monthly, or annually), but those who choose a shorter period will most likely face higher fees. Risk Harbor is taking a very innovative approach by defining the price with an AMM model. However, no information could be found regarding how the default occurrence probability needed for the risk cost is obtained, nor how the risk cost is integrated into the cover pricing, so it isn't easy to analyze its viability.

Cover pricing is an area for improvement in the DeFi insurance space as it would ideally be decentralized, automated, and capable of providing the appropriate cover premium based on entirely traceable on-chain information. Models like this face the same challenges that automated risk cost assessment does.

Nexus' single-protocol coverage insurance had fragmented liquidity and lacked capital efficiency. InsurAce quickly improved this by introducing portfolio-based coverage insurance, and Ease developed a mechanism allowing several vaults to share risk.

DeFi Insurance protocols, just like traditional insurance companies, develop investment strategies to manage their capital more efficiently, generate additional revenue to avoid insolvency, and reward capital providers with higher returns. However, asset management is a sensitive topic in DeFi because it is a double-edged sword: DAOs are not solely composed of asset management experts, and the community will not feel comfortable with a centralized investment approach. Most protocols have an investment arm that proposes investment strategies, but the DAO must approve the proposal. Furthermore, while investments bring clear benefits for the stakeholders, they also add risk, which is what users are trying to mitigate by purchasing insurance.

Finally, most DeFi insurance protocols, such as Nexus, InsurAce, and Bridge Mutual, rely on a biased claim assessment process based on the idea that stakers should vote on whether or not to pay a claim. If a large event occurs in the underwriter model, underwriters are incentivized to vote against policyholders because their profits are at risk. This is an apparent conflict of interest, and Unslashed was the first protocol to decentralize claim assessment via Kleros, although a human must still submit the claim. Risk Harbor has implemented an automated claim evaluation procedure that monitors the evolution of public system state variables directly on-chain to determine whether a claim should be paid out. The disadvantages of this method are that users must still make the claim, and the automation can only be implemented for parametric insurance in which all parameters are predefined. However, the process is impartial, scalable, and far quicker than governance-based assessments. With Ease, the need for claims disappears altogether through its creative use of reciprocally-covered assets.

Exploit detection should ideally be triggered automatically, with claim payouts executed via smart contracts. Steady State is attempting to accomplish this by integrating with Chainlink Automation, but there is still little information available, and the protocol is not yet operational. An exploit oracle could be a solution, serving as a source of truth for all DeFi protocols and users on whether an exploit occurred, which contracts were exploited, the assets affected, and the corresponding wallet addresses.

An insurance company should be able to remain healthy as long as it effectively prices risk, resulting in high premium revenue and low payouts. When this risk is not effectively measured, an insurance company may face significant insolvency risks if large payouts occur at the same time. Insurance is used to avoid insolvency in the event of a large exploit. However, the majority of the investment strategies that insurance protocols use are DeFi-based, exposing users to the same kind of protocol risk that they are supposedly shielding them from. Market volatility is another factor to consider in these investments. Nexus Mutual, for example, is currently losing money due to poor investment returns. The Terra Collapse had a significant impact on protocol TVL, with protocols fighting for their lives long after it happened. Ease is dealing with this by not fully reimbursing the user, which means that in the event of an attack, the likelihood of not receiving any funds is extremely low. Protocols developing investment strategies risk running out of funds to pay out claims to everyone, which means that some users may not receive any compensation. Better security mechanisms are required to ensure sufficient funds in insurance pools.

Nexus Mutual remains the industry leader in DeFi Insurance. Having been the first kid on the block in the decentralized insurance space, it still enjoys a first mover advantage. We highlighted that numerous obstacles must be addressed for an insurance protocol to bring innovation and succeed. Whichever decentralized insurance protocol earns market trust and market share by enabling scalable underwriting without fragmented liquidity, transparent and decentralized risk assessment and premium pricing, and continuous payout of valid claims will become the market leader in this sector. We are looking to help protocols achieve this goal, to ultimately help boost DeFi adoption. If you are working on any existing challenges in the DeFi Insurance space, we would like to hear from you.

Please get in touch with us at info@threesigma.xyz.