Why you need modelling
Firstly, what is modelling?
In crypto, modelling is the practice of running a project’s token economy through simulation software to simulate the flow of value, in particular the interplay between buy and sell pressure. This lets projects see what happens in the economy over time - how many tokens are on the market, where they accumulate, how much money users spend on which activities, what happens to the token price, and so on.
There are three main types of modelling used in Web3 - deterministic, stochastic, and agent-based - with the most common software for each being Excel, Machinations, and cadCAD, respectively. There are other tools out there, but they’re not as widely used.
Here’s a high level summary:
Deterministic - Excel: model outcomes given set input conditions. For example, seeing emissions of tokens over time, or calculating valuations based on predicted revenue.
Stochastic - Machinations: model randomness and recursive systems in aggregate. For example, seeing how much TVL a project can accrue on average given randomized user demand conditions that are a function of token price.
Agent-based - cadCAD: model randomness and recursive systems with the ability to model out specific agents and their impact. For example, seeing how users with a particularly risky leveraged trading approach affect liquidity.
Let’s dive a bit deeper.
Deterministic - Excel (<$10,000)
Similar to how traditional companies conduct financial modelling, but with a focus on tokenomics and Web3 - emissions, valuations, PnLs, distributions. This lets projects quickly see the results of their tokenomics given a set of starting parameters.
Deterministic means the model will always yield the same results based on set inputs, with no “edge cases” or variations. Useful for high-level overviews of tokenomics and for summarising key data (like token supply in any given month).
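For illustration, here’s a minimal sketch of the idea in Python rather than Excel (the logic is identical); the allocations, cliffs, and vesting periods below are entirely hypothetical:

```python
# Minimal deterministic emissions model: the same inputs always yield the
# same outputs. All allocations, cliffs, and vesting periods are hypothetical.

TOTAL_SUPPLY = 1_000_000_000

# tranche: (share of total supply, cliff in months, linear vesting in months)
TRANCHES = {
    "team":      (0.20, 12, 24),
    "investors": (0.15, 6, 18),
    "community": (0.40, 0, 36),
    "treasury":  (0.25, 0, 48),
}

def circulating_supply(month: int) -> float:
    """Tokens unlocked by a given month, per the cliffs and linear vesting."""
    unlocked = 0.0
    for share, cliff, vesting in TRANCHES.values():
        if month <= cliff:
            continue  # still within the cliff, nothing unlocked yet
        progress = min((month - cliff) / vesting, 1.0)
        unlocked += share * TOTAL_SUPPLY * progress
    return unlocked

for m in (1, 6, 12, 24, 48):
    print(f"Month {m:>2}: {circulating_supply(m):>13,.0f} tokens circulating")
```

Run it twice and you get the same numbers twice - that determinism is the strength (auditability) and the limitation (no edge cases).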
Stochastic - Machinations ($10,000-$100,000+)
Initially designed for game economies, Machinations has since been adopted by Web3. Architects can build out the full economy, consisting of all actors and ecosystem fragments (like liquidity pools), encode the relationships between them, integrate randomness, and click “run” to watch how value flows within the ecosystem in real time.
The main advantage of Machinations over Excel is the dynamic nature of the models that allows you to model randomness and recursion.
- Randomness is useful for modelling complex systems because we do not live in a deterministic world - we cannot assume users will spend $100 on every transaction. We need to provide a range with some sort of distribution, and the model can randomly choose a number from that range at each step.
- Running the randomised model 250 times allows us to see the averages of what would happen, but it also allows us to spot edge cases and figure out what went wrong in them.
- For instance, if the average user spend sits at the minimum of $5 for six months in a row, our token price might fall below a certain point, leaving our protocol undercollateralised and liquidating every position. In other words, imploding. This may have been the scenario that played out on the 249th run.
- Recursion is when a function references itself as part of its own execution. A deterministic model cannot simulate recursion, whereby an output impacts the input, which impacts the output again.
- A simple example of recursion is how token price affects the number of buyers: at $1 there may be 400 people buying, but at $100 there are only 4. This, subsequently, impacts the token price, affecting the number of buyers, impacting the token price, and so on. Recursion.
Whilst it may take millions of hours, randomness can be simulated manually in a deterministic system (by modelling out every single number combination within reason). Recursion, however, simply cannot be.
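To make both concepts concrete, here’s a toy sketch in Python (Machinations is visual, but the underlying logic is the same); every parameter below is made up for illustration:

```python
import random
import statistics

# Toy stochastic model with recursion: user spend is random, and token price
# feeds back into the number of buyers. Every parameter here is made up.

def run_once(seed: int, months: int = 24) -> float:
    rng = random.Random(seed)
    price = 1.0
    for _ in range(months):
        spend = rng.uniform(5, 100)        # randomness: spend per user varies
        buyers = max(1, int(400 / price))  # recursion: $1 -> 400 buyers, $100 -> 4
        buy_pressure = buyers * spend
        sell_pressure = rng.uniform(8_000, 12_000)
        price = max(0.01, price * (1 + (buy_pressure - sell_pressure) / 100_000))
    return price

# Monte Carlo: run the randomised model 250 times to see averages AND edge cases.
finals = [run_once(seed=i) for i in range(250)]
print(f"mean final price: ${statistics.mean(finals):.2f}")
print(f"worst run (edge): ${min(finals):.2f}")  # the '249th run' implosion case
print(f"best run:         ${max(finals):.2f}")
```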
Crucially, the purpose of this is to understand the following:
- where the vulnerabilities in a particular economy lie,
- what their chances of happening are,
- what causes them, and
- what their effects are.
This grants the project the ability to see into the future and design its economy accordingly, understand the risks associated with any decision, and prepare strategies to mitigate problems if they occur.
This type of modelling is inherently far more useful for any sort of data-gathering, from understanding the chances of particular scenarios to the specific causes and effects of certain edge cases.
Agent-based - cadCAD ($50,000-$Ms)
Agent-based modelling involves modelling out each user in an ecosystem, which allows for an even deeper understanding of behaviours and potential outcomes by fragmenting actors into individuals rather than an average.
See, stochastic models work in aggregate, meaning you model out the averages of each behaviour (albeit with preconditioned randomness) and get averages back in the results. For example, inputting an average buy pressure of $500 per user ±$450, with a Gaussian distribution, gives us an average TVL of $5M in two years, ±$4M. We can see edge cases, such as the average being $50 per user six months in a row, and we can see the subsequent effects of that beyond TVL in the model - such as the effect on token price.
Agent-based models, however, model out each individual user with their preconditioned behaviours (random or not), allowing for much deeper granularity and relationship building.
It is, to some degree, a bottom-up approach: you define the inputs of individuals and conclude macro patterns from them. Stochastic modelling, by contrast, defines average macro patterns, which in turn define the inputs of individuals merely via randomness.
I.e. you simulate 100 people, each with their own risk appetite and their own USDC in their MetaMasks, rather than the average of 100 people with ±$500 USDC.
As with randomness above, this can be replicated manually with stochastic modelling by painstakingly creating every single individual in the model, but this, whilst not taking millions of hours, will still take a lot of them.
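Here’s a minimal agent-based sketch, again in plain Python rather than cadCAD, with hypothetical agents and behaviours:

```python
import random
from dataclasses import dataclass

# Minimal agent-based sketch: 100 individual users, each with their own USDC
# balance and risk appetite, instead of one aggregate average. Hypothetical.

@dataclass
class Agent:
    usdc: float           # USDC in their MetaMask
    risk_appetite: float  # 0 = never buys a dip, 1 = always buys a dip

rng = random.Random(42)
agents = [Agent(usdc=rng.uniform(50, 1_000), risk_appetite=rng.random())
          for _ in range(100)]

price, tvl = 1.0, 0.0
for month in range(24):
    price *= rng.uniform(0.7, 1.3)  # exogenous price noise for illustration
    for agent in agents:
        # Risk-hungry agents buy when the price dips; cautious ones sit out.
        if price < 1.0 and rng.random() < agent.risk_appetite:
            deposit = min(agent.usdc, 100.0)
            agent.usdc -= deposit
            tvl += deposit

print(f"TVL after 24 months: ${tvl:,.0f}")
```

The key difference from the stochastic sketch is that every agent carries its own state, so you can ask what the riskiest 10% of users do to liquidity, rather than only reading averages.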
Secondly, why do projects need modelling?
Modelling is used to simulate your economy in a vacuum, to understand the flow of value and logic within the ecosystem itself, and to comprehend deeply how certain parts impact others.
Projects need modelling to:
- visualise cause and effect,
- stress-test assumptions,
- minimise risks,
- find equilibriums, and
- plan for black swan events.
Understanding how your economy behaves as a whole, the chances of different scenarios occurring, the causes and effects of certain conditions, how different actors cause butterfly effects and what the end results are, and everything else in between, is very useful before launch: it allows you to ensure everything works as needed on a good day, and to prepare for what to do if some edge case occurs on a bad day.
Once you launch, any problem must not only be fixed under time constraints but is also compounded by immense pressure from all of your token holders. Modelling out these scenarios and their solutions beforehand makes everything smoother to deal with.
For example, as showcased in the two charts below, we tested different airdrop scenarios for one client. We modelled multiple airdrops with different vestings to see what would happen to the client's token price, ceteris paribus. We concluded that unlocking the entire airdrop at TGE was worse for mid-term price performance, given their current user data and tokenomics.
Whilst it’s obvious that lower TGE unlocks allow the price to pump higher (as it did in Chart A), what’s less obvious is what happens to the token afterwards.
In Scenario A, the airdrop has a 4-month vesting. In Scenario B, it unlocks entirely at TGE, but we countered the total emissions by adding a 6-month cliff to another tranche in the tokenomics with the same allocation (Chart B).
Evidently, the Scenario B token still performs worse, partially due to not being able to amass enough USDC liquidity at a higher price point.
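To illustrate the mechanics (not the client’s actual model), here’s a simplified supply-side sketch of the two scenarios, with hypothetical allocation figures:

```python
# Simplified supply-side view of the two scenarios. The real model also
# simulated demand, liquidity, and price; here we only compare circulating
# supply from the two tranches involved. All figures are hypothetical.

AIRDROP = 50_000_000
OTHER_TRANCHE = 50_000_000  # same-sized allocation used to offset emissions in B

def scenario_a(month: int) -> float:
    # Airdrop vests linearly over 4 months; the other tranche unlocks at TGE.
    return AIRDROP * min(month / 4, 1.0) + OTHER_TRANCHE

def scenario_b(month: int) -> float:
    # Airdrop unlocks fully at TGE; the other tranche gets a 6-month cliff.
    return AIRDROP + (OTHER_TRANCHE if month >= 6 else 0.0)

for m in range(0, 13, 3):
    print(f"Month {m:>2}:  A = {scenario_a(m):>11,.0f}   B = {scenario_b(m):>11,.0f}")
```

In this toy version, both scenarios converge on the same circulating supply by month 6, but along very different paths - and it is the path that the price in the two charts reacts to.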
A prime example of what modelling would have been great for is Wonderland ($TIME), a massive Ponzi, which had a very oversimplified Prisoner’s Dilemma approach to its model - (3,3), iykyk - as long as people didn’t sell, everyone would keep earning massive 80,000% APYs. So certain were they of their strategy that the calculator on their website featured a “lambo counter”.
Alas, had they modelled out what would happen if even a small fraction of people sold, and how the decreased APY would cause a death spiral, they would have gotten a chart similar to the one below. That foresight would have allowed them to prevent the death of the relatively vibrant ecosystem they were building.
Yes, we know Wonderland was a Ponzi, but the rest of the ecosystem seemed to have real users, which is rare for this market.
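For illustration, here’s a rough sketch of that feedback loop; the mechanics and numbers are simplified stand-ins rather than Wonderland’s actual design:

```python
# Rough sketch of the (3,3) feedback loop a model would have surfaced: a small
# fraction sells, price drops, the APY falls, which triggers more selling.
# Mechanics and numbers are simplified stand-ins, not Wonderland's design.

price = 1_000.0
apy = 80_000.0  # percent
stakers = 10_000

for week in range(1, 21):
    # Selling accelerates as the APY decays from its advertised level...
    sell_fraction = 0.02 + max(0.0, (80_000 - apy) / 80_000) * 0.10
    price *= 1 - sell_fraction                     # ...pushing the price down...
    apy *= 1 - sell_fraction * 2                   # ...which shrinks the yield...
    stakers = int(stakers * (1 - sell_fraction))   # ...which drives more exits
    if week % 4 == 0:
        print(f"week {week:>2}: price ${price:>7.2f}, "
              f"APY {apy:>9,.0f}%, stakers {stakers:,}")
```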
Whilst this specific example is quite esoteric, the exact same logic applies to the likes of Luna and UST, and to other protocols that weren’t stress-tested before being taken to market, leading to severe events that wiped them out almost instantaneously.
Moreover, although certainly not as dramatic as the above, most charts in crypto, from the small caps to the behemoths like Filecoin, look very similar. In almost every single case, performance like this can be prevented by modelling out the economy, seeing the interplay between buy and sell pressure, and redesigning the systems until the economy is sound.
Note: do not model specific prices
A key thing to point out is that modelling shouldn’t be used to predict price over the long term; it is grossly inaccurate in the grand scheme of things. There are effectively infinite factors to consider, and even the tiniest inaccurate assumption will compound very quickly until the entire model is wrong.
- Factors range from the macro crypto environment, to geopolitics, to market maker strategies, to marketing campaigns, to user sentiment, to product. It is impossible to consider everything, or even come close to it.
The most expensive model, with as many factors considered as possible, is entirely wiped out by one Tweet from Elon about $BTC no longer being accepted for Teslas.
Simply put, whether your token price is $4.36 or $7.43 in the third month according to the model is not as important as looking at what happens to your users, liquidity, lending pools, etc. if you change your staking APY from 6% to 12%. That increase in APY will affect both $4.36 and $7.43 in the same manner, and that effect is what the model is used to assess.
As in our example above, it is not the price itself that we are looking at, but the difference between the price in Chart A and the price in Chart B, showcasing clearly that Chart A has better price performance, regardless of what the price actually is.
It is not always obvious how the price, users, or any other factor in the economy will behave, and using modelling we can find not only the average outcomes but also the edge cases that might cause contagion and kill the project.
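As a rough illustration of this point, here’s a toy sketch that runs the same made-up model under both APYs and compares only the relative difference:

```python
import random
import statistics

# Why we compare scenarios instead of trusting absolute prices: run the SAME
# toy model under two staking APYs (same random seeds, so like-for-like) and
# look only at the relative difference. Model and APY effect are made up.

def final_price(apy: float, seed: int, days: int = 90) -> float:
    rng = random.Random(seed)
    price = 5.0
    for _ in range(days):
        drift = rng.gauss(0.0, 0.02)  # market noise we cannot predict
        lockup = apy / 100            # hypothetical: higher APY locks supply
        price *= 1 + drift + lockup
    return price

def mean_final(apy: float, runs: int = 250) -> float:
    return statistics.mean(final_price(apy, seed=i) for i in range(runs))

low, high = mean_final(0.06), mean_final(0.12)
print(f" 6% APY, mean month-3 price: ${low:.2f}")
print(f"12% APY, mean month-3 price: ${high:.2f}")
print(f"relative effect of doubling the APY: {high / low - 1:+.1%}")
```

The absolute dollar figures are meaningless; the relative gap between the two scenarios is the output you actually act on.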
Do projects need to outsource this?
Nobody ever needs to outsource anything, but the opportunity cost of not doing so is time.
The time it takes to learn how to use these tools in the first place, let alone effectively and accurately, combined with the time spent debugging the models, is already in the hundreds to thousands of hours.
On the flip side, hiring someone to do this in-house beats learning it yourself, but running these models typically isn’t a full-time engagement the way sales or development is.
On top of that, these models are only as good as their assumptions, and those assumptions require a lot of industry, economics, mathematics, and sometimes coding knowledge. Whilst a project can gather all of the skills and information required to create a successful model, the value added by the broad and deep knowledge of an agency that has modelled economies for a multitude of projects is unquantifiable.
Modelling may cost $10,000+, but it can save your project from catastrophic failure worth 10,000x $10,000.
We are not obtuse, and we understand there is inherent bias in the above words, written by a tokenomics consultancy that offers modelling. But we know this is no different to choosing to hire a professional engineer to build your dream home instead of going to university for four years to study engineering, slapping your thigh, and saying, “Right then, let’s get started.”
Enhance Your Tokenomics with Simplicity Group
At Simplicity Group, we specialise in expert tokenomics consultancy, offering tailored solutions for your project's unique needs. Whether you're in the early stages of tokenomics design or looking to optimise your existing token economy, our team of experienced consultants is here to guide you.
We understand the complexities of balancing investor interests, creating sustainable token ecosystems, and navigating fundraising tranches. With a proven track record of helping over 50 projects, our consultancy services simplify and strengthen your tokenomics strategy for long-term success.
Explore how we can assist you by visiting our website, checking out our blog, or connecting with us on Twitter and Telegram.