Not all utility tokens are created equal
Or why we disagree with John Pfeffer’s thesis
Last December, John Pfeffer put forth a thoughtful analysis of the long-term viability of utility cryptoassets. Both in that paper and in a shorter, more recent contribution, he has declared himself increasingly skeptical about the long-term value of utility tokens, to the point of arguing that few, if any, utility cryptoassets will be viable in the long run.
In this article we aim to provide an additional perspective on the utility token subject, contesting some of Pfeffer’s assertions and offering a more nuanced conclusion:
Not all utility tokens are created equal, and thus treating them as a single category sharing the same basic features greatly misses the point and is ill-advised.
Pfeffer has argued that economic agents will treat utility tokens in one common way: minimizing their holdings and acquiring them only at the instant they need to use them. He has written that protocol-land will be frictionless, interoperable, forkable and open-source, so users will not need to tie up capital in stocks of utility tokens. The implication is that the network value of a utility protocol will converge on or near an equilibrium that always represents a fraction of the actual cost of the computing resources consumed to maintain the network, i.e. a cost-only approach.
This takes us to the fundamental issue at stake: what is an adequate methodology for forming a prognosis on the future value of a utility token? A lot of attention has been paid to the Quantity Theory of Money in recent times, and in particular to its fundamental identity, MV = PQ, typically rearranged as M = PQ/V when valuing cryptoassets.
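As a purely illustrative sketch of how that identity is commonly applied, the following minimal example backs out an implied network value from an assumed level of on-chain activity and an assumed velocity. Every figure is a hypothetical assumption, not an estimate for any real protocol.

```python
# Hypothetical, back-of-the-envelope application of the equation of exchange
# (MV = PQ) to a utility token network. All figures are illustrative only.

PQ = 10_000_000_000  # assumed annual on-chain economic activity, in USD
V = 20               # assumed token velocity: turnovers of the monetary base per year

M = PQ / V           # implied network value (monetary base), in USD
print(f"Implied network value: ${M:,.0f}")  # -> $500,000,000
```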
However, we believe that an identity (something quite different from an equation), and in particular one as controversial in many respects as the QTM identity, is not the ideal cornerstone for addressing the question posed. In our view, Pfeffer's argument misses one fundamental issue: the expected evolution of the utility that a single token yields to its targeted users.
For the sake of argument, let's assume two different blockchain-based protocols. The first, which we will refer to as FlatCoin, represents a system designed to track bananas and mangoes from field to store (an example borrowed from Kai Stinchcombe). The second, which we will name SlopeCoin, works as a blockchain-based system too, but in this case its claimed utility is a right to tap into some form of digital intelligence, not mere static knowledge at a single point in time.
Are the expected long-term values of the two networks described, each with its native token, going to be explained solely by their assumed velocity (V in the QTM identity above)? We do not believe this will be the case.
So what, if any, are the fundamental differences between these two tokens? The chosen names certainly give us a clue. We assume that, all else being equal (what economists like to refer to as ceteris paribus), the value to a user of knowing the provenance of a given tropical fruit lying on a supermarket shelf will not be fundamentally different a year from now than it is today. Put differently, FlatCoins will indeed not accrue substantial value over time.
But what about SlopeCoins? We view the dynamics around this token quite differently. If users can employ SlopeCoins to query a system to, for example, unearth complex data patterns, there is ample room to believe that the responses, produced by a set of continuously trained algorithms, might be of significantly higher quality in the future. In such a system we believe there is a good basis for healthy cryptoeconomics whose value should accrue to token holders, and that is before even considering the value attached to the governance rights incorporated into the new token ecosystem.
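To make the contrast concrete, here is a minimal sketch, under entirely hypothetical assumptions, of how the same M = PQ/V framework produces diverging implied network values once the utility delivered per use evolves differently for the two tokens, even when their velocities are assumed identical.

```python
# Hypothetical comparison of FlatCoin and SlopeCoin under the same M = PQ / V
# framework. Every number is an assumption chosen to illustrate the argument,
# not a forecast for any real protocol.

YEARS = 5
V = 20  # assume identical velocity for both tokens

# FlatCoin: the utility of a provenance lookup is roughly constant, so the
# assumed on-chain economic activity (PQ) stays flat.
flatcoin_pq = [1_000_000_000] * YEARS

# SlopeCoin: continuously trained algorithms make each query more valuable,
# so assumed activity grows (here at an arbitrary 50% per year).
slopecoin_pq = [1_000_000_000 * 1.5 ** year for year in range(YEARS)]

for year in range(YEARS):
    flat_m = flatcoin_pq[year] / V    # implied network value of FlatCoin
    slope_m = slopecoin_pq[year] / V  # implied network value of SlopeCoin
    print(f"Year {year}: FlatCoin M = ${flat_m:,.0f} | SlopeCoin M = ${slope_m:,.0f}")
```

The specific growth rate assumed for SlopeCoin is arbitrary; the point is simply that velocity alone does not determine the outcome once the utility delivered per use is allowed to evolve.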
We do agree with John Pfeffer on many fronts, such as when he predicts that for every successful utility protocol there will be n failed versions. We also share his view that the disruption of traditional networked businesses by decentralized protocol challengers will represent an enormous transfer of utility to users, the economy and society.
Where we disagree is with the notion that all distributed applications and their future value generation should be evaluated using a bog-standard framework that offers little to no insight into the actual utility the new ecosystems aim to deliver to token users. Other authors have focused on ‘work’ (Kyle Samani, of Multicoin Capital) and ‘skin-in-the-game’ (Ryan Selkis, aka TwoBitIdiot) as relevant categories of utility tokens. It is our belief that discerning investors will have an important role to play in differentiating between the types of utility tokens the crypto community will undoubtedly experiment with, as part of the effort to develop more sophisticated approaches to cryptoasset valuation.