How token economies organize people around endeavours

When you think about it, Bitcoin is incredible. It got many people to pour serious money into running a public good we all need – a monetary system.

Sure, there have been monetary systems before. But which of them was valuable, easy to transport, and easy to verify for authenticity, all at once? Not gold. Paper money is supposed to be backed by actual reserves of whatever’s valuable (usually gold), but who trusts the people running the reserve? Plus, they all encourage you to spend (inflation), not save (deflation).

But we’re not here to debate monetary systems. We’re here to generalize the Bitcoin achievement into other subjects.

A Very Quick Overview of the Bitcoin Concept

The best way to make a system uncensorable (countries hate it when you issue your own currency, because it makes them irrelevant – just see Wörgl in Austria) is to spread the system across many computers, just like BitTorrent.

But how do we coordinate many computers and get them to share the same state? Bitcoin says “might makes right”, or “whoever has the most computing power is correct”. Specifically, “whoever can take a bunch (block) of valid transactions and produce a hash (fingerprint) that starts with n zeros is correct”, where n is periodically adjusted to control difficulty.
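
To make that concrete, here is a minimal Python sketch of the puzzle. (Real Bitcoin double-hashes the block header and compares against a numeric target in binary rather than counting hex zeros, so treat this purely as an illustration.)

import hashlib

def mine(block_data: str, n_zeros: int) -> int:
    # Try nonces until the hash starts with n_zeros zeros
    target = "0" * n_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # found a valid answer; broadcast it, claim the reward
        nonce += 1

print(mine("a block of valid transactions", 4))

Each extra zero multiplies the expected work by 16, which is why finding an answer costs real electricity while checking one costs a single hash.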

Since you have to spend a lot of electricity and computational power to find such an answer, you probably aren’t trying to sabotage the Bitcoin system. Congratulations: the algorithm rewards you with some Bitcoin (12.5 BTC at the time of writing).

Basically, if you do honest work (Proof of Work) for the system, the system will reward you with Bitcoin. Now, how can we use this to reward other kinds of work?

Decentralized Computation/AWS Lambda: Ethereum

Proof of Work: Ethereum works just like Bitcoin. The only difference is that while Bitcoin tokens have no use on Bitcoin other than being transferred around, Ether is actually useful: a developer can upload programs, and other people who want to use those programs pay for them to run with Ether.

Or they could just send them around like Bitcoin.

Decentralized CDN/cloud storage: Siacoin, Filecoin

Siacoin: Proof of Storage + Proof of Work: A user pays SIA to have a file stored on the Siacoin network. This SIA is set aside in escrow. To earn it, storage hosts regularly submit a randomly chosen segment of the original file plus a list of hashes of the other segments, to prove that they’re still storing the file. Regular Bitcoin-style Proof of Work ensures that everybody in the Siacoin network agrees that these proofs are valid.
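
Here is a toy sketch of that proof-of-storage idea in Python (Sia’s actual protocol uses Merkle trees and storage contracts; the names and segment size below are invented): the renter records every segment’s hash at upload time, and later anyone holding those hashes can check a challenged segment without storing the file.

import hashlib
import random

SEGMENT_SIZE = 64  # bytes per segment; an arbitrary choice for this sketch

def split(data: bytes) -> list:
    return [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]

def segment_hashes(data: bytes) -> list:
    # Recorded by the renter at upload time
    return [hashlib.sha256(seg).digest() for seg in split(data)]

def prove(stored_data: bytes, challenge_index: int) -> bytes:
    # The host answers a challenge by revealing the requested segment
    return split(stored_data)[challenge_index]

def verify(known_hashes: list, challenge_index: int, proof_segment: bytes) -> bool:
    # Checkable by anyone holding the hashes, without storing the file
    return hashlib.sha256(proof_segment).digest() == known_hashes[challenge_index]

data = b"some file worth paying SIA to store" * 10
hashes = segment_hashes(data)
challenge = random.randrange(len(hashes))
assert verify(hashes, challenge, prove(data, challenge))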

Filecoin: Proof of Storage + Byzantine “Expected Consensus”: Filecoin’s design is very different – nevertheless, zoom out enough and the idea is the same. Storage hosts must stake FIL tokens as collateral in case they behave dishonestly. Every 24 hours, each storage host must submit a proof that it stored the data over that period. Miss the deadline, and the staked FIL is slashed and the host earns no extra FIL for that round. Randomly chosen storage hosts validate each other and produce the chain in Filecoin’s Proof of Work replacement, called Expected Consensus.
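
The stake-and-deadline mechanic can be sketched in a few lines (a toy model – the real protocol’s proofs and parameters are far more involved; SLASH_FRACTION and ROUND_REWARD are invented numbers):

SLASH_FRACTION = 0.1
ROUND_REWARD = 5.0

def settle_round(hosts: dict, proofs_received: set) -> None:
    for name, host in hosts.items():
        if name in proofs_received:
            host["balance"] += ROUND_REWARD  # proved on time: earn the reward
        else:
            host["stake"] *= 1 - SLASH_FRACTION  # missed the deadline: slashed

hosts = {"alice": {"stake": 100.0, "balance": 0.0},
         "bob": {"stake": 100.0, "balance": 0.0}}
settle_round(hosts, proofs_received={"alice"})
assert hosts["bob"]["stake"] == 90.0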

Observe: the system wants to reward you for honest work, and wants to punish you for dishonest work.

It’s just that sometimes, it’s difficult to tell a computer exactly what “honest work” is.

Decentralized Video Transcoding Service: Livepeer

Proof of Stake + proof of correctly done work: since it is difficult and expensive to verify that a video was transcoded properly, Livepeer makes it difficult to become a Transcoder. To earn the right to transcode video on Livepeer, you put a lot of LPT tokens in escrow (you can ask other people to contribute to this stake, i.e. delegate their tokens to you), and if you’re in the top n, the network trusts you enough to send your computer some transcoding work. Livepeer sends random segments of the video you encoded to a third-party service on Ethereum called Truebit.
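
The stake-weighted gatekeeping looks roughly like this (an illustrative sketch, not Livepeer’s actual code; the data layout is invented): only the top n candidates by combined stake get work, and a failed verification burns the delegators’ tokens along with the Transcoder’s own.

def total_stake(candidate: dict) -> float:
    return candidate["own_stake"] + sum(candidate["delegated"].values())

def top_n_transcoders(candidates: dict, n: int) -> list:
    # Only the n candidates with the most combined stake get work
    return sorted(candidates, key=lambda name: total_stake(candidates[name]),
                  reverse=True)[:n]

def slash(candidate: dict, fraction: float) -> None:
    candidate["own_stake"] *= 1 - fraction
    for delegator in candidate["delegated"]:
        # Peer pressure: delegators lose tokens along with the Transcoder
        candidate["delegated"][delegator] *= 1 - fraction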

I don’t quite understand just how Truebit verifies the transcoded video without actually doing the same computation, though.

If Truebit finds that a Transcoder didn’t transcode anything, Livepeer will slash the Transcoder’s staked LPT tokens. To further increase peer pressure on the Transcoder to be honest, delegated tokens are also slashed.

Decentralized Spotify: Audius

What does one need to run a streaming music service? (Let’s read their whitepaper.) Audius defines a content service, which hosts the actual music (encrypted, of course), and a discovery service, which keeps track of what content is out there.

Obviously, if you run the content service, you could pirate the music that artists uploaded to your server. And obviously, if you run the discovery service, you might try to game the system by unfairly promoting certain artists. That’s why you have to stake (put in escrow) at least 200,000 AUDIO for the network to recognize you as a content host, and the same for the discovery service. Artists can stake their own AUDIO tokens on you if they trust you not to pirate their music – or they can run their own services. If you misbehave, your tokens get slashed, just like in Livepeer.

However, a blockchain can’t tell whether music was pirated or an artist was unfairly promoted. Audius therefore leaves these behaviour checks to the community, which can call a vote on them. This pattern is obviously not as reliable as having a computer check things, but we will definitely see more of it as we apply token economics to new use cases that computers cannot completely evaluate.

A new way to organize humans around endeavours

As you can see, having your own economy lets you do what a company does – except that while a company motivates you with a salary in a national currency, these new organizations motivate you with their own token.

And while in companies, humans approve your salary, in these new organizations the computers give out rewards – humans are only needed to verify that honest work was done.

You can’t pay for rent and food with these tokens, but it’s not like they’re useless either.

In fact, a group of people with their own economy can be thought of as its own living unit, its own organism. Isn’t that what a country is?


Design Rationale behind the Commons Stack cadCAD refactor

Introduction

Since March 2020 I’ve been working on the Commons Stack’s cadCAD simulation of the Commons, refactoring and redesigning it into a part-functional, part-object-oriented cadCAD simulation. This post documents the design rationale and thoughts behind the refactor, and establishes the desired design direction for the future.

What does this simulation simulate

The Commons (longer explanation here) is a group of Participants who can create and vote on Proposals, which receive funding and either fail or succeed. Participants hold vesting/nonvesting tokens depending on when they entered the simulation. The bonding curve controls the price, the supply and the funding pool/collateral pool of the Commons. Each Participant has an individual sentiment, which affects whether they decide to stay in the Commons or exit completely, and whether new Participants decide to join.

The original code was written by BlockScience’s resident math genius, Michael Zargham; the relevant code lived in conviction_system_logic3.py, conviction_helpers.py and conviction_cadCAD3.ipynb. I added explanations in my own fork and started refactoring in coodcad. Later this was merged with the Commons Stack game server backend repo in commons-simulator.

Participants and Proposals are represented as nodes in the structure depicted below, and the lines (edges) describe their relationships to each other. The general name for this type of data structure is a Directed Graph, where a Graph is basically a collection of nodes connected by edges, and “directed” just means that each edge describes a relationship “from this node to that node”. Alternately, I just call it the “network”.

In this graph (not a Graph) only the Participants are depicted, with the lines representing their influence on each other.

Just some problems I had with the original code

The code badly needed “models”, like what we have in web server backend programming, and a layer that abstracted away the details of modifying the directed graph.

Or, in other words, we needed a concept of a “thing” instead of a collection of attributes, and the creation of a “thing” should be separated from “how to add that thing to the network” (which wasn’t happening).

Because everything used dictionary attributes, it was impossible for the linter to know anything about the types and thus point out programming errors.

import numpy as np

def gen_new_participant(network, new_participant_holdings):
    # `network` is a networkx.DiGraph; the new node's index is derived
    # by counting existing nodes
    i = len([node for node in network.nodes])

    network.add_node(i)
    network.nodes[i]['type'] = "participant"

    s_rv = np.random.rand()
    network.nodes[i]['sentiment'] = s_rv
    network.nodes[i]['holdings'] = new_participant_holdings

    # get_nodes_by_type is a helper from conviction_helpers.py
    for j in get_nodes_by_type(network, 'proposal'):
        network.add_edge(i, j)

        rv = np.random.rand()
        a_rv = 1 - 4 * (1 - rv) * rv  # polarized distribution
        network.edges[(i, j)]['affinity'] = a_rv
        network.edges[(i, j)]['tokens'] = a_rv * network.nodes[i]['holdings']
        network.edges[(i, j)]['conviction'] = 0
        network.edges[(i, j)]['type'] = 'support'

    return network

As you can see, the attributes that make up a Participant are defined ad hoc. This part of the code may know that a Participant is supposed to have ‘sentiment’ and ‘holdings’, but what about every other function that deals with a Participant? If you decide to add a new attribute to a Participant, you’d have to update the code everywhere else, you wouldn’t know if you did it correctly (there are no tests), and the linter can’t help you. This is a good case for having a class, or at least a struct, to be used as a model/“thing”.
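
For illustration, a stripped-down version of such a model (the real Participant class in entities.py has more attributes and behaviour than this sketch): every function now manipulates named, typed attributes that the linter can check.

from dataclasses import dataclass

@dataclass
class Participant:
    sentiment: float
    holdings: float

    def wants_to_exit(self) -> bool:
        # Illustrative decision rule; the real class considers more factors
        return self.sentiment < 0.1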

The edges have the same problem, but I didn’t find it worthwhile to change that into a class so I just left them as dict attributes.

In the for loop, you can see that it simply sets up the relationships between the newly added Participant and the Proposals. This is merely tangentially related to the original task of adding a Participant to the graph, and if one were to change the Participant-Proposal relationship, one would not remember to update this function. The task of ensuring the new Participant has a relationship to every Proposal is best handled by a separate function, sketched below.
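
Sketched out, the separation might look like this (illustrative function names, not necessarily the exact ones in network_utils.py; get_nodes_by_type is the helper seen above):

def add_participant(network, participant) -> int:
    # Wiring a Participant into the graph is the data layer's job;
    # creating the Participant happens elsewhere
    i = len(network.nodes)
    network.add_node(i, item=participant)
    add_support_edges(network, i)
    return i

def add_support_edges(network, participant_index: int) -> None:
    # The Participant-Proposal relationship setup lives in exactly one place
    for j in get_nodes_by_type(network, "proposal"):
        network.add_edge(participant_index, j, type="support", conviction=0)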

The mechanisms that affect sentiment were spread between several policy and state update functions – it was impossible to keep track of when this variable was touched.

sentiment (the same word was used for overall sentiment as well as individual Participants’ sentiment) is mentioned in gen_new_participant, driving_process, complete_proposal, update_sentiment_on_completion, update_proposals, update_sentiment_on_release, participants_decisions, initialze_network. Notice how the function names imply that sentiment simulation is highly integrated into the simulation and not something you can just “turn on/off”. Furthermore you can’t tell by the name of the function whether it uses sentiment or not. In the long term this is unmaintainable.

When I tried to add an exit tribute (or anything else) to the code, it was not easy to see where it should go, or whether adding it would break something else. This was a natural consequence of all these little problems building up.

To test if the code worked, I had to run the entire simulation: there were no unit tests, and no function could run independently of cadCAD.

This also made it impossible to know whether everything was working as intended, or whether some happy coincidence (Participants’ actions are random) merely avoided crashing the simulation this time – or worse, let it continue running while doing something completely unintended.

Even if the simulation ran, I wasn’t confident it was doing what was intended – yet another natural consequence. In fact, I found some things (which I’ve long since forgotten) that weren’t actually used, or weren’t working as expected.

Why is the new code designed the way it is?

As you’ve seen already, Participant and Proposal needed to become “objects/things” as opposed to random collections of attributes (which the code might not manipulate consistently). This helps pylint check that you’re manipulating the attributes correctly. Let the computer catch as many programming errors as possible with the linter, so that I don’t have to run the simulation to see whether things work.

When you start thinking about Participants and Proposals as things, you can start to imagine how, instead of a function that says “randomly do this action 30% of the time”, you could simulate Participants as individuals: “randomly do this action x% of the time based on this Participant’s personality, which can be profit-seeking/altruistic/balanced”.

The ultimate goal is to put a neural network in the Participant class, tell it to optimize for profit/self satisfaction, and see what behaviour emerges.

TokenBatch badly needed to be a class in order to implement token vesting. A TokenBatch keeps track of vested and unvested tokens, and how many of the vested tokens were spent once unlocked.

Implementing TokenBatch as a class enabled lots of convenience functions that would simplify code using it.
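
A stripped-down sketch of the idea (the real TokenBatch tracks vesting schedules in more detail; the method names here are illustrative):

class TokenBatch:
    def __init__(self, vesting: float, nonvesting: float):
        self.vesting = vesting
        self.nonvesting = nonvesting
        self.spent = 0.0

    def unlocked(self, fraction_vested: float) -> float:
        # Nonvesting tokens are always unlocked; vesting tokens unlock gradually
        return self.nonvesting + self.vesting * fraction_vested

    def spendable(self, fraction_vested: float) -> float:
        return self.unlocked(fraction_vested) - self.spent

    def spend(self, amount: float, fraction_vested: float) -> None:
        if amount > self.spendable(fraction_vested):
            raise ValueError("not enough unlocked tokens")
        self.spent += amount

Calling code just asks spendable() instead of re-implementing vesting arithmetic everywhere.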

AugmentedBondingCurve and Commons followed the same logic: once I make changes to something, I really don’t want to think about the details of what’s going on under the hood – I just want it to work. Plus, having them as “things” fits how people naturally think about the concepts.

It was quite important that the structure of the code be self-explanatory and not require prior understanding. Having the Commons as a class – a thing – facilitated this, even though it meant running some extra state update functions to copy token_supply, funding_pool and collateral_pool out of it so that cadCAD could access them easily.

At the same time, the functional principles (it should be easy to see that a function cannot cause unintended side effects; functions are things that modify data, which remains unchanged otherwise) that cadCAD was based on are still valuable.

Classes should be used to abstract complexity away and fit the code to how a normal human might think of the concepts. For everything else, stick to the functional paradigm.

Walking Through The Codebase

simulation.py is where the cadCAD simulation is set up. simrunner.py and simrunner.ipynb are simply frontends to run it from the CLI or a Jupyter notebook.

policies.py: here live all the policies. They are just simple functions, but there are so many of them that they’re grouped under classes as @staticmethods. When writing or changing them, do create a corresponding unit test in policies_test.py.

The idea is if you decide one day that there should be a different way of deciding whether a new Participant joins the Commons, you create a p_desc_of_your_strategy_here() policy under GenerateNewParticipant. Please, don’t just edit p_randomly().
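
The convention looks like this (p_every_other_round is an invented example to show the shape, not a real policy in the codebase):

class GenerateNewParticipant:
    @staticmethod
    def p_randomly(params, step, sL, s):
        ...  # the existing strategy stays untouched

    @staticmethod
    def p_every_other_round(params, step, sL, s):
        # Hypothetical alternative strategy: someone joins every 2nd timestep
        return {"new_participant": s["timestep"] % 2 == 0}

Alternative strategies live side by side as static methods, and simulation.py picks which one to wire in.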

entities.py: Proposal and Participant live here.

Normally, in cadCAD system-dynamics-style simulations, the policy functions decide what happens in the simulation; the code here is more of an agent-based-modeling simulation, so the policies also ask the Proposals and Participants what they will do.

System dynamics style: There is a 30% chance of a new Proposal being created every round. Assign it to a random Participant.

Agent based style: Each Participant is asked if they want to create a new Proposal – they have a 30% chance of saying yes.
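
In code, the contrast looks roughly like this (illustrative pseudo-policies, not the exact functions in policies.py; get_participants and create_proposal are stand-ins):

import random

# System-dynamics style: one global coin flip, then a random assignee
def p_system_dynamics(params, step, sL, s):
    participants = get_participants(s["network"])  # hypothetical helper
    if random.random() < 0.30:
        return {"proposal_creator": random.choice(participants)}
    return {"proposal_creator": None}

# Agent-based style: every Participant decides for itself
def p_agent_based(params, step, sL, s):
    participants = get_participants(s["network"])
    creators = [p for p in participants if p.create_proposal()]
    return {"proposal_creators": creators}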

Since Proposal and Participant have tunable behaviour, they need configuration parameters, but they cannot (or maybe should not – the concept of a Proposal/Participant is unrelated to cadCAD in general) know about the existence of cadCAD and its params variable, so their tunable constants are defined in a different place, config.py.

I mentioned before that one reason to have classes is to allow the linter to introspect operations and warn us if we’re doing something stupid. In practice this didn’t happen so much, because we still have to use network.nodes[0]["item"], which could be anything. It would be useful to replace every instance of network.nodes[0]["item"] with a data-layer function get_participant(idx: int) -> Participant, which makes it clear to the linter that we are about to operate on a Participant.
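
Such an accessor might look like this (a sketch of the proposed function, not code that exists yet): the return annotation informs the linter, and the isinstance check guarantees at runtime that we aren’t holding something else.

def get_participant(network, idx: int) -> Participant:
    item = network.nodes[idx]["item"]
    if not isinstance(item, Participant):
        raise TypeError(f"node {idx} does not hold a Participant")
    return item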

network_utils.py: this is the data layer. It translates business logic operations to networkx.DiGraph operations.

Simply put, when we want to add a participant to the network, that’s all we want to think about – we don’t want to have to remember “this is how to use networkx.DiGraph to accomplish this; oh, and remember to set up the influence and support edges for this new Participant”.

As time goes by, this has become a huge collection of assorted functions that is a pain to import everywhere – I’ve been considering rolling them into a class with methods, but I need to see how much that disturbs the cadCAD functional paradigm. For now it’s not such a big problem. As mentioned before, it could use a simple get_participant(idx: int) -> Participant type of function, which not only informs the linter but also does a runtime type check, guaranteeing we’re not actually operating on a Proposal.

Quirk 1: object oriented weirdness in simulation.py

initial_conditions = {
        "network": network,
        "commons": commons,
        "funding_pool": commons._funding_pool,
        "collateral_pool": commons._collateral_pool,
        "token_supply": commons._token_supply,
        "policy_output": None,
        "sentiment": 0.5
    }

This is the initial state of the simulation, available later in the state update functions as s. As you can see, three of these state variables are actually copied out of commons, and if we proceeded as normal, as in most cadCAD simulations, they would go stale. That is why, whenever commons changes, we need to copy them out of commons into s with these state update functions – notice how update_avg_sentiment() is similar.

def update_collateral_pool(params, step, sL, s, _input):
    # Copy the current value out of the Commons object so the cadCAD
    # state variable doesn't go stale
    commons = s["commons"]
    s["collateral_pool"] = commons._collateral_pool
    return "collateral_pool", commons._collateral_pool

def update_token_supply(params, step, sL, s, _input):
    commons = s["commons"]
    s["token_supply"] = commons._token_supply
    return "token_supply", commons._token_supply

def update_funding_pool(params, step, sL, s, _input):
    commons = s["commons"]
    s["funding_pool"] = commons._funding_pool
    return "funding_pool", commons._funding_pool

def update_avg_sentiment(params, step, sL, s, _input):
    network = s["network"]
    sentiment = calc_avg_sentiment(network)  # renamed so it doesn't shadow the state dict s
    return "sentiment", sentiment

Quirk 2: two state update functions depend on the output of one policy but change the same variable

    {
        "policies": {
            "which_proposals_should_be_funded": ProposalFunding.p_compare_conviction_and_threshold
        },
        "variables": {
            "network": ProposalFunding.su_make_proposal_active,
            "commons": ProposalFunding.su_deduct_funds_from_funding_pool,
            "policy_output": save_policy_output,
        }
    },
    {
        "policies": {},
        "variables": {
            "network": ParticipantExits.su_update_sentiment_when_proposal_becomes_active,
        }
    },

The decisions made in ProposalFunding.p_compare_conviction_and_threshold are needed by both ProposalFunding.su_make_proposal_active and ParticipantExits.su_update_sentiment_when_proposal_becomes_active, which update the same state variable, network. This makes things awkward: I could combine them into one function, but they really are separate mechanisms, and combining them would make the code untidy.

The solution is to save the output of the policy into the state variables dict s instead, so that the result of ProposalFunding.p_compare_conviction_and_threshold is still available outside of that state update block.

def save_policy_output(params, step, sL, s, _input):
    # Stash the policy's output in the state so later substeps can read it
    return "policy_output", _input

Summary and Intentions

All code documentation runs the risk of being quickly outdated – but the reason I wrote this post was to keep the original principles behind the code clear even as it changes in the future. Plus, a snapshot in time that explains context is always useful.

What is Commons Stack and the Token Engineering Commons?

One of the first organizations I heard about in the token engineering space was the Commons Stack. It wasn’t really a company per se; it was more of an organization – one that, like most blockchain layer-1 projects, was spread throughout the globe with no center.

Remember The DAO, whose 2016 hack led to the Ethereum fork? Some of the Commons Stack members were involved there too.

Anyway, the Commons Stack is a community designing a type of decentralized autonomous organization called a Commons. A Commons organization uses new token engineering concepts to make donating to public goods more than just a donation.

Wait but why

Let’s take a short break from all this new-age blockchain stuff and come back to the real world. Society – or should I say, the current economy – doesn’t really reward certain things for the value they provide.

The environment, for example, is constantly being abused. Most plastic isn’t really recycled, but it’s marketed as recyclable so consumers will continue buying it. Think your electronic devices are being recycled? Think again – they get shipped by the container into other countries’ landfills. Nobody has time to separate the components into their reusable raw materials, and even if they did, how could they compete with the rate at which finished products are produced? Oceans are being polluted, forests cut down, you name it. And it’s just cheaper to keep doing so – the economic incentive is to continue abusing the environment.

Open source software is another thing that is constantly undervalued and abused, unless a successful capitalist company chooses to sponsor its development (because it relies heavily on it). Amazon is well known for hosting open source software, profiting heavily from it, and giving nothing back to the project. And while I was working at a fintech startup, Theo de Raadt (of OpenSSH and OpenBSD) came calling. Apparently he was looking to save fees on donations.

Imagine, the author of the world’s most secure OS and software that everybody uses to control servers, in a financial state where he has to worry about transaction fees.

I could go on, but this dynamic arises from two things:

  1. Making money generally involves controlling a scarce resource, making sure nobody else has access to it, and thereby charging money for it – hence patents, research silos, walled gardens, closed behaviour and the abuse of free resources. After all, if there’s something free that you can do anything with, you’d just take it, right?
  2. There are many types of intangible value – reputation, power, trust, stability, the happiness of a community, the health of an environment/ecosystem, animal welfare – and you can’t put a number on any of them. We can only describe one type of value – monetary – and this disfigures our humanity and makes us do terrible things.

The Idea Behind a Commons

Funding: Augmented Bonding Curve

Much like Bitcoin, having your own token can serve as funding and denote membership. If you do work for the Bitcoin network, you get rewarded in BTC; if you do work for a Commons, you get paid in its token, let’s call it CTOKEN.

Since this is too important to trust humans with, we let a program handle the problem of token supply, just like Bitcoin does. It sits on a blockchain and is thus tamper-resistant. We call it a bonding curve.

What a Bonding Curve Does

  1. When you put in USD, the bonding curve creates new CTOKEN, thus increasing the total supply.
  2. When you want to take out USD, the bonding curve destroys (“burns”) that CTOKEN, thus decreasing the total supply.

So it converts one form of value (USD) into another (CTOKEN), except that it also sets the price of CTOKEN based on a curve. Besides determining the supply and converting value, it also solves the matchmaking problem: you can always buy when no one is willing to sell, and vice versa.

USD coming into the Bonding Curve is split into two pools:

  • Funding pool: pays the people running the Commons
  • Collateral/reserve pool: pays out USD from this pool to anyone who sells their tokens back

As you can see, the reserve/collateral pool backs the value of the token. If you sell your CTOKEN to the bonding curve, it will destroy (“burn”) the CTOKEN and you’ll get USD back. Specifically, you’ll get more USD back than the next person who sells after you – it is a bonding curve, after all – and the collateral pool is simply a fraction of all the money that ever got invested in the Commons.

x-axis: DAI/USD deposited; y-axis: total token supply.
Mess with this too much and it becomes a Ponzi scheme!

The combination of the bonding curve and these two pools is called an Augmented Bonding Curve.
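
To make the mechanics concrete, here is a toy Augmented Bonding Curve in Python (the invariant supply² = collateral pool and the 20% funding split are invented for illustration; the real curve shape and split are design parameters):

import math

FUNDING_FRACTION = 0.2  # invented for illustration

class ToyABC:
    # Toy invariant: supply**2 == collateral_pool, so the token price
    # (d collateral / d supply = 2 * supply) rises as supply grows
    def __init__(self):
        self.supply = 0.0
        self.funding_pool = 0.0
        self.collateral_pool = 0.0

    def mint(self, usd_in: float) -> float:
        # Part of every deposit funds the Commons; the rest backs the token
        self.funding_pool += usd_in * FUNDING_FRACTION
        self.collateral_pool += usd_in * (1 - FUNDING_FRACTION)
        new_supply = math.sqrt(self.collateral_pool)
        tokens_out = new_supply - self.supply
        self.supply = new_supply
        return tokens_out

    def burn(self, tokens_in: float) -> float:
        # Sellers are paid from the collateral pool only; the price falls
        # as supply shrinks, so earlier sellers get more per token
        new_supply = self.supply - tokens_in
        usd_out = self.collateral_pool - new_supply ** 2
        self.supply = new_supply
        self.collateral_pool = new_supply ** 2
        return usd_out

Minting the same USD amount twice yields fewer tokens the second time (the price rises with supply), and burning pays earlier sellers more than later ones – exactly the behaviour described above.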

Decision making

Those who have tokens can vote, and those who have more tokens have more voting power. But what if your voting power on a particular proposal also increased over time, discouraging people from switching at the very last minute? This is called Conviction Voting.
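
The accumulation rule behind Conviction Voting can be written in one line (alpha here is an illustrative decay constant, not the Commons’ actual parameter):

def update_conviction(prev_conviction: float, staked_tokens: float,
                      alpha: float = 0.9) -> float:
    # Old conviction decays by alpha each round; the current stake is added
    return alpha * prev_conviction + staked_tokens

With a constant stake, conviction grows toward the ceiling stake / (1 − alpha), so tokens moved at the last minute carry only one round’s worth of conviction – exactly the property that discourages last-minute switching.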

In the Commons, people vote (using CTOKEN) on which projects should receive how much funding (also denominated in CTOKEN). And yes, you can vote for your own proposal, to say that you should receive funding!

Test Driving the Concept: The TE Commons

1 DAI is equal to 1 USD – this peg is maintained by MakerDAO.

Of course, there is a need to test whether such an organization would work, and how people might game the system! That’s the Token Engineering Commons, a Commons that funds projects in the token engineering space.

Here’s how it should play out

  1. Donors put money into the TE Commons, and get TEC from the bonding curve.
  2. Donors vote on projects using TEC, and projects receive that TEC as funding. To actually pay themselves, they sell the TEC for USD – but not all of it! Why? So that they can vote for themselves in the future, to steer funds their way!
  3. End result: the price of the token gets pushed up (remember the bonding curve), and for project owners, it is a fine balance between retaining enough voting power and getting enough funds.

But this is just the worst case scenario.

What if an economy formed around the TEC token, giving it extra value beyond voting rights?

Let’s go back in time. In the beginning there was Bitcoin. Developers poured their time into it, but nobody was paying them. Instead, if you ran a Bitcoin node and mined (participated), you had a chance of being paid in BTC. That was all you got! Within that niche group of people, Bitcoin had some value – worth a pizza, maybe. Outside of that circle, Bitcoin was worthless. Today, projects still pay with their own coin. If you work for Decred, for example, you will only get paid in DCR, funded from the 10% of each block reward that goes to its treasury.

As outsiders slowly started to perceive Bitcoin as having value, projects like Ethereum launched ICOs. Now the Ethereum Foundation has two kinds of reserves, BTC and ETH, and it can sell both for fiat to pay its developers. ETH used to be nearly worthless – just 5 USD. Today the Ethereum Foundation can send ETH around to fund projects because it’s worth 400 USD.

Today, people in economically unstable countries already perceive Bitcoin as digital gold – they transact in it every day. Wall Street in particular is starting to see Bitcoin as a great store of value that retains its purchasing power even as more USD is printed.

One day, perhaps Ethereum will be perceived as digital oil – something you need to buy to power your interactions with web3 apps. Sure, Web 2.0 was free – but not really: you were paying for it with your privacy, and those companies have no incentive to care about you once they get big enough. Just look at Google, Apple, Amazon, Facebook.

Much of this, of course, comes down to perception – selling yourself, being able to convince people to join your project. Decred or Litecoin or any other coin with a fixed supply and a healthy consensus mechanism could also be a store of value, and you can run DeFi, ICOs or STOs or whatever they’re called today on Tezos, EOS, aeternity, NEO, NEM and the rest as well. But people flock to Bitcoin because it was the first and has the largest community, investment and hashpower behind it. Same for Ethereum.

What could people perceive a Commons Token as? I have no idea – but as you can see, the future is going to be a strange one.