A quick reminder on why we can’t depend on the block reward until 2140

I used to argue that we shouldn’t worry about a lack of fees because blocks were subsidised ‘until 2140’. I thought it would be a problem ‘for the future to worry about’. I figured the world would have changed so much by then that there was no point limiting ourselves now, and that they’d be able to ‘figure it out’.

Then I realised that I was wrong – 2140 isn’t the year to worry about.

We are now closer to 2024 than we are to 2009, when Bitcoin began. In 2024 the block reward will fall to just 3.125BTC.

In 2028 it will be 1.5625BTC, and in 2032 we’ll be at less than 0.8BTC. This reward will continue falling for another 108 years until block subsidies end in 2140 – the year I naively perceived as the time the lack of block reward would become ‘a problem’.

A simple look at the numbers tells us this is something we need to consider a lot sooner.

In February I analysed how much fees had increased. On 6th March 2015 the 10-day average fee for 1MB of transactions was $67.

On 6th March 2017, that number had increased to $1912.

This is supply and demand in action, literally how free markets work. When there is an abundance of something it becomes extremely cheap. When in short supply it becomes expensive.

People argue that if we made the blocks bigger so they weren’t full, the miners would be able to include more transactions so they could receive more reward.

Let’s take an objective look at the facts. The numbers show us that when blocks aren’t full miners have, even in recent years with high Bitcoin value, accepted less than $67 per MB.

Only when the blocks have become full has the pressure on fee prices skyrocketed to $1912 per MB.

This means that at $67 per MB to earn the same fees miners would right now need to be mining 29MB blocks.
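A quick sanity check of that figure, using only the two averages quoted above:

```python
# Rough check of the block size needed to match current fee revenue, using the
# article's two averages: $67/MB (blocks not full) and $1912/MB (full 1MB blocks).
cheap_fee_per_mb = 67    # USD per MB when blocks were not full
full_fee_per_mb = 1912   # USD per MB with full blocks

equivalent_size = full_fee_per_mb / cheap_fee_per_mb
print(f"{equivalent_size:.0f}MB blocks needed at $67/MB to match $1912/MB")  # 29
```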

Even if this didn’t have centralisation issues, the transaction volume just isn’t there yet to achieve it.

Another argument can be made that the current 12.5BTC reward means this is less of an issue right now – and that is a very valid point. We do have room to increase the block capacity without a short term fear over mining incentive, and we should if the miners will let us.

The concern is that in 7 years’ time we’re going to be mining blocks with just 3.125BTC of reward. The simple and indisputable fact is that the less reward there is for mining a block, the less secure the network is, and this is a problem we’re going to face a lot sooner than 2140.

If you liked this article please consider donating now to 1H2zNWjxkaVeeE3yX6uVqng5Qoi6gGvYTE

How user activated soft forks (UASF) work and why they might solve the blocksize debate too

For a soft fork to activate, miners are currently required to signal their support. For SegWit that requires 95% of blocks over a 2016 block window signalling in favour.

Currently the Bitcoin network has around 6,000 nodes. ~5,000 of those are running Bitcoin Core, and ~3,000 nodes are running Bitcoin Core versions that fully support SegWit (>=0.13.1).

Those 3,000 nodes, despite being SegWit ready, can currently do nothing more than wait for miners to activate it. This just isn’t happening, and they are being denied all the benefits that come along with it.

UASF would mean there is no signalling by miners for activation. Instead a new version of the Bitcoin software is released and installed on nodes which has an agreed activation point in the future.

At the time of writing we’re at block #455855. If we wanted to activate SegWit in 90 days time (approximately 12,960 blocks), the new version of the software could say “from block #468,815 only accept blocks that support SegWit”.
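The arithmetic behind that activation height, assuming the usual average of ~144 blocks per day at the 10-minute target:

```python
# Back-of-envelope UASF activation height, assuming ~144 blocks per day.
current_height = 455855
blocks_per_day = 144            # 10-minute target: 6 * 24
activation_height = current_height + 90 * blocks_per_day
print(activation_height)        # 468815
```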

Let’s imagine that by block #468815, 60% of nodes have upgraded to this new version of the software. In addition a number of large exchanges and Bitcoin businesses have also publicly announced their support and upgraded.

This gives miners a conundrum. If they don’t create SegWit compatible blocks from that point all those nodes and businesses will completely ignore the blocks they create. This would prove very expensive.

SegWit blocks will be backwards compatible, as nodes that haven’t upgraded their software will still recognise them as valid.

This means that the odds of SegWit activation succeeding are massively stacked in its favour, as the miners are the only ones who incur an economic cost if they make an incorrect decision about activating. By far the safest option is to mine blocks compatible with SegWit, as these will be accepted by 100% of nodes, rather than by only the 40% of non-upgraded nodes and an economic minority.

So that’s the basics. The nodes tell the miners that if they don’t play ball they will be ignored. It would take an incredible amount of nerve (or stupidity) for a miner to refuse to mine SegWit compatible blocks when such a huge part of the network will suddenly start invalidating their work.

If just 51% of miners update then SegWit would activate without any issues.

I don’t believe SegWit would struggle to gain miner consensus, but in the unlikely event that a majority of miner hashrate refused to switch, this would create a fork in the network, with non-upgraded nodes seeing one version of the blockchain and upgraded nodes seeing another.

I believe a more likely outcome is that a rival implementation of the bitcoin software would use the activation point as a catalyst and attempt to rally support for and introduce a change of their own.

This wouldn’t be a property of UASF itself, but an activation point creates a natural window of opportunity to force the community to make a choice between rival visions.

Instead of a soft fork, a rival implementation could activate a hard fork block size increase. In addition they would likely need to change how quickly the difficulty is adjusted and hopefully add replay protection.

This would lead to all nodes and miners having a clear choice between two directions for Bitcoin. The likely reason a rival implementation wouldn’t attempt this is because they feared humiliation, but a set activation point would present a perfect opportunity to also force a referendum on the block size debate to those with the conviction to back their belief.

There would likely be just one winner, with an economically insignificant altcoin created on the other side. The block size debate would finally be settled once and for all.

If you liked this article please consider donating now to 1H2zNWjxkaVeeE3yX6uVqng5Qoi6gGvYTE

SegWit facts – Not ‘anyone can spend’ so stop saying they can

Anybody who argues against SegWit because it uses “anyone can spend” transactions is either being disingenuous or does not understand what they are talking about.

“Anyone can spend” is just a way of describing a transaction output which has no conditions attached to how it can be spent. This has been part of the Bitcoin protocol since… forever.

To prevent old nodes from being excluded from the network, as in the case of a hard fork, SegWit uses a clever trick that enables old nodes to see SegWit transactions by making them appear as ‘anyone-can-spend’.

The fear of the misinformed is basically that the new transactions created by SegWit will be insecure because “it says ‘anyone-can-spend’, duh.”

The problem is, this is just plain false. SegWit is a soft fork, which means new, stricter rules are introduced. Although SegWit transactions superficially appear to old nodes as ‘anyone-can-spend’, the new rules dictate that these transactions cannot be spent without a valid signature.
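A minimal sketch of why this looks safe-but-unconditional to old software. It models, in highly simplified form (this is NOT real consensus code), how a pre-SegWit node evaluates a native SegWit (P2WPKH) output script: the script is just two pushes, the top of the stack ends up non-empty, and the output therefore validates without any signature check.

```python
# Simplified model of pre-SegWit script evaluation (NOT real consensus code).
# A native SegWit (P2WPKH) output script is: OP_0 <20-byte-pubkey-hash>.
# An old node just executes the pushes and checks the top stack item is truthy,
# so the output appears spendable with an empty scriptSig: "anyone can spend".
def old_node_evaluates(script_ops):
    stack = []
    for op in script_ops:
        if op == "OP_0":
            stack.append(b"")      # OP_0 pushes an empty byte string
        elif isinstance(op, bytes):
            stack.append(op)       # a plain data push
        else:
            raise ValueError("opcode not modelled in this sketch")
    # the script "passes" if the final top-of-stack item is non-empty
    return len(stack) > 0 and len(stack[-1]) > 0

p2wpkh_script = ["OP_0", bytes(20)]        # OP_0 followed by a 20-byte hash
print(old_node_evaluates(p2wpkh_script))   # True: looks spendable to old nodes
```

Upgraded nodes, by contrast, additionally require the witness to contain a valid signature matching that 20-byte hash, which is exactly the “new rules” the soft fork adds.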

The ‘worst case scenario’ is that miners could mine a block that breaks these rules and old nodes would still recognise it as valid. It is foolish to consider this a genuine risk.

The miner would be wasting all their resources mining an invalid block, and for what, to make a few remaining old nodes think some SegWit transactions had a different owner? What would they accomplish?

The answer is nothing at all. Their block would be orphaned almost immediately. Nobody would be affected except the miner.

Anybody who persists in making this argument is lying or dim. By exposing themselves as either of these things they completely undermine their cause and can be safely dismissed.

No politics, pure numbers: if the blockchain grew at its current rate and hard drive prices fell at their current rate – what would storage cost in 20 years?

I decided to look objectively at one of the costs of scaling on chain. This is pure numbers, no politics.

Storage is currently a very small part of the cost of running a full node. The block chain is around 100GB. With storage currently available from as little as $0.019 per GB, that’s just $1.90.

This is just a look at storage alone, it does not consider bandwidth, CPU or electricity requirements of operating a full node.

According to http://www.statisticbrain.com/average-cost-of-hard-drive-storage/ the 6 years from 2010-2016 saw the cost per GB fall from $0.09 to $0.019 – a fall of 79%.

This equates to an average fall in price of 23% each year.

From 1995 to 2000 the cost per GB fell by 99% – an average of 60% each year.

This represents the annual rate of price decline itself slowing by around 5% each year on average.

I will perform two calculations:

  1. Assuming the price continues to fall by 23% each year
  2. Reducing the rate at which the prices fall by 5% each year

I will also estimate blockchain growth based on actual current levels of growth.

For example, the blockchain sizes 2013: 13,490MB, 2014: 27,840MB, 2015: 53,700MB equate to annual increases of 14,350MB in 2014 and 25,860MB in 2015 – so 2015’s increase was 80% larger than 2014’s. From 2012 to 2016 the annual increase in blockchain size grew by an average of 88.6% each year (the figure used in the calculations).
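The projection can be sketched in a few lines. This reproduces the method approximately – compounding the annual increase by 88.6% and the price fall at 23%, with the 5% slowdown scenario as a one-line change – so small rounding differences from the table below are expected:

```python
# Sketch of the projection, using the article's figures: each year's increase in
# chain size is ~88.6% larger than the last, while cost per GB falls 23% per
# year (scenario 1). Uncomment the marked line for scenario 2, where the annual
# price fall itself shrinks by 5% each year. Approximate, not exact table values.
size_mb = 96345.0        # blockchain size at end of 2016
increase_mb = 42645.0    # MB added during 2016
cost_per_gb = 0.019      # USD per GB at end of 2016
decline = 0.23           # annual fall in storage price

for year in range(2017, 2038):
    increase_mb *= 1.886             # each year's increase is ~88.6% larger
    size_mb += increase_mb
    # decline *= 0.95                # scenario 2: rate of decline slows 5%/yr
    cost_per_gb *= 1 - decline
    storage_cost = size_mb / 1024 * cost_per_gb

print(f"2037 size: {size_mb / 1e6:,.0f} TB, storage cost: ${storage_cost:,.2f}")
```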

This gives the following results for the next 20 years:

Year ending | Blockchain size (MB) | Annual increase (MB) | Avg block size (MB) | Cost per GB (USD) | Storage cost | Cost per GB (USD, 5% slowdown) | Storage cost (5% slowdown)
--- | --- | --- | --- | --- | --- | --- | ---
2012 | 4270 | 3637 | 0.1 | 0.06 | $0.25 | 0.06 | $0.25
2013 | 13490 | 9220 | 0.2 | 0.05 | $0.66 | 0.05 | $0.66
2014 | 27840 | 14350 | 0.3 | 0.03 | $0.82 | 0.03 | $0.82
2015 | 53700 | 25860 | 0.5 | 0.022 | $1.15 | 0.022 | $1.15
2016 | 96345 | 42645 | 0.8 | 0.019 | $1.79 | 0.019 | $1.79
2017 | 176759 | 80414 | 1.5 | 0.01463 | $2.53 | 0.0148485 | $2.56
2018 | 328391 | 151633 | 2.9 | 0.0112651 | $3.61 | 0.011766323 | $3.77
2019 | 614318 | 285927 | 5.4 | 0.008674127 | $5.20 | 0.009446048 | $5.67
2020 | 1153477 | 539159 | 10.3 | 0.006679078 | $7.52 | 0.007676459 | $8.65
2021 | 2170144 | 1016667 | 19.3 | 0.00514289 | $10.90 | 0.006310283 | $13.37
2022 | 4087226 | 1917083 | 36.5 | 0.003960025 | $15.81 | 0.005243396 | $20.93
2023 | 7702182 | 3614956 | 68.8 | 0.003049219 | $22.94 | 0.004401214 | $33.10
2024 | 14518739 | 6816557 | 129.7 | 0.002347899 | $33.29 | 0.003729648 | $52.88
2025 | 27372412 | 12853672 | 244.6 | 0.001807882 | $48.33 | 0.003189008 | $85.24
2026 | 51609997 | 24237585 | 461.1 | 0.001392069 | $70.16 | 0.002749851 | $138.59
2027 | 97313708 | 45703711 | 869.6 | 0.001071893 | $101.87 | 0.002390104 | $227.14
2028 | 183495118 | 86181409 | 1639.7 | 0.000825358 | $147.90 | 0.002093056 | $375.06
2029 | 346003480 | 162508363 | 3091.9 | 0.000635526 | $214.74 | 0.001845931 | $623.73
2030 | 652438106 | 306434626 | 5830.2 | 0.000489355 | $311.79 | 0.001638882 | $1,044.21
2031 | 1230267939 | 577829833 | 10993.7 | 0.000376803 | $452.70 | 0.001464248 | $1,759.20
2032 | 2319855365 | 1089587426 | 20730.4 | 0.000290138 | $657.30 | 0.001316023 | $2,981.43
2033 | 4374440803 | 2054585438 | 39090.3 | 0.000223407 | $954.37 | 0.001189464 | $5,081.29
2034 | 8248679087 | 3874238284 | 73710.8 | 0.000172023 | $1,385.71 | 0.001080796 | $8,706.19
2035 | 15554153957 | 7305474870 | 138993.1 | 0.000132458 | $2,011.98 | 0.000986992 | $14,992.02
2036 | 29329755549 | 13775601592 | 262092.9 | 0.000101992 | $2,921.30 | 0.000905613 | $25,938.87
2037 | 55305780734 | 25976025185 | 494216.6 | 0.0000785342 | $4,241.60 | 0.000834677 | $45,080.52

Assuming the optimistic scenario that Bitcoin continues for the next 20 years to achieve the levels of growth it has over the last 4, the cost of storage hardware alone to operate a full node would be $4,241.60 by 2037 – assuming the price of storage continues to fall rapidly.

If the rate at which storage gets cheaper continues to fall by around 5% each year, the cost for storage hardware alone would be $45,080 by 2037.

That’s just the numbers based on past performance. Past performance is no guarantee of future results.

Like this research? Please consider donating to 1H2zNWjxkaVeeE3yX6uVqng5Qoi6gGvYTE and I’ll try and find the time to do more.

I analysed 24h worth of transaction fee data and this is what I discovered

The idea of allowing miners to increase the block size with demand is an interesting one, but it creates a problem: miners can game such a system by filling the blockchain with transactions at zero cost.

Asking miners to pass on a portion of the transaction fees to the next block would solve this problem as the miner would then incur a cost. Unfortunately this introduces a new problem: miners are incentivised to accept fees away from the blockchain (out of band).

What if there were a way that could not be manipulated without an economic cost to the miner for doing so?

I decided to do a little analysis of the last 24 hours worth of transaction fees:

Satoshis/byte group | Transactions | MB used | BTC fee | Cumulative MB % | Cumulative BTC %
--- | --- | --- | --- | --- | ---
0 | 81 | 0 | 0 | 0.0% | 0.0%
1-10 | 686 | 0.3 | 0.02 | 0.2% | 0.0%
11-20 | 2588 | 1.2 | 0.18 | 1.0% | 0.2%
21-30 | 3755 | 1.7 | 0.43 | 2.2% | 0.5%
31-40 | 2677 | 1.2 | 0.42 | 3.0% | 0.8%
41-50 | 13654 | 6.3 | 2.84 | 7.2% | 3.1%
51-60 | 15052 | 7 | 3.85 | 11.9% | 6.2%
61-70 | 108748 | 50.2 | 32.63 | 45.8% | 32.4%
71-80 | 27432 | 12.7 | 9.53 | 54.3% | 40.0%
81-90 | 32969 | 15.2 | 12.92 | 64.6% | 50.4%
91-100 | 31188 | 14.4 | 13.68 | 74.3% | 61.3%
101-110 | 13704 | 6.3 | 6.62 | 78.6% | 66.7%
111-120 | 38051 | 17.6 | 20.24 | 90.4% | 82.9%
121-130 | 4510 | 2.1 | 2.63 | 91.8% | 85.0%
131-140 | 6294 | 2.9 | 3.92 | 93.8% | 88.1%
141-150 | 2793 | 1.3 | 1.89 | 94.7% | 89.7%
151-160 | 3021 | 1.4 | 2.17 | 95.6% | 91.4%
161+ | 14015 | 6.5 | 10.73 | 100.0% | 100.0%
Totals | 321218 | 148.3 | 124.7 | |

I collected the data on the evening of 10th January 2017. The transaction data came from https://bitcoinfees.21.co/.

I used an average transaction size of 462 bytes from here, and also had to average out the satoshis/byte groups. Despite multiple data sources and rounding, I was happy with how closely these calculations tallied with the 24 hour stats at https://blockchain.info/stats:

Blocks Mined 150
Total Transaction Fees 118.84277985 BTC
No. of Transactions 313900

What does the data tell us?

It compares fee level groups in two ways:

  1. How much space they use
  2. What fees they generate

This shows how the fees generated and capacity used were distributed:

[Chart: Distribution of Satoshis per byte]

A fairly expected distribution, no odd outliers.

The cumulative BTC column in the table above shows us that the median economic fee is around 90 satoshis/byte.

What’s the idea?

I wondered if we could use this data to generate a threshold to calculate whether the network is ready for a block size increase.

Let’s say we need blocks to be considered 75% full for miners to activate an increase.

If we did just that, there would currently be no cost to miners filling every block with their own transactions to force an increase, as they get all the fees reimbursed to themselves.

What if, for threshold purposes, we discounted all transactions with a fee below half the economic median? That would equate to around 45 satoshis/byte.

From the table we can see that around 95% of transactions are above this level. Since the blocks are consistently over 95% full at the moment and we’re counting 95% of the transactions toward the threshold, 0.95 * 0.95 gives us a figure of over 90% – well above the required level.

How could we stop miner manipulation?

Well, technically we can’t. We can disincentivise manipulation by making it expensive, though.

I’ve taken the original figures and reduced them by 25% to simulate less full blocks. I’ve then filled this freed up space with the cheapest but non-free transactions. This shows what it would look like if miners themselves tried to fill blocks with cheap transactions to activate a block size increase.

[Chart: Manipulated Distribution of Satoshis per byte]

As you can see the manipulation is obvious but the protocol cannot see charts. We need to calculate how full the blocks would be considered for activation purposes in this scenario.

Since 25% of transactions are in the 1-10 satoshis/byte group they are discounted from the calculation.

In fact only around 72% of the transactions are above the required 45 satoshis/byte. 0.72 * 0.95 gives 68.4% full blocks for activation purposes – short of the 75% threshold.
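The scoring logic can be expressed as a tiny hypothetical function; the 75% threshold and the 0.95/0.72 figures are the worked examples used above, not fixed protocol constants:

```python
# Hypothetical activation score: fraction of transactions above the fee floor,
# multiplied by how full blocks are. Figures are the article's worked examples.
def activation_score(frac_above_floor, frac_blocks_full):
    return frac_above_floor * frac_blocks_full

THRESHOLD = 0.75

normal = activation_score(0.95, 0.95)       # organic demand
print(round(normal, 4))       # 0.9025 -> above the 75% threshold

manipulated = activation_score(0.72, 0.95)  # miner-stuffed cheap transactions
print(round(manipulated, 3))  # 0.684 -> below the 75% threshold
```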

Getting to the point

There are benefits to allowing miners to signal for block size increases. It’s quicker and less risky than hard forking each time for a start.

The problem of miners accepting fees out of band to avoid paying to signal for an increase can be completely mitigated – if miners do not vote for a block size increase they can keep 100% of their transaction fees as normal.

If there is consensus, the transaction fees miners pass on to the next block will average out. Only miners who consistently signal for a block size increase against consensus will incur an economic cost.

Signalling to increase the block size while simultaneously receiving fees out of band is counter-productive, as those transactions will not count towards reaching the activation threshold.

While it may superficially appear so, zero and low fee transactions are not punished under this implementation. They are effectively just factors in a scoring system when miners are suggesting additional block space is required. Remember that miners currently cannot signal for an increase at all, so this does not alter any fundamentals or discourage low fee transactions when there is capacity.

Thank you for reading, let me know any thoughts in the comments!

By 2017, Bitcoin had calculated more hashes than there are stars in the observable universe… by this incredible multiplier

I’ve sadly not had time to blog as much as I would like recently, but I thought I would share with you all some interesting facts about the number of hashes performed by the Bitcoin network by the end of 2016.


I built a spreadsheet with the difficulty for every single day the Bitcoin network has been in operation up to 31st December 2016.

I used the following formula:

=SUM(DIFFICULTY * POWER(2,32) / 600)

This gave the hashes per second for each day’s Bitcoin difficulty level.

I then multiplied this by 86400 seconds to calculate the total hashes for each day, and added all 7 years together to give the total of hashes calculated by the Bitcoin network.
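The same calculation in Python, with an illustrative stub in place of the real 2009–2016 daily difficulty series:

```python
# The spreadsheet formula translated to Python: at a given difficulty the
# network performs difficulty * 2^32 / 600 hashes per second on average,
# and each day contributes 86,400 seconds. The difficulty list below is a
# placeholder stub, NOT the real daily series from the spreadsheet.
def total_hashes(daily_difficulties):
    return sum(d * 2**32 / 600 * 86400 for d in daily_difficulties)

# e.g. a year at difficulty 1 (roughly Bitcoin's earliest days)
print(f"{total_hashes([1.0] * 365):.3e} hashes")
```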

For an explanation of hashing, see my previous article.

So, how many were there?

By the end of 2016 the Bitcoin network had cumulatively calculated around 6.27683E+25 hashes.

That’s 62,768,300,000,000,000,000,000,000 hashes.

For perspective, a group of researchers estimated the number of grains of sand in the world – every grain of every desert and beach – at 7.5E+18 grains of sand. That’s seven quintillion, five hundred quadrillion.

This means Bitcoin has already calculated 8,369,107x more hashes than there are grains of sand on planet Earth, and currently breezes through that amount again roughly every three seconds.

When people say that Bitcoins are “based off of nothing” they don’t understand that each Bitcoin is generated at huge computational and electrical cost. A hash of sufficient difficulty is an exceptionally scarce resource.

Bitcoin is secured by a beautiful combination of mathematics and the laws of physics: hashing cannot be cheated.

While altcoins can easily copy Bitcoin’s design, they can’t even begin to come close to its 7 years of accumulated hashes. The hashrate is what makes Bitcoin untouchably secure, as attacking the network requires sustained control over enough resources to generate 51%+ of the network’s hashrate. Even ignoring the vast hardware requirements, doing so would require an electrical supply sufficient to power a small country. The Bitcoin network has long performed more PetaFLOPS than the top 500 supercomputers on the planet combined.

With the transition toward specialised mining hardware, even if an attacker managed to build a botnet of hacked laptops capable of a generous 30MH/s average hashrate, they would need to hack over 75 billion laptops to mount a 51% attack. That’s over 10 hacked laptops for every person on the planet.

Breaking it down by Bitcoin

At midnight on 31st December 2016, the number of Bitcoins in existence totalled 16,077,350.

That means every single Bitcoin, on average, was produced by 3.9041447e+18 hashes.

That’s 3,904,144,700,000,000,000 hashes.

Considering each Bitcoin is divisible into 100 million satoshis, each of these smallest units of Bitcoin is on average the product of 39,041,447,000 hashes.

That’s 39 billion hashes per 0.00000001 of a Bitcoin.

There are estimated to be 1 billion trillion stars in the observable universe. Watch this video to get a sense of perspective, then consider that the Bitcoin network has calculated more hashes than this by a multiplier of  62,768.

That’s right, the Bitcoin network has already calculated over 62,768x more hashes than there are stars in the known universe! Just wow.
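The multipliers above can be checked directly from the article’s own figures:

```python
# Checking the comparisons, using the article's figures.
total_hashes = 6.27683e25   # cumulative hashes by end of 2016
grains_of_sand = 7.5e18     # estimated grains of sand on Earth
stars = 1e21                # one billion trillion stars

print(round(total_hashes / grains_of_sand))  # 8369107x grains of sand
print(round(total_hashes / stars))           # 62768x stars
```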

*All calculations correct to the best of my knowledge, but I wrote this article in a rush and there may be errors! Please let me know if you spot any and I will correct them.

Angel: scaling Bitcoin through an Internet of Blockchains (IoB)

Bitcoin needs to scale and there are many contradicting ideas on how to achieve this.

Sidechains are an inevitable solution. They allow Bitcoins to be transferred from the main blockchain into external blockchains, of which there can be any number with radically different approaches.

In current thinking I have encountered, sidechains are isolated from each other. To move Bitcoin between them would involve a slow transfer back to the mainchain, and then out again to a different sidechain.

Angel is a protocol for addressable blockchains, all using a shared proof of work. It aims to be the Internet of Blockchains (IoB).

Instead of transferring Bitcoin into individual sidechains, you move them into Angel. The Angel blockchain sits at the top of a tree of blockchains, each of which can have radically different properties, but are all able to transfer Bitcoin and data between each other using a standardised protocol.

Each blockchain has its own address, much like an IP address. The Angel blockchain acts as a registrar, a public record of every blockchain and its properties. Creating a blockchain is as simple as getting a createBlockchain transaction included in an Angel block, with details of parameters such as block creation time, block size limit, etc.

Mining in Angel uses a standardised format, creating hashes which allow all different blockchains to contribute to the same Angel proof of work. Miners must hash the address of the blockchain they are mining, and if they mine a hash of sufficient difficulty for that blockchain they are able to create a block.

Blockchains can have child blockchains, so a child of Angel might have the address aa9, and a child of aa9 might have the address aa9:e4d. The lower down the tree you go, the lower the security, but the lower the transaction fees. If a miner on a lower level produces a hash of sufficient difficulty, they can use it on any parents, up to and including the Angel blockchain, and claim fees on each.
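A sketch of how one hash might be claimed up the tree. The addresses, targets and the `claimable` helper are hypothetical illustrations of the idea, treating a lower hash value as more work (as in Bitcoin):

```python
# Sketch of shared proof of work in an Angel-style tree: a hash found while
# mining a child chain can be claimed on any ancestor whose difficulty target
# it also meets. Addresses and targets here are hypothetical illustrations.
def ancestors(address):
    # "aa9:e4d" -> ["aa9"]; the root Angel chain is handled separately
    parts = address.split(":")
    return [":".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

def claimable(address, hash_value, targets, angel_target):
    chains = [address] + ancestors(address) + ["angel"]
    lookup = dict(targets, angel=angel_target)
    # a lower hash value means more work; a chain is claimable if its target is met
    return [c for c in chains if hash_value < lookup[c]]

targets = {"aa9": 1_000, "aa9:e4d": 100_000}   # the child chain is easier
print(claimable("aa9:e4d", 50_000, targets, angel_target=10))
# ['aa9:e4d']: good enough for the child only
print(claimable("aa9:e4d", 500, targets, angel_target=10))
# ['aa9:e4d', 'aa9']: can also claim a block on the parent
```

So a sufficiently lucky hash found on aa9:e4d could claim fees on aa9 too, and an exceptionally lucky one on Angel itself.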

There are so many conflicting visions for how to scale Bitcoin. Angel allows the free market to decide which approaches are successful, and for complementary blockchains with different use cases, such as privacy, or high transaction volume, to more seamlessly exist alongside each other.

I wrote this as a TLDR summary for a (still evolving) idea I had on the best approach to scale Bitcoin infinitely, for more detail please check out my previous article.

photo credit: Colonelbogey71 Angel of the North via photopin (license)

Introducing Buzz: a turing complete concept for scaling Bitcoin to infinity and beyond

In this article I will outline an idea I have had for an infinitely scalable, Turing complete, layer 2 scaling concept for Bitcoin. I’ll call it Buzz.

Buzz is influenced by a number of ideas, in particular Ethereum’s VM, sharding, tree chains, weak blocks and merge mining. I’ll be discussing Bitcoin, but the Buzz scaling concept could be implemented on any PoW blockchain, and without Turing completeness.

How does it work?

Buzz is merge mined with Bitcoin and has its own blockchain called Angel which serves as a gateway (through two way peg) between the two different systems. In addition to merge mined blocks, Angel has a second (lower) difficulty which enables more frequent block creation, say every 30 seconds (weak blocks).

Buzz lifts its Turing completeness from Ethereum, taking much of its development but with a few adaptations to enable infinite scaling.

Ethereum’s current plan for scaling is proof of stake with 80 separate shards. All shards communicate through a single master shard. Each shard has up to 120 validators, who are randomly allocated to a shard and vote with their stake to reach consensus on block creation.

Buzz is quite radically different in its approach. It depends upon PoW which is elegant, resilient and has no cap on the number of consensus participants or minimum stake requirements. It avoids a split into an arbitrary number of shards with an arbitrary number of validators, and where all shards are homogenised and lacking diversity, such as different block creation times for different use cases.

In Buzz, each shard is called a wing and has its own blockchain. There is no cap on the number of wings, anybody can create one. Instead of making design decisions, the free market decides whether a wing will succeed or not. If there’s a wing with high transaction volume and high transaction fees, it will attract a higher number of miners and a higher level of security.

Buzz is basically a blockchain tree, since each wing can have multiple child wings, and the Angel sits at the top of the tree overseeing everything and communicating with the ‘other world’ that is the main Bitcoin blockchain.

The higher up the tree you go, the higher the hashrate will be, since a wing’s hashrate equals the hashing power of all its descendant wings combined with the mining on the wing itself. Data and coins can be transferred up and down the tree between parent and child wings. How frequently this can occur depends on the difference in hashrate between the child and parent. Nodes can operate and mine on any wing, but are required to also maintain the blockchains of any parent wings, up to and including the Angel.

The process of creating a wing is as trivial as getting a create wing transaction included in an Angel block. As well as being a gateway, Angel also serves as a registry, a little like DNS, keeping track of the properties of all wings.

If you want to create a wing with a block size of 1000MB and creation time of 1 second, that’s no problem. The market will probably decide your blockchain isn’t viable and you’ll be the only node on it.

Each wing will have its own difficulty level. If it was created with a 5 minute block target, its difficulty will be determined through the same process existing blockchains use.

There are no new coins created in this system, the incentive to mine comes from transaction fees.

The Angel has the hashing power of the entire Buzz network. Transactions here are the most secure, but also the most expensive. Angel itself has a limited functionality and conservative block size since this is the only blockchain that every full node is required to process.

How are wings addressed?

Every wing has its own unique wingspace address, a little bit like IP addresses and subnets.

For example, a wing that is a direct child (tier 1) of the Angel might have been registered with the wingspace address a9e. It might have a tier 2 child wing at the wingspace address a9e:33, which might have a tier 3 child wing at a9e:33:1a, and so on. If you’re familiar with subnets, this is like 255.0.0, 255.255.0 and 255.255.255.

There may be thousands of active and widely used wings, or there may only be a handful of wings which have a huge transaction volume. The free market will decide the tradeoff between hardware requirements, hash rate and transaction fees.

There may be geographically focused wings to benefit from lower latency, and micropayment wings which become the de facto standard for day to day use in a particular country. Travelling may involve moving some coins to participate on the wing where local transactions take place. Such routing could be automated and seamless to the end user.

Instead of trying to second guess a one size fits all solution for blockchain scaling, Buzz takes inspiration from the approach taken by the internet, where an IP address could represent anything from a server farm to a Raspberry Pi, depending on the use requirements. The idea is to just create a solid protocol which enables the consistent transfer of coins and data across the system.

Different types of activity can take place in wings more suited to their use case. The needs of the network now may be completely different in the future, and allowing all wings to have a number of definable parameters and to be optimised for particular use cases (such as storage, or gambling) allows the network to evolve over time.

How are blocks mined?

Miners must synchronise the blockchain of the wing they are mining, and all parent wings up to and including Angel. Nodes can fully synchronise and validate as many blockchains as they like, or operate as light clients for ones they use less frequently.

In order to mine, hashes and blocks are created using a different method from Bitcoin’s.

The data that is included to generate a hash must also include the wingspace address and a public key.

Instead of hashing a Merkle root to bind transaction data to a block, a public key is hashed. Once a hash of sufficient difficulty to create a block has been found, the transaction data is added, and the block contents are signed by the private key that corresponds to the public key used to generate the hash.

By signing rather than hashing blockchain specific data, we enable a single hash to be used in multiple wings (as it is not tied to a particular wing’s transactions), as long as it meets the difficulty requirement for each parent.

Since transaction data is not committed to the hash as in Bitcoin (where it is as a result of hashing the Merkle root), there needs to be a disincentive to publishing multiple versions of a block using the same hash signed multiple times. This can be achieved by allowing miners to create punishment transactions which include signed block headers for an already redeemed hash. Doing so means that miner gets to claim the associated fees, and the miner who published multiple versions of a block is punished by losing the reward.
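A toy version of this mining scheme, to make the commitment structure concrete. The wingspace address, key string and 16-bit difficulty are illustrative, and the signing step is omitted; the point is that the proof of work commits to the wingspace and the miner’s public key rather than to the transactions:

```python
import hashlib

# Toy Buzz-style mining sketch: the proof of work commits to the wingspace
# address and the miner's public key, but NOT to the transactions (those would
# be bound afterwards by a signature, omitted here). The address, key and
# 16-bit difficulty are illustrative, not protocol values.
def mine(wingspace, pubkey_hex, prev_hash, difficulty_bits=16, max_tries=1_000_000):
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_tries):
        header = f"{wingspace}|{pubkey_hex}|{prev_hash}|{nonce}".encode()
        digest = hashlib.sha256(header).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None  # no solution found within max_tries

result = mine("a9e:33:1a", "deadbeef", "00" * 32)
print(result is not None)
```

Because the transactions are bound by a signature rather than by the hash itself, the punishment transactions described above are what deter a miner from signing two conflicting blocks over the same hash.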

When generating a hash, miners must include the previous block hash and block number for all tiers of wings they are mining on. This will allow all parent wings to have a picture of the number of child wings and their hash power.

Hashing in the wingspace a9e:33:1a means that if a hash of sufficient difficulty was found, the miner could use it to create a block in the wings a9e:33:1a, a9e:33 and a9e. If the difficulty was high enough to create a block in the Angel, that wingspace will effectively ‘check in’ with Angel, providing useful data so its current hash rate can be determined and giving an overview of the health of wings. If a wing has not mined a block of Angel level difficulty in x amount of time, the network might consider the wing to have ‘died’.

If you had 30 second blocks in the Angel, over 1 million a year, that means even a wing with just 1 millionth (0.0001%) of the network hashing power should be able to ‘check in’ annually.
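The check-in arithmetic:

```python
# Angel blocks per year at a 30-second interval, and the expected annual
# check-ins for a wing with one millionth of the network's hash power.
blocks_per_year = 365 * 24 * 60 * 60 // 30
expected_checkins = blocks_per_year / 1_000_000

print(blocks_per_year)              # 1051200, i.e. over 1 million
print(round(expected_checkins, 2))  # 1.05 check-ins per year
```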

It is likely that this check in data will enable miners to identify which wings are the most profitable to mine on, and the network will dynamically distribute hash power accordingly. There will be less need to mine as a pool, since there will be many wingspaces to mine in, which should enable even the smallest hashrate to create blocks and earn fees. Miners can mine in multiple wingspaces at the same time with a simple round robin of their hash power.

Creating and modifying wings

When a create wing transaction is included in an Angel block, the user can specify a number of characteristics for the wing, such as:

  • Wingspace address
  • Wing title and description
  • Initial difficulty
  • Block creation time
  • Difficulty readjustment rules
  • Block size limit
  • Permitted Opcodes
  • Permitted miners

Hashing a public key and signing blocks with the corresponding private key allows us to do something else a little bit different: permissioned wings.

Most wings will be permissionless, as blocks can be mined by anybody. However, let’s say a casino or MMOG wants to create its own wing. It might want to do this to gain properties such as a faster block creation time, and to avoid transaction fees by processing its users’ transactions for free, since it is the only permitted miner.

By only allowing blocks signed by approved keys, permissioned wings cannot be 51% attacked, and could even mine at a difficulty so low that blocks can be hashed by a CPU, while retaining complete security. Users will recognise that in transferring coins into a permissioned wing, there is a risk that withdrawal transactions will be ignored, though since their coins would not be spendable by anyone else there is little incentive to do so. It is up to them to decide whether the benefits outweigh the risks.

The property of permissioned block creation could be used for new wings which are vulnerable to 51% attacks due to a low hashrate. Permitted miners could be added until the wing was thought to have matured to a point where it is more resilient, and the permission requirement could be removed.

Angel transactions can also be created to modify wing parameters – say, changing to a different block creation interval at block x in the future. The key used to create a wing can be used to sign these modification transactions. Once a wing has matured, the creator can sign a transaction that waives their right to alter the wing any further, and its attributes become permanent.

How is this incentivised?

Transactions have fees. By creating blocks, miners claim those fees. If you mine a hash of sufficient difficulty to create blocks all the way up to the Angel blockchain, you collect the fees on every level for each block you created.

There exists the possibility to add a vig, say 20% of transaction fees, to be pooled and passed up to the parent wing. These fees would gradually work their way up to the Angel blockchain, and could be claimed as a reward for merge mined blocks with the main Bitcoin blockchain.
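
To sketch how such a vig might compound (assuming the 20% pooled share is itself subject to the vig at every level above it, which is one possible reading of the idea):

```python
def fees_reaching_angel(fees: float, depth: int, vig: float = 0.20) -> float:
    """If each wing passes a `vig` share of the fees it receives up to its
    parent, a wing `depth` levels below the Angel contributes
    fees * vig ** depth to the Angel. Purely illustrative numbers."""
    return fees * vig ** depth

# A wing three levels deep (like a9e:33:1a) collecting 100 coins in fees
# would pass roughly 0.8 coins all the way up to the Angel:
print(fees_reaching_angel(100, 3))
```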

Other ideas

Data and coins can be passed up and down the wings using the same mechanism Ethereum is planning with its one parent shard and 80 children topology. However there is no mechanism to pass data sideways between sibling wings, as sibling wings are not aware of each other.

I wonder, however, if wings could be created to accept hashes from multiple block spaces. For example, wing BB might also accept hashes, at say a 10 minute difficulty, from the wingspaces for AA and CC.

It would be possible to calculate the required difficulty level because all parent wings up to the Angel must be synchronised, and information about sibling wings will be included in block headers on the parent wing. This potentially creates a mechanism where wings can pass data between each other more efficiently and at lower cost. I suspect there may be technical limitations with this system, however, in particular for transferring coins: as far as the parent is concerned, coins transferred directly from BB to CC would seem to have appeared out of thin air when passed back from CC to the parent, because the parent wing cannot see the activities of its children.

Thoughts on Proof of Work

Wings cannot be merge mined with the main Bitcoin network unless it hard forks to accept hashes in the Buzz format.

This hard fork would enable the PoW to be shared, and offer increased security for both systems. Maintaining a separate proof of work between the systems, however, presents the opportunity to diversify the options available, such as Scrypt, SHA-256, X11 and Ethash.

If you wanted 30 second blocks on the Angel with four proof of work methods, you could give each method its own difficulty targeting a 120 second block time.

When creating a wing, particular proofs of work could then be specified for use on that wing, such as 20% Ethash and 80% Scrypt. This would open up PoW methods to the free market too.
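
Since block rates simply add together, each method’s difficulty target follows from the desired overall block time and that method’s share of blocks. A minimal sketch of the arithmetic (the function is illustrative, not part of the proposal):

```python
def per_method_target(overall_seconds: float, percent_share: int) -> float:
    """A method that should produce percent_share% of blocks must
    individually target a block time of overall_seconds * 100 / percent_share,
    because the block rates of the methods add together."""
    return overall_seconds * 100 / percent_share

# Four equally weighted methods on a 30 second Angel: each targets 120s.
print(per_method_target(30, 25))
# The 20% Ethash / 80% Scrypt wing: Ethash targets 150s, Scrypt 37.5s.
print(per_method_target(30, 20), per_method_target(30, 80))
```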


  • Buzz is merge mined and distinct from the main Bitcoin blockchain.
  • Recognises that there are infinite potential use cases for blockchains, with varying design requirements.
  • It is impossible to expect every node to process every transaction. There needs to be segmentation.
  • Attempting a one size fits all approach to scaling will lead to suboptimal and restrictive designs for many use cases.
  • Buzz creates a system where different segments with different properties can exist side by side and transfer coins and data, facilitating a free market where the best blockchains can thrive.

Thanks for reading through my idea – I hope the swirl of ideas buzzing around my head makes sense now it has been converted into words. For any questions, thoughts or criticisms, please head to the comments.

What on earth is a Merkle tree? Part 2: I get more technical, but hopefully all becomes clearer

I previously wrote about understanding what Merkle trees are. If you haven’t read it, go and do so now.

I tried to keep it non-technical, and a keen observer would point out that the article better explained the benefits of hashing than those of Merkle trees. I was trying to explain why Bitcoin benefits from Merkle trees, rather than how they actually work.

In my previous article, the gist was that hashing allows you to verify that large quantities of data haven’t been changed using a hash, a much smaller amount of data. Merkle trees basically allow you to verify that a particular piece of data was present and hasn’t been manipulated, using only a small number of proofs rather than having to download all the data to check for yourself.

This time, I’m going to have another go at explaining Merkle trees, with the assistance of something we can all relate to… colours. I’m going to create what I’ve called a Merkcolour tree (see what I did there).

A Bitcoin transaction hash is the unique identifier for each transaction. In a block there is as much as 1MB worth of transaction data. You can hash all the transactions in a block into a single 256-bit hash. However, in order to prove that a transaction existed in a block, you would have to download all the data used to create the hash, generate your own hash to check it is accurate, and then check the transaction you wanted to verify was present in the data.

This makes it possible to store a copy of the 256-bit block hashes without having to store all the transaction data (up to 1MB) for each block. The downside is that, in order to check a transaction is present, you have to download all the transaction data rather than the small number of proofs that Merkle trees enable, which is much quicker and more efficient.

So how do Merkle trees work?

Well, the unique Bitcoin transaction id, which is actually a hash created by hashing the transaction information together, looks like this:

And if you combine (concatenate) two transaction ids back to back:
cf92a7990dbae2a503184d6926be77fc85e9a9275f4222064ee78eeb251d36b2d8f4744017dc79f8df24e2dba7fd28e5fd148c3b01b5f76dede8ef3ac4e5c340

That combined data can be hashed together (using the SHA-256 method) into the following hash:

Instead of 512 bits of data for both transactions, the hash is 256 bits of data.
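
This combine-then-hash step can be sketched in a few lines of Python. (Note this is a simplification: Bitcoin actually applies SHA-256 twice to the raw little-endian bytes, but the 512-bits-in, 256-bits-out compression is the same idea. The example values here are made up.)

```python
import hashlib

def combine(hash_a: str, hash_b: str) -> str:
    """Concatenate two 256-bit hex hashes and SHA-256 the 512-bit result."""
    return hashlib.sha256(bytes.fromhex(hash_a + hash_b)).hexdigest()

a = "aa" * 32  # made-up 256-bit values standing in for real txids
b = "bb" * 32
parent = combine(a, b)
print(len(parent) * 4)  # 256 bits, half the size of the combined input
```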

Now let’s adapt this logic to colours. Each colour is represented in the same hexadecimal format as a hash; it’s just that a 256-bit hash is more than ten times longer than a 24-bit colour.

As an example, red is #ff0000, blue is #0000ff.

If we combine those colours together, averaging each channel, we get #800080.

Instead of 48 bits of data for both colours, the combined colour is 24 bits of data.

A Merkle tree is basically a process by which pairs of hashes are merged together, until you end up with just one: the root. This is best demonstrated with colours in the image below.

Merkcolour tree

In the image, we start with 16 different colours (labelled A to P) – these are the leaves. Each colour has been paired with a neighbour and combined together to create a branch. This process is repeated as many times as necessary until you end up with one final colour – the root.

Now, the colour (leaf) we’ve labelled I in the diagram is #ff0000.

If, instead of creating a tree, I simply hashed all the colours together the outcome would be as follows:
#000064#007777#007700#777777 #1f1fff#5cffff#47ff48#ffffff #ff0000#770000#f07000#614600 #ff21b5#ff1f1f#ffff00#d7c880

Hashes (using the MD5 method) to:

If I provided you with that hash, and told you it included the colour #ff0000, the only way I could prove it is by sending you all 16 colours in that order (384 bits of data, excluding the #s) so you can generate and confirm the accuracy of the hash for yourself.

However, because we’ve created a tree, if you know the root is #8e7560 (a product of all leaves – ABCDEFGHIJKLMNOP), we can confirm that #ff0000 (I) was included using only 4 proofs:

  1. #489194 (ABCDEFGH)
  2. #f58255 (MNOP)
  3. #a95b00 (KL)
  4. #770000 (J)

Let’s start at the top and work our way down to the root:

#ff0000 (I) (we want to confirm) when combined with 4) #770000 (J) gives:
#bb0000 (IJ) which combined with 3) #a95b00 (KL) gives:
#b22e00 (IJKL) which combined with 2) #f58255 (MNOP) gives:
#d4582b (IJKLMNOP) which combined with 1) #489194 (ABCDEFGH) gives:
#8e7560 – which is the correct root!

If we ended up with any other value than the root, we know some of the data we have been given is inaccurate and cannot be trusted.
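
The walkthrough above can be checked mechanically. A minimal sketch, assuming colours combine by averaging each channel and rounding halves up (the convention that reproduces the figures in this article):

```python
def mix(c1: str, c2: str) -> str:
    """Average two #rrggbb colours channel by channel, rounding .5 up."""
    a, b = (int(c.lstrip("#"), 16) for c in (c1, c2))
    out = [((((a >> s) & 0xff) + ((b >> s) & 0xff) + 1) // 2) for s in (16, 8, 0)]
    return "#" + "".join(f"{v:02x}" for v in out)

def verify(leaf: str, proofs: list[str], root: str) -> bool:
    """Fold the leaf into each proof in turn; matching the root proves inclusion."""
    value = leaf
    for proof in proofs:
        value = mix(value, proof)
    return value == root

# Leaf I with the four proofs from the text reproduces the root:
print(verify("#ff0000", ["#770000", "#a95b00", "#f58255", "#489194"], "#8e7560"))  # True
```

With real hashes the left/right position of each proof would matter when concatenating; averaging colours is order-independent, so we can ignore that detail here.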

This means that instead of hashing all the colours together and needing to download 384 bits of data to confirm its accuracy, we are able to download just 4 proofs, or 96 bits of data.

The efficiency gets even bigger the more leaves you have: each time you double the data (the number of leaves), you only add one additional level to the tree, which is one extra 24-bit proof for colours, or one extra 256-bit proof for hashes.

For example, here’s how much proof data is required to verify the following:
32 colours (768 bits): 5 proofs (120 bits, 84% efficiency)
64 colours (1536 bits): 6 proofs (144 bits, 91% efficiency)
128 colours (3072 bits): 7 proofs (168 bits, 95% efficiency)
256 colours (6144 bits): 8 proofs (192 bits, 97% efficiency)
512 colours (12288 bits): 9 proofs (216 bits, 98% efficiency)
1024 colours (24576 bits): 10 proofs (240 bits, 99% efficiency)
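
Those figures fall straight out of the maths: a tree with n leaves needs log2(n) proofs. A quick sketch that reproduces the list above:

```python
import math

def proof_stats(leaves: int, unit_bits: int = 24):
    """Proof count, proof size and saving for a tree of `leaves` leaves,
    each `unit_bits` wide (24 for colours, 256 for real hashes)."""
    proofs = int(math.log2(leaves))  # one proof per level of the tree
    proof_bits = proofs * unit_bits
    efficiency = 1 - proof_bits / (leaves * unit_bits)
    return proofs, proof_bits, efficiency

for n in (32, 64, 128, 256, 512, 1024):
    proofs, bits, eff = proof_stats(n)
    print(f"{n} colours: {proofs} proofs ({bits} bits, {eff:.0%} efficiency)")
```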

A key distinction between hashes and colours is that hashes are one-way and unpredictable: you cannot work out which two hashes were combined to create a hash. The opposite is true for colours – if you know a colour, it is possible to work out exactly which combinations of colours could have created it. The Merkcolour tree is only useful for visually demonstrating the concept; if you could reverse engineer a hash the way you can a colour, Merkle trees would not be reliable.

Hopefully this makes sense! The key take-home message is that instead of having to download an entire 1MB block to confirm that a transaction was in it, you’re able to download a small number of proofs for validation. This makes it a lot easier to verify transactions without having to keep a copy of the blockchain or download large quantities of data, for example on devices such as smartphones where this is less practical.

Merkle trees make the process of verifying data hugely efficient, and while Bitcoin could exist without them, it would require an awful lot more resources such as processing, bandwidth and storage – and running clients on mobile devices would be far less viable.

Thank you Ralph Merkle.

Unintended consequences: Could proof of stake just become no proof of work?

Bitcoin operates through a process known as proof of work (PoW). In order to determine which network participant gets to create the next block (and claim a reward), the process requires the contribution of computer processing power. The more processing (work) you perform, the more likely you are to be rewarded with Bitcoins.

Running this hardware is very expensive; the Bitcoin network is already said to consume as much electricity as the entire country of Ireland.

Satoshi Nakamoto’s vision when he created Bitcoin was that everybody would mine Bitcoin on their computers, all around the world, and that this would decentralise the network.

Unfortunately, CPUs are incredibly inefficient miners. A decent laptop might manage around 14MH/s. A specially designed (ASIC based) AntMiner S9 can achieve 14TH/s – that’s 1,000,000x faster.

Nakamoto could not have foreseen the rise of ASICs when he wrote the Bitcoin white paper. Consequently, instead of being distributed around the world, Bitcoin mining has faced huge centralising pressure. The number of people required to control Bitcoin can fit around one table. Centralisation provides self-perpetuating benefits of easier access to the best hardware and cheapest electricity, though once ASIC chips bump up against Moore’s Law there’s good reason to believe we will see a shift back towards decentralisation.

The holy grail of cryptocurrency would be the security of proof of work, but without the cost and centralisation. I first read about proof of stake (PoS) a number of years ago and, seduced by the idea, immediately invested in PeerCoin, the first cryptocurrency to implement it.

So what is proof of stake?

PoW uses expensive and ‘wasteful’ electricity to try and calculate a hash of sufficient difficulty for the network to accept – enabling that participant to create a new block.

PoS works the other way around. There are a number of proposals, but the basic principle is that each participant can ‘stake’ their coins to create a kernel (a type of hash). The bigger the stake, the bigger the chance their kernel will ‘match’. Match what? Well, the blockchain itself generates a random and unpredictable seed based on the data in the preceding blocks (also by hashing), and the closest matching kernel gets permission to create the next block, and is rewarded for doing so.

As there is no requirement to dedicate masses of computer processing power, everybody can run the software on their own machine without the expense and hardware requirements of PoW.

Sounds great, doesn’t it? Well, as with the unintended consequences of PoW, let’s try and foresee how the PoS landscape might evolve.

Under PoW we have seen the rise of pooled mining. Pooled mining has been wildly popular because it makes mining income more predictable.

Think of PoW like a lottery. The more processing power you contribute, the more tickets you get. In Bitcoin there is just one winner every 10 minutes.

If the current Bitcoin difficulty didn’t increase, even with the most efficient miner (the 14TH/s AntMiner S9) you’d have to enter this lottery for over 2 years on average to win just once.

If you join a pool that has 25% of the hashing power (lottery tickets), then you can expect that pool to win once every 40 minutes on average, and you can then regularly collect your share of the winnings. This is preferable to running your hardware for years in the uncertain (unlikely) hope of winning the jackpot. Pooled mining in PoW offers no benefit other than making your income more predictable.
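
The expected-interval arithmetic is simple enough to sketch:

```python
def expected_win_minutes(block_minutes: float, share: float) -> float:
    """Average minutes between wins for a miner or pool holding `share`
    of the total hash power (i.e. that fraction of the lottery tickets)."""
    return block_minutes / share

print(expected_win_minutes(10, 0.25))    # a 25% pool: one win every 40 minutes
print(expected_win_minutes(10, 0.0001))  # a tiny solo miner: ~69 days between wins
```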

Would the same be true of PoS?

There has been testing in PoS experiments that has brought the block creation time down to 3 seconds per block. This means that instead of Bitcoin’s 52,560 lottery winners per year, you could have over 10 million winners each year. This would certainly reduce, though not eliminate, the appeal of mining pools.

However, in cryptoeconomics we must assume that each participant will always act in their own self interest. Could there be other benefits from PoS pooled mining that are not present in PoW?

In digital security, randomness is very valuable. In PoW the randomness that selects the next block is generated by an external source – all that hardware calculating trillions of random hashes. In PoS this necessary randomness does not come from an external source, it can only come directly from the blockchain itself.

This means a seed generated from previous blocks is used to determine which participant will create the next block.

There are two different data sources you can hash for this randomness. If you included all the contents of the block to generate a hash, this would be a disaster, since there are infinite combinations of block contents. If it was an individual’s turn to create the next block and they had sufficient hardware they would just crunch as many combinations of block contents as possible and hopefully find one that generates a seed matching a kernel they control, allowing them to create the next block and repeat the process again.

This ‘stake grinding’ wouldn’t represent a shift away from proof of work, it would just mean work has taken place but without any proof or transparency.

An alternative option is to only hash header information which cannot be manipulated, such as the block creator’s signature. A potential issue here is that pooling together could provide a competitive advantage.

Imagine you’re in a pool with 30% of the staked coins. This should mean that your pool creates 30% of the new blocks. However, let’s imagine an instance where the two closest matches to the seed that determines the next block are both pool members. Suppose the closest match signing the block would produce a hash that allows the next block to be created by a non-pool member, whereas the 2nd closest match signing it would allow the next block to be created by another pool member. With sufficient hardware, the pool could rapidly calculate the best combination of block creators to maximise revenue for the pool.

You can try to mitigate this risk by punishing participants for not creating a block when it’s their turn, but getting the economic balance right, so as not to overly punish people with less reliable Internet connections for example (another centralising pressure), strikes me as an unenviable task.

Ultimately, if the pool has the size and hardware resources to crunch the numbers far enough ahead – it’s still going to game the system when it calculates a combination that will likely generate 10 consecutive blocks, compensating those members who lost out in the process for the greater benefit of the pool.

Such a system could actively incentivise centralisation. The bigger the pool, the greater the advantage. It could create a race to the bottom, since while everyone may recognise this centralisation as undesirable, they also must make an economic sacrifice to avoid participating in it.

Perhaps this centralisation pressure and obscuring of work would be an unintended consequence of PoS. All I know is, the more I study PoS and its goal to provide the security of PoW without the cost, the more a phrase from growing up in Yorkshire comes to mind… “you don’t get owt for nowt”. In other words: there’s no such thing as a free lunch.