Dynamic block size with economic safeguards – could this be the solution that we can all get behind?

One side of the block size debate wants to hand control of the block size over to the miners. Many fear such an implementation would cause catastrophic failures of consensus, and that miners could even be incentivised to bloat the block size at a rate that overly compromises Bitcoin's decentralisation.

Others are worried that scaling solutions such as Lightning Network and sidechains will take too long and not achieve sufficient gains, stifling Bitcoin’s network effect and preventing its continued exponential growth.

What if there were a way to simultaneously allow for exponential growth on chain if needed – buying time for layer two solutions to take some heat off the chain – while also creating an economic disincentive for miners trying to inflate the block size arbitrarily?

Such a solution should allow for an exponential increase in block size if miners were in consensus, but require that they face an economic risk when signalling for a block size increase where there is no consensus. Cryptoeconomics is built on incentive game theory, so why not introduce it here?

Allowing the block size to change dynamically with demand would reduce the risk of requiring additional contentious block size hard forks and hostile debate. I fear a simple 2MB increase would reignite the debate almost as soon as it was activated; we need to buy as much time as possible.

Any solution is going to be a compromise, but by allowing a few years of exponential growth with strict safeguards and appropriate economic incentives we can hopefully achieve that.

So how do we do it?

My basic idea is for miners to vote in each block to increase the block size.

Allowing for exponential growth would mean that the block size could double every year.

This would be achieved if each of the previous 2016 blocks voted to increase the block size by the maximum amount of 2.7%. An increase of 2.7% every 2 weeks (26 difficulty periods a year) compounds to an annual block size increase of 99.9% – effectively a doubling.

We only need to use 3 bits for miners to vote on block size:
000 = not voting
001 = vote no change
011 = vote decrease 2%
101 = vote increase 1.35%, pay 10% of transaction fees to next block
111 = vote increase 2.7%, pay 25% of transaction fees to next block

Not including any transactions in a block will waive a miner's right to vote.

Each block is a vote, and the block size change could be calculated by averaging out all the votes over 2016 blocks.

In order to achieve an increase in block size, the blocks must also have been sufficiently full to justify one. Transactions with no fee, and perhaps outliers far from the mean tx fee/kb, should not be counted when measuring how full blocks are.

By asking miners to pay a percentage of their transaction fees to the miner of the next block, you discourage miners from stuffing the blocks with transactions to artificially inflate the block size.

If miners are in unanimous agreement that the block size needs to increase, the fees would average out and all miners should still be equally rewarded. Only miners trying to increase the block size when consensus is not there would incur a cost.
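Below is a minimal sketch of how the tallying described above might work. The vote encoding follows the list above, but the function names, the treatment of non-votes and the exact averaging rule are my own illustrative assumptions rather than a specification.

```python
# Minimal sketch of the block size vote tallying described above.

# 3-bit vote field -> (block size delta, share of fees paid forward to the next block)
VOTES = {
    0b000: None,              # not voting (excluded from the average)
    0b001: (0.0,    0.00),    # vote no change
    0b011: (-0.02,  0.00),    # vote decrease 2%
    0b101: (0.0135, 0.10),    # vote increase 1.35%, forward 10% of fees
    0b111: (0.027,  0.25),    # vote increase 2.7%, forward 25% of fees
}

def retarget_block_size(current_limit, votes):
    """Average the voted deltas over one 2016-block period."""
    deltas = [VOTES[v][0] for v in votes if VOTES.get(v) is not None]
    if not deltas:
        return current_limit
    return current_limit * (1 + sum(deltas) / len(deltas))

# If every one of the 2016 blocks votes for the maximum 2.7% increase, the
# limit grows 2.7% per period; 26 periods a year compounds to roughly 2x.
limit = 1_000_000  # bytes
for _ in range(26):
    limit = retarget_block_size(limit, [0b111] * 2016)
print(round(limit))  # ~1,999,000 bytes, i.e. close to a doubling in a year
```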

There should be a limit on the maximum increase, perhaps 8MB. This isn't a permanent solution; it is just to create time for Bitcoin to progress, and then re-evaluate things further down the line. Combined with SegWit, this should strike a reasonable balance, satisfying those who are worried about missing out on exponential growth for a few years if LN and other solutions are not as fast or effective as hoped.

This is my rough idea for trying to find a compromise we can all get behind. Please let me know any thoughts or suggestions in the comments.

By 2017, Bitcoin had calculated more hashes than there are stars in the observable universe… by this incredible multiplier

I’ve sadly not had time to blog as much as I would like recently, but I thought I would share with you all some interesting facts about the number of hashes performed by the Bitcoin network by the end of 2016.

Method

I built a spreadsheet with the difficulty for every single day the Bitcoin network has been in operation up to 31st December 2016.

I used the following formula:

=SUM(DIFFICULTY * POWER(2,32) / 600)

This gave the hashes per second for each day’s Bitcoin difficulty level.

I then multiplied this by 86400 seconds to calculate the total hashes for each day, and added all 7 years together to give the total of hashes calculated by the Bitcoin network.

For an explanation of hashing, see my previous article.
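For anyone who wants to reproduce the figures, here is a rough Python equivalent of the spreadsheet method above. It assumes a list called daily_difficulty containing the network difficulty for each day of operation up to 31st December 2016 – hypothetical data you would need to load yourself.

```python
# Rough Python equivalent of the spreadsheet calculation described above.

SECONDS_PER_DAY = 86_400

def total_hashes(daily_difficulty):
    total = 0
    for difficulty in daily_difficulty:
        hashes_per_second = difficulty * 2**32 / 600  # expected H/s at that difficulty
        total += hashes_per_second * SECONDS_PER_DAY  # hashes for that whole day
    return total

# Example with made-up difficulty values, just to show the shape of the calculation:
print(f"{total_hashes([1.0, 1.0, 1.18]):.3e}")
```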

So, how many were there?

By the end of 2016 the Bitcoin network had cumulatively calculated around 6.27683E+25 hashes.

That’s 62,768,300,000,000,000,000,000,000 hashes.

For perspective, a group of researchers estimated the number of grains of sand in the world – every grain of every desert and beach – at 7.5E+18 grains of sand. That’s seven quintillion, five hundred quadrillion.

This means Bitcoin has already calculated 8,369,107x more hashes than there are grains of sand on planet Earth, and currently breezes through that amount again roughly every three seconds.

When people say that Bitcoins are “based off of nothing”, they don’t understand that each Bitcoin is generated at huge computational and electrical cost. A hash of sufficient difficulty is an exceptionally scarce resource.

Bitcoin is secured by a beautiful combination of mathematics and the laws of physics: hashing cannot be cheated.

While altcoins can easily copy Bitcoin’s design, they can’t even begin to come close to its 7 years of accumulated hashes. The hashrate is what makes Bitcoin untouchably secure, as attacking the network requires sustained control over enough resources to generate 51%+ of the network’s hashrate. Even ignoring the vast hardware requirements, doing so would require an electrical supply sufficient to power a small country. The Bitcoin network has long performed more PetaFLOPS than the top 500 supercomputers on the planet combined.

With the transition toward specialised mining hardware, even if an attacker managed to build a botnet of hacked laptops capable of a generous 30MH/s average hashrate, they would need to hack over 75 billion laptops to mount a 51% attack. That’s more than 10 hacked laptops for every person on the planet.

Breaking it down by Bitcoin

At midnight on 31st December 2016, the number of Bitcoins in existence totalled 16,077,350.

That means every single Bitcoin was, on average, produced by 3.9041447e+18 hashes.

That’s 3,904,144,700,000,000,000 hashes

Considering each Bitcoin is divisible into 100 million satoshis, each of the smallest units of Bitcoin is, on average, a product of 39,041,447,000 hashes.

That’s 39 billion hashes per 0.00000001 of a Bitcoin.

There are estimated to be 1 billion trillion stars in the observable universe. Watch this video to get a sense of perspective, then consider that the Bitcoin network has calculated more hashes than this by a multiplier of  62,768.

That’s right, the Bitcoin network has already calculated over 62,768x more hashes than there are stars in the known universe! Just wow.

*All calculations correct to the best of my knowledge, but I wrote this article in a rush and there may be errors! Please let me know if you spot any and I will correct them.

Why I’m massively in favour of a hard fork block size increase, and also massively against one

I made some reddit posts recently that have been interpreted as my being in favour of small blocks and not raising the block size limit.

This is not my position at all. I’m making the important case that Bitcoin cannot rely on on-chain scaling alone. Satoshi mentioned Moore’s law in the white paper, and those comments were very compelling; for my first few years following Bitcoin it seemed reasonable that Bitcoin could scale on-chain indefinitely.

Unfortunately global propagation is harder than it first seemed when blocks were tiny, and on-chain scaling is not as viable as first thought. Moore’s law alone is not our scaling saviour.

That said, I’m not opposed to hard forks to increase the block size – I think they are necessary. My concern is with hard forks being seen as an easy solution to scaling.

If I seem like more of a small blocker than I am it’s because I’m trying in my mind to balance out the community by pushing the small block cause. I want people to realise that on chain scaling has real implications and is not a long term solution.

I’m incredibly sympathetic to the argument that we need Bitcoin to be attractive, and low transaction fees are something that first attracted me to Bitcoin. However, we’ve also got to be careful of precedent.

The block size debate is more than technical – it is about the politics and future direction of bitcoin.

If we head in the wrong direction and become dependent upon bigger and bigger blocks, there is a genuine risk we embark on a slippery slope and slowly erode what makes Bitcoin special.

I’m not convinced anyone is using Bitcoin at the moment to buy coffee. I’m also sympathetic that we want to make Bitcoin accessible and that lower fees helps the poorest participate, but we need to be cautious.

Bitcoin’s decentralised nature is our democracy, and good democracy requires checks and balances. It might not feel like it at times, but the passionate debate and resistance over changing the status quo is giving us exactly that.

No matter what you think of your opponents, we’re all playing an important role in Bitcoin’s governance. There has never before been anything like it. Fierce debate over monetary policy has taken place behind closed doors throughout most of history, now we all get our say.

I am not opposed to a block size increase, I am opposed to a block size increase being easy. Not because I think bigger blocks will ruin Bitcoin, but because I think lots of block size increases would ruin Bitcoin.

We need to put up a fight against anything that could change what Bitcoin currently is. That doesn’t mean we shouldn’t ever change Bitcoin, but such changes should first stand up to immense scrutiny.

You might be massively in favour of increasing the block size, but you should also be thankful in the face of resistance. If Bitcoin ever becomes easy to change it becomes easy to break.

That’s why I’m simultaneously opposed to a block size increase while also being in favour of one.

Yes I’m a paradox, but I’m quite happy that way.

Bitcoin is under siege! We need to fight post-truth propaganda, and we need a plan B to reclaim Bitcoin if it is taken

We now have a completely divided community where people believe nonsense. A sizable minority have now been convinced that SegWit is dangerous and creates an insurmountable technical debt. These people generally have no development experience, and just blindly repeat misinformation despite the protests of those who do. The vitriol they have been fed is a contagion that is spreading, while others just want to block SegWit out of spite.

I recently tried to compile a list of developers who were opposed to SegWit. The exhaustive list consisted of four. That’s right… four. From the stink kicked up by the anti-SegWit brigade you’d think this number would be far higher.

If you repeat a lie often enough, people will believe it. There is a real risk that enough of the non-technical community now believes SegWit is too complicated and risky to prevent its activation. For the technical community this is a total non-debate, actual developers opposed to SegWit are the flat earth society of Bitcoin. Disagree with this? Try to list developer names and credentials opposing SegWit and you’ll soon realise how feeble the technical opposition is.

In addition to the SegWit hate, the vitriol directed at Blockstream is absurd. Bitcoin is and always will be open source, and Blockstream’s business model depends entirely on the success of an open and decentralised Bitcoin. All the big names there have a proven track record of dedicating themselves to Bitcoin’s advancement. Their business model is to profit from their expertise, gained by valuable contributions to Bitcoin’s development. This is a sound and reasonable business model that has been successful for many other open source projects such as MySQL. The profit they make can be used to further advance Bitcoin – it is a win-win.

People literally believe that Blockstream is Evil Corp. I’ve seen people argue that Blockstream profits from keeping blocks small so they can charge for the lightning network. This demonstrates a shocking lack of comprehension and common sense. There are even conspiracy theories that Blockstream is a secret banking trojan horse to bring down Bitcoin from the inside. People peddling such misinformed nonsense need their heads inspecting.

Five years ago in response to scaling concerns, I used to argue that Bitcoin could scale infinitely on-chain, often citing Moore’s law. The more I learned about Bitcoin, the more I realised this isn’t viable without risking Bitcoin’s fundamental value proposition – decentralisation.

I have not been “brainwashed by Blockstream lies”, I have simply joined the consensus of those with a more informed technical understanding. With off-chain scaling we can have our decentralised, inexpensive and instant digital money cake, and eat it too. Sadly, we now live in a post truth world, and having the better argument is often trumped by those shouting the loudest.

Valid concerns can be raised about user experience, missed opportunities, and yes, Lightning Network and Sidechains aren’t ready yet and we do need solutions now. Well, guess what, we have a solution right now: SegWit will immediately ease the stress on the network; it is coded, extensively tested and ready to launch… and there is even consensus for a hard fork block size increase after its activation.

The only thing that will prevent SegWit from activating is misinformation combined with a political power grab by opportunistic miners.

There is now a movement, in the form of Bitcoin Unlimited, to hand over control of the blocksize to miners. There are many reasons why Bitcoin Unlimited is a terrible answer to the block size debate. Sadly, much of this discussion takes place in the bitcoin-dev mailing list where the brightest technical minds hang out, while the rest of the community indulges in misinformed squabbles on reddit. In short, handing over control of the block size to miners would be terribly centralising.

People arguing that the community wants a block size increase are right. I’m all for a block size increase too, however it is vitally important for the health of Bitcoin that the best technical solutions win and we do not concede to misinformation and fear. SegWit MUST be activated before a hard fork block size increase.

If the propaganda succeeds in persuading miners to fritter control of Bitcoin’s block size limits away to an implementation as poorly conceived as Bitcoin Unlimited, then that chain and those who created it must be punished by the market.

To do this, I propose Bitcoin 4Core, a hard fork response that would clearly support the scaling vision of Bitcoin Core, and hopefully recruit their talented development team.

I believe the best way to protect the network from attack and simultaneously improve decentralisation would be to introduce additional proofs of work. 4 proofs of work each with 40 minute block creation targets and respective difficulties. We could add Ethash, Scrypt and Equihash to give a mix of CPU and memory intensive methods, and improve diversity of hardware. We could also take the opportunity to introduce a 4MB maximum block size.

By using proof of work methods with existing altcoin implementations, the mining ecosystems already exist, though some altcoins would likely face severe disruption as miners fled to profit from Bitcoin. Existing Bitcoin miners also wouldn’t be shut out completely, as they would be with a full change of PoW, and could reluctantly return with diminished income and influence when they realise that the economic majority will overwhelmingly follow the technical majority when given a choice.

I don’t know if the Core developers would support a proposal like this, but I personally think it would be a great way to reclaim Bitcoin and give a clear mandate to the sound vision of the Core development team. This, however, should be a last resort, and I remain optimistic that SegWit can still activate despite all the noise.

People who argue that introducing SegWit as a soft fork is “too complicated” are concern trolling

Back in February I wrote a piece on the then big block flavour of the month, Bitcoin Classic.

I was frustrated that the approach of rival implementations to Bitcoin Core was basically to lift most of the work of the core development team, make a few simple tweaks, and then try and push their implementation as the saviour of Bitcoin.

So I threw down a gauntlet: instead of being a cheap cover band, actually write some code that showcases your abilities and proves your worth. Do that, and a rival implementation could earn the respect and credibility essential to advance its agenda.

Segregated Witness (SegWit) is a clever way to almost double Bitcoin’s capacity without increasing the block size, while also solving other problems such as transaction malleability. It was widely agreed that SegWit was a win-win.

Fast forward to now and SegWit has been developed, fully tested and is ready to be implemented as a soft fork.

Great news you would think, except if you go to the big blocker parts of the Internet, suddenly SegWit is considered dangerous!

The argument isn’t that SegWit is bad, it’s that it is way too complicated to be introduced as a soft fork, and should have been implemented as a hard fork instead. They also claim that the complexity of the code (over 500 lines) and the compromises required as a soft fork will make Bitcoin really difficult to develop for in the future.

The developers at Bitcoin Core, who have delivered the solid dependability for which Bitcoin has become known, have collectively decided that SegWit was not just within their capabilities to write, but also to build upon in the future.

If any serious developer is arguing that Bitcoin is going to be too complicated for them after SegWit, they’re probably not a good enough developer to be considering working on such critical software. Anyone who isn’t a developer frankly needs to keep their concerns to themselves, as they are not qualified to hold such a view.

I’d be a lot less harsh if SegWit had suddenly been announced and implemented under a shroud of secrecy. The Core developers said back in December, however, that SegWit would be coded as a soft fork.

If any rival team of developers disagreed with this approach they had a simple solution… write their own implementation of SegWit as a hard fork.

This would give them an opportunity to showcase their abilities, and give the community something to think about. If it were as simple and elegant as they say, they could have had the code ready months before Core, impressed us all with its elegance, and really built some momentum.

What do we have instead? We have a small community determined to do everything in its power to block SegWit activation. It’s a shame that instead of sitting on the sidelines complaining, they didn’t take some initiative. They need to learn from this experience, as right now they just look like the petty children who have taken their ball and gone home because the game isn’t going their way.

photo credit: John Spooner Beware of the Troll via photopin (license)

The blocksize debate: is an end in sight for the civil war that has engulfed Bitcoin?

Depending on which parts of the Internet you inhabit, your perception of what’s happening in Bitcoin land can vary hugely.

The Bitcoin community is bitterly divided. For years now it has been split into two camps, those who think Bitcoin needs an urgent blocksize increase, and those that think other scaling approaches should be prioritised.

The “big blockers” are worried that with the current limit of mostly full 1MB blocks, there isn’t enough capacity for Bitcoin to grow. They think this will cause real harm to Bitcoin’s network effect, and that not addressing it urgently could result in Bitcoin losing its position and momentum as #1 cryptocurrency.

Whether you see merit in this view or not, it’s important to recognise that to somebody who is convinced that failing to urgently raise the blocksize could lead to Bitcoin’s downfall, the current standoff and ongoing lack of an increase would be incredibly frustrating. It is understandable that frustration and helplessness would lead to a deep seated suspicion and contempt for those they see as standing in their way.

For those of you that don’t visit the big blocker communities, it’s staggering to see the vitriol and anger directed at those “progress preventers”, the Bitcoin Core developers – the team that has long served as custodians of the main Bitcoin implementation.

While big blocker communities can feel a little bit like the front line of a war, frequenting the “small blocker” parts of the Internet can feel a lot happier – you wouldn’t necessarily realise there even was a war.

The thing is that everyone, big and small blockers alike, agree that Bitcoin needs to scale.

The Bitcoin Core team have identified a few interesting ideas that they believe are the best way to scale Bitcoin, primarily Segregated Witness (SegWit) and the Lightning Network.

Lightning Network is not popular with big blockers. It aims to move transactions off chain, sending them directly between individuals rather than being stored by every participant on the network.

They are sceptical, arguing that it is hypothetical and unproven, and that even if it achieved everything claimed, it does nothing to address the scaling problems that Bitcoin is facing right now. Many also believe that transactions taking place “off chain” are undesirable and not part of Satoshi’s vision.

They contend that on chain scaling is an essential and easy fix that can be implemented immediately, and that Lightning Network is a distraction, causing Core developers to neglect more pressing issues.

I can understand these concerns, but I also see the merit in the approach taken by the Core developers. In summary, an increase in blocksize is a barrier to running a node and reduces decentralisation, a sacred and essential property of Bitcoin which, they contend, must be preserved as much as possible.

Middle ground is hard to find when the argument is so subjective. On one side, a $0.09 transaction fee is far too high and will put off new users, so Bitcoin never grows. On the other, a $0.09 fee is far too cheap to justify requiring that every full node – thousands now, possibly millions in the future – store details of $2 coffee purchases for thousands of years to come, leading to a bloated chain that will suffocate under its own weight and jeopardise the highly prized property of decentralisation.

SegWit seems to be a middle ground. It works by splitting the data from transactions into two parts, half of which can be included in 1MB blocks, the other half stored separately and not contributing towards the block size limit, while improving other areas of Bitcoin (like transaction malleability) as an added bonus.

This, the developers claim, will give an effective block size increase to around 1.7MB without requiring that everyone upgrade their software (a hard fork).

Great news, you would think – the big blockers and small blockers can both agree this is a win-win for Bitcoin. Also, there’s no longer any need to wait: SegWit is coded, tested and ready for implementation.

The thing is, to the surprise of those who don’t frequent the big blocker communities, the frustration and suspicion has grown so pernicious that SegWit is not trusted. They don’t believe it does enough to address Bitcoin’s urgent scaling problem, and feel it has taken too long and will take too long to come into effect.

There is almost a sense that, in accepting SegWit, they will have “lost”, and that they still haven’t been listened to. Some even argue that introducing SegWit as a soft fork is more dangerous.

All this frustration and bad feeling has manifested itself in the rejection by the big blocker community of SegWit. They would rather block its implementation than “lose”.

You might think they’d be barmy to block something that is ready to increase Bitcoin’s capacity, but that is exactly the plan. They have completely lost confidence in Bitcoin Core and many would like to see a switch to a rival implementation, Bitcoin Unlimited, which would allow miners to decide the maximum block size instead.

There is a genuine belief that in blocking SegWit, they can force a stalemate that will enable them to push the community into choosing “their” scaling solution, and that they can still win the war.

If you’ve not passed by this community, this may sound absolutely outrageous. To everyone else, the war is almost over, but to those on the other side, battle has just commenced.

So, what happens now?

In order to be activated, SegWit requires 95% of miners to vote for its activation. Currently, mining pool ViaBTC has stated it will vote against SegWit, and since it has over 5% of the hash power, it will succeed in blocking it.

This leads to an interesting dynamic. To those outside the big block community, those that have most vocally demanded the network capacity increase are now the ones standing in its way. In a war of ideas, it’s hard to see that the big blockers are going to suddenly gain much new support when it looks like this.

How will the Core developers react? Well, I think they’ll patiently respect the 95% activation threshold.

It’s also interesting to note that a number of prominent Core developers signed an agreement in February about how to scale Bitcoin.

The agreement was that SegWit would be worked on as a priority, and then, once it was finished, the developers would take around 3 months to write code for a hard fork to increase the block size to somewhere between 2-4MB.

They then went on to estimate that SegWit would be coded by April, and if that were the case the hard fork would be coded by July 2016. This is unfortunate, because this optimistic timescale has led to accusations that the Core developers had failed on their “promise” to code a hard fork by July.

Software often takes longer than hoped, but it is a shame this mention of July 2016 has led some in the big block community to feel like they have been betrayed and misled, when it was an estimate rather than a commitment.

If the Core developers present had said SegWit would take until October 2016 instead of April 2016, it is possible that consensus would not have been reached – and you could argue it was reached under false pretences. While I believe this was a genuine underestimation, I can understand why others, already cynical, would assume the worst.

So, what happens now? Well, SegWit will probably not activate, and the Core developers who signed that agreement will spend the next 3 months writing the code they promised for a hard fork – those present signed the agreement and their reputation now depends on it.

It would actually be good for the big blocker cause if the Core developers present reneged on the agreement, as the big blockers would be vindicated and would gain new support.

In the meantime, the big blockers will promote Bitcoin Unlimited, and despite their overwhelming optimism in the face of what to many looks like adversity, it will probably face the same fate as Bitcoin XT and Bitcoin Classic, similar attempts which failed before it.

Around 3 months from now we’ll possibly still be waiting for SegWit activation, but we’ll probably have code for a blocksize increase. The thing is, part of the agreement was that the code would not be implemented by Core until after SegWit had activated.

At that point, I feel the guns may fall silent, and the great Bitcoin war could finally reach its conclusion.

Angel: scaling Bitcoin through an Internet of Blockchains (IoB)

Bitcoin needs to scale and there are many contradicting ideas on how to achieve this.

Sidechains are an inevitable solution. They allow Bitcoins to be transferred from the main blockchain into external blockchains, of which there can be any number with radically different approaches.

In current thinking I have encountered, sidechains are isolated from each other. To move Bitcoin between them would involve a slow transfer back to the mainchain, and then out again to a different sidechain.

Angel is a protocol for addressable blockchains, all using a shared proof of work. It aims to be the Internet of Blockchains (IoB).

Instead of transferring Bitcoin into individual sidechains, you move them into Angel. The Angel blockchain sits at the top of a tree of blockchains, each of which can have radically different properties, but are all able to transfer Bitcoin and data between each other using a standardised protocol.

Each blockchain has its own address, much like an IP address. The Angel blockchain acts as a registrar, a public record of every blockchain and its properties. Creating a blockchain is as simple as getting a createBlockchain transaction included in an Angel block, with details of parameters such as block creation time, block size limit, etc.

Mining in Angel uses a standardised format, creating hashes which allow all different blockchains to contribute to the same Angel proof of work. Miners must hash the address of the blockchain they are mining, and if they mine a hash of sufficient difficulty for that blockchain they are able to create a block.

Blockchains can have child blockchains, so a child of Angel might have the address aa9, and a child of aa9 might have the address aa9:e4d. The lower down the tree you go, the lower the security, but the lower the transaction fees. If a miner on a lower level produces a hash of sufficient difficulty, they can use it on any parents, up to and including the Angel blockchain, and claim fees on each.
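To make the shared proof of work idea concrete, here is a small sketch – my own illustration, not part of any Angel specification – of how a single hash found while mining a low-level blockchain could be checked against the difficulty targets of its parents. The targets are invented; the addresses follow the examples above.

```python
# Sketch: one proof of work reused on every parent chain whose target it meets.

import hashlib

def pow_hash(chain_address: str, header_data: str, nonce: int) -> int:
    data = f"{chain_address}|{header_data}|{nonce}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Lower target = higher difficulty. Parents are harder to mine than children.
TARGETS = {
    "angel":   2**238,   # hardest
    "aa9":     2**242,
    "aa9:e4d": 2**246,   # the chain actually being mined (easiest)
}

def chains_satisfied(hash_value: int) -> list:
    """Return every chain in the lineage whose difficulty this hash meets."""
    return [addr for addr, target in TARGETS.items() if hash_value < target]

# Try nonces until the hash at least meets the child chain's own target.
nonce = 0
while True:
    h = pow_hash("aa9:e4d", "prev_block_hash...", nonce)
    if h < TARGETS["aa9:e4d"]:
        break
    nonce += 1

print(chains_satisfied(h))  # always includes 'aa9:e4d'; occasionally its parents too
```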

There are so many conflicting visions for how to scale Bitcoin. Angel allows the free market to decide which approaches are successful, and for complementary blockchains with different use cases, such as privacy, or high transaction volume, to more seamlessly exist alongside each other.

I wrote this as a TLDR summary for a (still evolving) idea I had on the best approach to scale Bitcoin infinitely, for more detail please check out my previous article.

photo credit: Colonelbogey71 Angel of the North via photopin (license)

Introducing Buzz: a turing complete concept for scaling Bitcoin to infinity and beyond

In this article I will outline an idea I have had for an infinitely scalable, Turing complete, layer 2 scaling concept for Bitcoin. I’ll call it Buzz.

Buzz is influenced by a number of ideas, in particular Ethereum’s VM, sharding, tree chains, weak blocks and merge mining. I’ll be discussing Bitcoin, but the Buzz scaling concept could be implemented on any PoW blockchain, and without Turing completeness.

How does it work?

Buzz is merge mined with Bitcoin and has its own blockchain called Angel, which serves as a gateway (through a two-way peg) between the two different systems. In addition to merge mined blocks, Angel has a second (lower) difficulty which enables more frequent block creation, say every 30 seconds (weak blocks).

Buzz lifts its Turing completeness from Ethereum, taking much of its development but with a few adaptations to enable infinite scaling.

Ethereum’s current plan for scaling is through proof of stake with 80 separate shards. All shards communicate through a single master shard. Each shard has up to 120 validators who are randomly allocated to a shard and vote with their stake to reach consensus on block creation.

Buzz is quite radically different in its approach. It depends upon PoW which is elegant, resilient and has no cap on the number of consensus participants or minimum stake requirements. It avoids a split into an arbitrary number of shards with an arbitrary number of validators, and where all shards are homogenised and lacking diversity, such as different block creation times for different use cases.

In Buzz, each shard is called a wing and has its own blockchain. There is no cap on the number of wings, anybody can create one. Instead of making design decisions, the free market decides whether a wing will succeed or not. If there’s a wing with high transaction volume and high transaction fees, it will attract a higher number of miners and a higher level of security.

Buzz is basically a blockchain tree, since each wing can have multiple child wings, and the Angel sits at the top of the tree overseeing everything and communicating with the ‘other world’ that is the main Bitcoin blockchain.

The higher up the tree you go, the higher the hashrate will be, since the hashrate for a wing is equal to the hashing power mining on that wing itself plus the hashing power mining on all of its descendant wings. Data and coins can be transferred up and down the tree between parent and child wings. How frequently this can occur depends on the difference in hashrate between the child and parent. Nodes can operate and mine on any wing, but are required to also maintain the blockchains of any parent wings, up to and including the Angel.

The process of creating a wing is as trivial as getting a create wing transaction included in an Angel block. As well as being a gateway, Angel also serves as a registry, a little like DNS, keeping track of the properties of all wings.

If you want to create a wing with a block size of 1000MB and creation time of 1 second, that’s no problem. The market will probably decide your blockchain isn’t viable and you’ll be the only node on it.

Each wing will have its own difficulty level. If it was created with a 5 minute block target, its difficulty will be determined through the same process existing blockchains use.

There are no new coins created in this system, the incentive to mine comes from transaction fees.

The Angel has the hashing power of the entire Buzz network. Transactions here are the most secure, but also the most expensive. Angel itself has limited functionality and a conservative block size, since this is the only blockchain that every full node is required to process.

How are wings addressed?

Every wing has its own unique wingspace address, a little bit like IP addresses and subnets.

For example, a wing that is a direct child (tier 1) of the Angel might have been registered with the wingspace address a9e. It might have a tier 2 child wing at the wingspace address a9e:33, which might have a tier 3 child wing at a9e:33:1a, and so on. If you’re familiar with subnets, this is a little like 255.0.0.0, 255.255.0.0 and 255.255.255.0.
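As a rough illustration, here is how a wingspace address could be split into its chain of parent wings. The address format follows the examples above; the helper itself is a hypothetical sketch rather than part of the design.

```python
# Sketch: derive the lineage of parent wings from a wingspace address.

def parents(wingspace: str) -> list:
    """Return all parent wings of an address, nearest first, ending at Angel."""
    parts = wingspace.split(":")
    lineage = [":".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]
    return lineage + ["angel"]

print(parents("a9e:33:1a"))  # ['a9e:33', 'a9e', 'angel']
```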

There may be thousands of active and widely used wings, or there may only be a handful of wings which have a huge transaction volume. The free market will decide the tradeoff between hardware requirements, hash rate and transaction fees.

There may be geographically focused wings to benefit from lower latency, and micropayment wings which become the de facto standard for day to day use in a particular country. Travelling may involve moving some coins to participate on the wing where local transactions take place. Such routing could be automated and seamless to the end user.

Instead of trying to second guess a one-size-fits-all solution for blockchain scaling, Buzz takes inspiration from the approach taken by the internet, where an IP address could represent anything from a server farm to a Raspberry Pi, depending on the usage requirements. The idea is to just create a solid protocol which enables the consistent transfer of coins and data across the system.

Different types of activity can take place in wings more suited to their use case. The needs of the network now may be completely different in the future, and allowing all wings to have a number of definable parameters and to be optimised for particular use cases (such as storage, or gambling) allows the network to evolve over time.

How are blocks mined?

Miners must synchronise the blockchain of the wing they are mining, and all parent wings up to and including Angel. Nodes can fully synchronise and validate as many blockchains as they like, or operate as light clients for ones they use less frequently.

In order to mine, hashes and blocks are created using a different method from Bitcoin’s.

The data that is included to generate a hash must also include the wingspace address and a public key.

Instead of hashing a Merkle root to bind transaction data to a block, a public key is hashed. Once a hash of sufficient difficulty to create a block has been found, the transaction data is added, and the block contents are then signed by the private key that corresponds to the public key used to generate the hash.

By signing rather than hashing blockchain specific data, we enable a single hash to be used in multiple wings (as it is not tied to a particular wings’ transactions) as long as it meets the difficulty requirement for each parent.

Since transaction data is not committed to the hash as in Bitcoin (where it is as a result of hashing the Merkle root), there needs to be a disincentive to publishing multiple versions of a block using the same hash signed multiple times. This can be achieved by allowing miners to create punishment transactions which include signed block headers for an already redeemed hash. Doing so means that miner gets to claim the associated fees, and the miner who published multiple versions of a block is punished by losing the reward.
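Here is a hedged sketch of that mining flow: the proof of work commits to the wingspace and a public key rather than a Merkle root, and the block is bound to the hash afterwards by a signature. The function names, target and keys are invented for illustration, and the signature is a stand-in rather than real public key cryptography.

```python
# Sketch: hash-then-sign block creation as described above.

import hashlib

def mine(wingspace, prev_hash, pubkey, target):
    """Search nonces until the hash meets the wing's difficulty target."""
    nonce = 0
    while True:
        data = f"{wingspace}|{prev_hash}|{pubkey}|{nonce}".encode()
        h = int.from_bytes(hashlib.sha256(data).digest(), "big")
        if h < target:
            return nonce, h
        nonce += 1

def sign(privkey, message):
    # Stand-in for a real signature scheme; illustrative only.
    return hashlib.sha256(f"{privkey}|{message}".encode()).hexdigest()

# Find a hash that commits to the wingspace and public key, but not to any transactions...
nonce, block_hash = mine("a9e:33:1a", "prev...", pubkey="pub_abc", target=2**248)

# ...then bind a specific set of transactions to it by signing. Publishing two
# differently-signed blocks for the same hash invites a punishment transaction
# that forfeits the fees.
block = {"wingspace": "a9e:33:1a", "nonce": nonce, "transactions": ["tx1", "tx2"]}
block["signature"] = sign("priv_abc", str(block["transactions"]))
```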

When generating a hash, miners must include the previous block hash and block number for all tiers of wings they are mining on. This will allow all parent wings to have a picture of the number of child wings and their hash power.

Hashing in the wingspace a9e:33:1a means that if a hash of sufficient difficulty was found, the miner could use it to create a block in the wings a9e:33:1a, a9e:33 and a9e. If the difficulty was high enough to create a block in the Angel, it means that wingspace will effectively ‘check in’ with Angel, and provide useful data so its current hash rate can be determined, providing an overview of the health of wings. If a wing has not mined a block of Angel-level difficulty in x amount of time, the network might consider the wing to have ‘died’.

If you had 30 second blocks in the Angel, over 1 million a year, that means even a wing with just 1 millionth (0.0001%) of the network hashing power should be able to ‘check in’ annually.

It is likely that this check in data will enable miners to identify which wings are the most profitable to mine on, and the network will dynamically distribute hash power accordingly. There will be less need to mine as a pool, since there will be many wingspaces to mine in, which should enable even the smallest hashrate to create blocks and earn fees. Miners can mine in multiple wingspaces at the same time with a simple round robin of their hash power.

Creating and modifying wings

When a create wing transaction is included in an Angel block, the user can specify a number of characteristics for the wing, such as those listed below (a rough sketch of such a transaction follows the list):

  • Wingspace address
  • Wing title and description
  • Initial difficulty
  • Block creation time
  • Difficulty readjustment rules
  • Block size limit
  • Permitted Opcodes
  • Permitted miners

Hashing a public key and signing blocks with the corresponding private key allows us to do something else a little bit different: permissioned wings.

Most wings will be permissionless, as blocks can be mined by anybody. However, let’s say a casino or MMOG wants to create its own wing. It might want to do this so that it can have a faster block creation time, and so it can avoid transaction fees by processing transactions for free, since it is the only permitted miner.

By only allowing blocks signed by approved keys, permissioned wings cannot be 51% attacked, and could even mine at a difficulty so low it is hashed by a CPU while retaining complete security. Users will recognise that in transferring coins into a permissioned wing, there is a risk that withdrawal transactions will be ignored, though their coins would not be spendable by anyone else so there is little incentive for doing so. It is up to them to decide whether the benefits outweigh the risks.

The property of permissioned block creation could be used for new wings which are vulnerable to 51% attacks due to a low hashrate. Permitted miners could be added until the wing was thought to have matured to a point where it is more resilient, and the permission requirement could be removed.

Angel transactions can also be created to modify wing parameters – say, changing to a different block creation interval at block x in the future. The key used to create a wing can be used to sign these modification transactions. Once a wing has matured, the creator can sign a transaction that waives their right to alter the wing any further, and its attributes become permanent.

How is this incentivised?

Transactions have fees. By creating blocks, miners claim those fees. If you mine a hash of sufficient difficulty to create blocks all the way up to the Angel blockchain, you collect the fees at each level for every block you created.

There exists the possibility to add a vig, say 20% of transaction fees, to be pooled and passed up to the parent wing. These fees would gradually work their way up to the Angel blockchain, and could be claimed as a reward for merge mined blocks with the main Bitcoin blockchain.

Other ideas

Data and coins can be passed up and down the wings using the same mechanism Ethereum is planning with its one parent shard and 80 children topology. However there is no mechanism to pass data sideways between sibling wings, as sibling wings are not aware of each other.

I wonder, however, if wings could be created to accept hashes from multiple wingspaces. For example, wing BB might also accept hashes, at say a 10 minute difficulty, from the wingspaces AA and CC.

It would be possible to calculate the required difficulty level because all parent wings up to the Angel must be synchronised, and information about sibling wings will be included in block headers on the parent wing. This potentially creates a mechanism where wings can pass data between each other more efficiently and at lower cost, though I think there may be technical limitations with this system, in particular for transferring coins. This is because as far as the parent is concerned, coins transferred directly from BB to CC seem to have appeared out of thin air when passed back from CC to the parent, as the parent wing cannot see the activities of its children.

Thoughts on Proof of Work

Wings cannot merge mine with the main Bitcoin network without a hard fork of Bitcoin so that it accepts hashes in the Buzz format.

This hard fork would enable the PoW to be shared, and offer increased security for both systems. Maintaining a separate proof of work between the systems, however, presents the opportunity to diversify the options available, such as Scrypt, SHA-256, X11 and Ethash.

If you wanted 30 second blocks on Angel with 4 proof of work methods, you could give each proof of work method its own difficulty targeting a 120 second block time.

When creating a wing, particular proofs of work could then be specified for use on that wing, such as 20% Ethash and 80% Scrypt. This would open up PoW methods to the free market too.

Summary

  • Buzz is merge mined and distinct from the main Bitcoin blockchain
  • Recognises that there are infinite potential use cases for blockchains, with varying design requirements.
  • It is unrealistic to expect every node to process every transaction. There needs to be segmentation.
  • Attempting a one size fits all approach to scaling will lead to suboptimal and restrictive designs for many use cases.
  • Buzz creates a system where different segments with different properties can exist side by side and transfer coins and data, facilitating a free market where the best blockchains can thrive.

Thanks for reading through my idea; I hope the swirl of ideas buzzing around my head makes sense now it has been converted into words. For any questions, thoughts or criticisms, please head to the comments.

What on earth is a Merkle tree? Part 2: I get more technical, but hopefully all becomes clearer

I have previously written about understanding what Merkle trees are. If you haven’t read it, go and do so now.

I tried to keep it non-technical, and a keen observer would point out that the article better explained the benefits of hashing rather than of Merkle trees. I was trying to explain why Bitcoin benefited from Merkle trees, rather than how they actually worked.

In my previous article, the gist was that hashing allows you to verify that large quantities of data haven’t been changed using a hash, a much smaller amount of data. Merkle trees basically allow you to verify that a particular piece of data was present and hasn’t been manipulated, using only a small number of proofs rather than having to download all the data to check for yourself.

This time, I’m going to have another go at explaining Merkle trees, with the assistance of something we can all relate to… colours. I’m going to create what I’ve called a Merkcolour tree (see what I did there).

A Bitcoin transaction hash is the unique identifier for each transaction. In a block there is as much as 1MB worth of transaction data. You can hash all the transactions in a block into a single 256-bit hash. However, in order to prove that a transaction existed in a block you would have to download all the data used to create the hash, generate your own hash to check it is accurate, and then check that the transaction you wanted to verify was present in the data.

This would make it possible to store a copy of the 256-bit block hashes without having to store all the transaction data (up to 1MB) for each block. The downside is that in order to check a transaction is present you have to download all the transaction data, instead of the small number of proofs that Merkle trees enable, which is much quicker and more efficient.

So how do Merkle trees work?

Well, the unique Bitcoin transaction id, which is actually a hash created by hashing the transaction information together, looks like this:
cf92a7990dbae2a503184d6926be77fc85e9a9275f4222064ee78eeb251d36b2

And if you combine (concatenate) two transaction ids back to back:
cf92a7990dbae2a503184d6926be77fc85e9a9275f4222064ee78eeb251d36b2d8f4744017dc79f8df24e2dba7fd28e5fd148c3b01b5f76dede8ef3ac4e5c340

That combined data can be hashed together (using the SHA-256 method) into the following hash:
474a1d00a80dd927ba87404371c11c7db24bc58b0a712ffacdb09a47dc1bec89

Instead of 512 bits of data for both transactions, the hash is 256 bits of data.
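To illustrate, here is that concatenate-then-hash step in Python. For simplicity it hashes the hex strings as text with a single SHA-256, whereas Bitcoin itself hashes the raw bytes (and uses double SHA-256), so the output here is illustrative rather than a real Merkle node.

```python
# Illustration: concatenate two transaction ids and hash them back down to 256 bits.

import hashlib

txid1 = "cf92a7990dbae2a503184d6926be77fc85e9a9275f4222064ee78eeb251d36b2"
txid2 = "d8f4744017dc79f8df24e2dba7fd28e5fd148c3b01b5f76dede8ef3ac4e5c340"

combined = txid1 + txid2                              # 512 bits of data
node = hashlib.sha256(combined.encode()).hexdigest()  # back down to 256 bits
print(node)
```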

Now let’s adapt this logic to colours. Each colour is represented in the same hexadecimal format as a hash; it’s just that a 256-bit hash is more than 10 times longer than a 24-bit colour.

As an example, red is #ff0000, blue is #0000ff.

If we combine those colours together we get #800080.

Instead of 48 bits of data for both colours, the combined colour is 24 bits of data.
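In code, combining two colours just means averaging each 8-bit channel. Rounding halves upwards is my assumption about the exact rule, but it reproduces the figures used in this article.

```python
# Combine two hex colours by averaging each 8-bit channel (rounding halves up).

def combine(c1: str, c2: str) -> str:
    channels = []
    for i in range(0, 6, 2):
        a, b = int(c1[i:i+2], 16), int(c2[i:i+2], 16)
        channels.append((a + b + 1) // 2)   # average, round half up
    return "".join(f"{c:02x}" for c in channels)

print(combine("ff0000", "0000ff"))  # red + blue -> 800080
```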

A Merkle tree is basically a process by which pairs of hashes are merged together, until you end up with just one, the root. This is best demonstrated with colours in the image below (click it to open).

Merkcolour tree

In the image, we start with 16 different colours (labelled A to P) – these are the leaves. Each colour has been paired with a neighbour and combined together to create a branch. This process is repeated as many times as necessary until you end up with one final colour – the root.

Now, the colour (leaf) we’ve labelled I in the diagram is #ff0000.

If, instead of creating a tree, I simply hashed all the colours together the outcome would be as follows:
#000064#007777#007700#777777 #1f1fff#5cffff#47ff48#ffffff #ff0000#770000#f07000#614600 #ff21b5#ff1f1f#ffff00#d7c880

Hashes (using the MD5 method) to:
8c4878686b62656fbe81d50c3a832728

If I provided you with that hash, and told you it included the colour #ff0000, the only way I can prove it is by sending you all 16 colours in that order (384-bits of data, removing the #) so you can generate and confirm the accuracy of the hash for yourself.

However, because we’ve created a tree, if you know the root is #8e7560 (a product of all leaves – ABCDEFGHIJKLMNOP), we can confirm that #ff0000 (I) was included using only 4 proofs:

  1. #489194 (ABCDEFGH)
  2. #f58255 (MNOP)
  3. #a95b00 (KL)
  4. #770000 (J)

Let’s start at the top and work our way down to the root:

#ff0000 (I) (we want to confirm) when combined with 4) #770000 (J) gives:
#bb0000 (IJ) which combined with 3) #a95b00 (KL) gives:
#b22e00 (IJKL) which combined with 2) #f58255 (MNOP) gives:
#d4582b (IJKLMNOP) which combined with 1) #489194 (ABCDEFGH) gives:
#8e7560 – which is the correct root!

If we ended up with any other value than the root, we know some of the data we have been given is inaccurate and cannot be trusted.
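Here is the same verification walked through above, expressed as a short sketch. The combine rule is the per-channel average from earlier (rounding halves up, my assumption), and it reproduces every intermediate value and the root given in this article.

```python
# Verify that colour I (#ff0000) leads to the root #8e7560 using only 4 proofs.

def combine(c1: str, c2: str) -> str:
    return "".join(
        f"{(int(c1[i:i+2], 16) + int(c2[i:i+2], 16) + 1) // 2:02x}"
        for i in range(0, 6, 2)
    )

leaf = "ff0000"                                     # colour I, the one we want to prove
proofs = ["770000", "a95b00", "f58255", "489194"]   # J, KL, MNOP, ABCDEFGH
expected_root = "8e7560"

value = leaf
for proof in proofs:
    value = combine(value, proof)   # I -> IJ -> IJKL -> IJKLMNOP -> root
print(value == expected_root)       # True
```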

This means that instead of hashing all the colours together and needing to download 384-bits of data to confirm its accuracy, we are able to download just 4 proofs, or 96-bits of data.

The efficiency gets even bigger the more leaves you have: each time you double the data (the number of leaves), you only add one additional level, which is one extra 24-bit proof for colours, or one extra 256-bit proof for hashes.

For example here’s how much proof data is required to verify the following:
32 colours (768-bits): 5-proofs (120-bits, 84% efficiency)
64 colours (1536-bits): 6-proofs (144-bits, 91% efficiency)
128 colours (3072-bits): 7-proofs (168-bits, 95% efficiency)
256 colours (6144-bits): 8-proofs (192-bits, 97% efficiency)
512 colours (12288-bits): 9-proofs (216-bits, 98% efficiency)
1024 colours (24576-bits): 10-proofs (240-bits, 99% efficiency)

A key distinction between hashes and colours is that hashes are one-way and unpredictable. That means you cannot work out which two hashes were combined to create a hash. The opposite is true for colours: if you know a colour, it is possible to work out exactly what combinations of colours could have created it. The Merkcolour tree is only useful for visually demonstrating the concept; if you could reverse engineer a hash in the same way you can a colour, it would not be reliable.

Hopefully this makes sense! The key take home message is that instead of having to download an entire 1MB block to confirm that a transaction was in it, you’re able to download a small number of proofs for validation. This makes it a lot easier to verify transactions without having to keep a copy of the blockchain or download large quantities of data, for example on devices such as smartphones where this is less practical.

Merkle trees make the process of verifying data hugely efficient, and while Bitcoin could exist without them, it would require an awful lot more resources such as processing, bandwidth and storage – and running clients on mobile devices would be far less viable.

Thank you Ralph Merkle.

Unintended consequences: Could proof of stake just become no proof of work?

Bitcoin operates through a process known as proof of work (PoW). In order to determine which network participant gets to create the next block (and claim a reward), the process requires the contribution of computer processing power. The more processing (work) you perform, the more likely you are to be rewarded with Bitcoins.

Running this hardware is very expensive; the Bitcoin network is already said to consume as much electricity as the entire country of Ireland.

Satoshi Nakamoto’s vision when he created Bitcoin was that everybody would mine Bitcoin on their computers, all around the world, and that this would decentralise the network.

Unfortunately, CPUs are incredibly inefficient miners. A decent laptop might manage around 14MH/s. A specially designed (ASIC based) AntMiner S9 can achieve 14TH/s – that’s 1,000,000x faster.

Nakamoto could not have foreseen the rise of ASICs when he wrote the Bitcoin white paper. Consequently, instead of being distributed around the world, Bitcoin has faced huge centralising pressure. The number of people required to control Bitcoin can fit around one table. Centralisation provides self perpetuating benefits of easier access to the best hardware and cheapest electricity, though once ASIC chips bump up against Moore’s Law there’s good reason to believe we will see a shift back towards decentralisation.

The holy grail of cryptocurrency would be the security of proof of work, but without the cost and centralisation. I first read about proof of stake (PoS) a number of years ago and, seduced by the idea, immediately invested in PeerCoin, the first cryptocurrency to implement it.

So what is proof of stake?

PoW uses expensive and ‘wasteful’ electricity to try and calculate a hash of sufficient difficulty for the network to accept – enabling that participant to create a new block.

PoS works the other way around. There are a number of proposals, but the basic principle is that each participant can ‘stake’ their coins to create a kernel (a type of hash). The bigger the stake, the bigger the chance their kernel will ‘match’. Match what? Well, the blockchain itself generates a random and unpredictable seed based on the data in the preceding blocks (also by hashing), and the closest matching kernel gets permission to create the next block, and is rewarded for doing so.

As there is no requirement to lock up computer processing power, everybody can run the software on their own machine without the expense and hardware requirements of PoW.
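To make the idea concrete, here is a toy sketch – not any real PoS protocol – of the mechanism just described: a seed derived from previous blocks is compared against each participant’s kernels, and a larger stake gets proportionally more chances of producing the closest match. All names and the kernel construction are illustrative assumptions.

```python
# Toy illustration of stake-weighted kernel selection.

import hashlib

def h(data: str) -> int:
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big")

def next_block_creator(prev_block_headers, stakers):
    seed = h("|".join(prev_block_headers))   # randomness drawn from the chain itself
    best, best_distance = None, None
    for name, stake in stakers.items():
        # One kernel per staked coin: more stake, more chances of a close match.
        for i in range(stake):
            distance = abs(seed - h(f"{name}|{i}"))
            if best_distance is None or distance < best_distance:
                best, best_distance = name, distance
    return best

stakers = {"alice": 50, "bob": 30, "carol": 20}
print(next_block_creator(["header1", "header2"], stakers))
```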

Sounds great, doesn’t it? Well, as with the unintended consequences of PoW, let’s try and foresee how the PoS landscape might evolve.

Under PoW we have seen the rise of pooled mining. Pooled mining has been wildly popular because it makes mining income more predictable.

Think of PoW like a lottery. The more processing power you contribute, the more tickets you get. In Bitcoin there is just one winner every 10 minutes.

If the current Bitcoin difficulty didn’t increase, even with the most efficient miner – the 14TH/s Antminer S9 – you’d have to enter this lottery for over 2 years on average to win just once.

If you join a pool that has 25% of the hashing power (lottery tickets), then you can expect that pool to win once every 40 minutes on average, and you can then regularly collect your share of the winnings. This is preferable to running your hardware for years in the uncertain (and unlikely) hope of winning the jackpot. Pooled mining in PoW offers no benefit other than making your income more predictable.

Would the same be true of PoS?

There has been testing in PoS experiments that has got the block creation time down to 3 seconds per block. This means instead of having 52,560 lottery winners per year as in Bitcoin, you could have over 10 million winners each year. This would certainly reduce, though not eliminate, the appeal of mining pools.

However, in cryptoeconomics we must assume that each participant will always act in their own self interest. Could there be other benefits from PoS pooled mining that are not present in PoW?

In digital security, randomness is very valuable. In PoW the randomness that selects the next block is generated by an external source – all that hardware calculating trillions of random hashes. In PoS this necessary randomness does not come from an external source, it can only come directly from the blockchain itself.

This means a seed generated from previous blocks is used to determine which participant will create the next block.

There are two different data sources you can hash for this randomness. If you included all the contents of the block to generate a hash, this would be a disaster, since there are infinite combinations of block contents. If it was an individual’s turn to create the next block and they had sufficient hardware they would just crunch as many combinations of block contents as possible and hopefully find one that generates a seed matching a kernel they control, allowing them to create the next block and repeat the process again.

This ‘stake grinding’ wouldn’t represent a shift away from proof of work, it would just mean work has taken place but without any proof or transparency.

An alternative option is to only hash header information which cannot be manipulated, such as the block creator’s signature. A potential issue here is that if you pooled together, you could gain a competitive advantage.

Imagine you’re in a pool with 30% of the staked coins. This should mean that your pool creates 30% of the new blocks. However, let’s imagine an instance where the seed to determine the next block has two pool members as the two closest matches. Imagine that the closest match signing a block would create a hash that would allow the next block to be created by a non-pool member, whereas the 2nd closest match would allow the next block to be created by another pool member. If the pool had sufficient hardware it could work to rapidly calculate the best combination of block creators to maximise revenue for the pool.

You can try to mitigate this risk by punishing participants for not creating a block when it’s their turn, but getting the economic balance right so as not to overly punish people with less reliable Internet connections, for example (another centralising pressure), strikes me as an unenviable task.

Ultimately, if the pool has the size and hardware resources to crunch the numbers far enough ahead – it’s still going to game the system when it calculates a combination that will likely generate 10 consecutive blocks, compensating those members who lost out in the process for the greater benefit of the pool.

Such a system could actively incentivise centralisation. The bigger the pool, the greater the advantage. It could create a race to the bottom, since while everyone may recognise this centralisation as undesirable, they also must make an economic sacrifice to avoid participating in it.

Perhaps this centralisation pressure and obscuring of work would be an unintended consequence of PoS. All I know is, the more I study PoS and its goal to provide the security of PoW without the cost, the more a phrase from growing up in Yorkshire comes to mind… “you don’t get owt for nowt”. In other words: there’s no such thing as a free lunch.