Could we increase the block size limit with a soft fork? Actually yes, we can… but should we?

In my last article, I asked whether there was a third way to increase the block size limit that wasn’t quite a hard fork and wasn’t quite a soft fork.

I think it's important to follow that up with a critique; there are two sides to every story.

It has been pointed out that what was proposed was technically a type of soft fork, so I will refer to it as such. This idea has been discussed on the bitcoin-dev mailing list, and while, on the face of it, it's an appealing idea, perhaps even a no-brainer, things aren't as straightforward as they may seem.

There is technically no reason an increase in the block size cannot be achieved through a soft fork, by creating new, larger blocks that are invisible to old clients. Those clients would consequently still be connected to the same network, but unable to participate fully. Only a majority of mining power would need to upgrade for this to succeed.
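To make the activation side of that concrete, here is a minimal sketch (not Bitcoin Core code) of how a node might decide the larger-block rules are in force once a majority of recent blocks signal support. The signalling bit, window and threshold below are invented for illustration and don't come from any actual proposal:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: treat the larger-block rules as "active" once enough
// of the last WINDOW blocks have signalled support in their version field.
// The bit, window and threshold values are assumptions, not a real spec.
constexpr int32_t SIGNAL_BIT = 1 << 4;   // assumed signalling bit
constexpr std::size_t WINDOW = 2016;     // look back one retarget period
constexpr int THRESHOLD      = 1080;     // a simple majority of the window

bool LargerBlocksActive(const std::vector<int32_t>& recentBlockVersions)
{
    if (recentBlockVersions.size() < WINDOW)
        return false;

    int signalling = 0;
    // Count signalling blocks within the most recent WINDOW blocks.
    for (std::size_t i = recentBlockVersions.size() - WINDOW;
         i < recentBlockVersions.size(); ++i) {
        if (recentBlockVersions[i] & SIGNAL_BIT)
            ++signalling;
    }
    return signalling >= THRESHOLD;
}
```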

It's very easy, as somebody who doesn't have to get their hands dirty with the code, to sit back and suggest that a solution like this seems obvious. Why would we risk a hard fork when we can do this instead?

In reality there are a lot of things we can do in code; the question to be asked is whether they are worthwhile.

I once built an eLearning course where the client required a progress indicator in the corner of the screen. This was easy to implement by simply positioning an image along an axis. The client, however, decided they would prefer a gauge with an indicator working its way along an elliptical curve. The project manager, with no coding experience, decided this would not be a problem and approved the change. They had only envisaged a couple of graphics changes.

What had been a simple bit of code became a far more complex mathematical procedure. I had to dust off my old trigonometry textbooks and, after a frustrating morning, ended up with a hacky bit of code which got the job done.

What had been gained? Very little. The graphic did exactly the same job, albeit moving along a slightly different path. Was it worth it? Absolutely not.

One problem with increasing the block size as a soft fork is that it's hacky. We're building a protocol that we hope will still be going strong in 100 years' time and beyond. Everything we do now is irreversible, forever locked into the protocol, and requires thoughtful contemplation.

If we implement everything as a hacky soft fork, it requires a compromise in the code. Does a piece of software that has ambitions of being used and recognised as one of the most important tools of the technological age want to become littered with clunky IF statements?

There's a trade-off to be made, though. Isn't a soft fork still 'safer' than a hard fork? Isn't the primary objective of Bitcoin to be secure? The average user will never see a line of its code in their life; what does it matter if it's a little hacky as long as it works?

What are the consequences?

Let's examine some undesirable consequences of a hard fork. Many argue a soft fork would prevent people from losing Bitcoins. The argument goes that hard forks risk splitting the network in two, with people transacting on both networks following a split. A merchant who receives a payment on the "losing" side of the fork may, for example, ship out an item, only to realise later that the network they received the payment on has been abandoned and they never actually received payment on the winning network, leaving them out of pocket.

This happening would undermine confidence in Bitcoin. It would easily translate into damaging headlines: "People lose money as Bitcoin splits in two". We have to ask, though: would this really happen?

This scenario presumes that Bitcoin has split relatively evenly and that the community is conflicted about which side of the fork will "win". In a worst case scenario there would be two networks, each with 50% of the mining power. With mining power halved on each network, blocks would initially be created around every 20 minutes, so the capacity of each network would temporarily be cut in half. A 50/50 split in mining capacity says nothing about which side of the fork the majority of users are on. Interestingly, transactions would still be relayed around the entire network, and the majority would probably find their way into both sides of the fork; it would take a deliberate effort by an attacker to get a transaction included in one network but not the other, so there is some built-in protection against this kind of loss on a wider scale.
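As a rough illustration of the arithmetic, here is a back-of-the-envelope sketch (ignoring variance and difficulty adjustment, which would eventually bring block times back down): until difficulty readjusts, the expected block interval scales inversely with the share of hash power a chain keeps.

```cpp
#include <cstdio>

// Back-of-the-envelope sketch: until difficulty readjusts, the expected time
// between blocks scales inversely with the share of hash power a chain keeps.
double ExpectedBlockMinutes(double hashPowerShare)
{
    const double targetMinutes = 10.0;  // Bitcoin's block interval target
    return targetMinutes / hashPowerShare;
}

int main()
{
    std::printf("50%% of hash power: ~%.0f minutes per block\n",
                ExpectedBlockMinutes(0.50));  // ~20 minutes
    std::printf(" 5%% of hash power: ~%.0f minutes per block\n",
                ExpectedBlockMinutes(0.05));  // ~200 minutes
    return 0;
}
```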

That scenario assumes a 50/50 split of mining power. Any hard fork that required only 50% of mining support would be reckless and unlikely to achieve consensus.

Far more reasonable would be a requirement of 95% mining consensus before implementing a hard fork. In this instance the network would still split in two, but on the "losing" side of the fork the loss of mining capacity would substantially extend the time it takes to mine a block, from 10 minutes to well over three hours. It's reasonable to imagine that any miners still on the old network would very quickly upgrade, as otherwise they are wasting a lot of money, and this would cause the old network to grind to a complete halt. It's quite possible that such a sudden loss of mining capacity could result in the old network never successfully mining another block again, ever.

In this scenario it looks far less likely that anyone on the old network would lose money, as long as they are sensible and wait for transactions to confirm, which on a stalled chain they never would. Effectively the outcome is not dissimilar to what we are trying to achieve by raising the block size through a soft fork; the difference is that instead of possibly degrading a little more gracefully, their old software would essentially stop functioning until they upgraded.

When, or even if, we should increase the block size is a completely different discussion I will have another time. The case against increasing the block size through a soft fork, I now feel, is strong. Hacky code has its place, but Bitcoin can do better.

Perhaps instead of adding one-off hacks to the code each time we want to raise the block size, we can code a soft fork that is reusable in the future. Instead of manually nesting IFs to work our way through the various block sizes we have preserved, we could have an array of block sizes and their activation points. This would make a block size expansion as simple as adding an item to an array, and each time we increase the block size we can enjoy the benefits of preserving some legacy function and avoiding a hard fork. But then again, this is easy for me to suggest, as I'm not the one coding it.
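To show the shape of that idea, here is a minimal sketch of such a schedule; the heights and sizes in the table are invented purely for illustration and don't correspond to any real proposal:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sketch of the "array instead of nested IFs" idea: a table of
// (activation height, maximum block size in bytes). The heights and sizes
// below are made up for illustration only.
static const std::vector<std::pair<int, uint32_t>> BLOCK_SIZE_SCHEDULE = {
    {0,      1000000},  // original 1 MB limit
    {500000, 2000000},  // assumed activation height for a 2 MB limit
    {700000, 4000000},  // assumed activation height for a 4 MB limit
};

uint32_t MaxBlockSizeAt(int height)
{
    uint32_t limit = BLOCK_SIZE_SCHEDULE.front().second;
    // Keep the last limit whose activation height has been reached.
    for (const auto& entry : BLOCK_SIZE_SCHEDULE) {
        if (height >= entry.first)
            limit = entry.second;
    }
    return limit;
}
```

Raising the limit again would then mean appending one entry to the table rather than threading another special case through the validation code.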

I’ve recently started blogging about Bitcoin. If you liked this article please check out my others. Any questions or suggestions, let me know in the comments. Tips welcome: 1H2zNWjxkaVeeE3yX6uVqng5Qoi6gGvYTE

Author: John Hardy

Software developer living in the UK. Longtime Bitcoin advocate. Email [email protected]. Donations welcome: 1H2zNWjxkaVeeE3yX6uVqng5Qoi6gGvYTE