Here’s how Craig Wright probably tricked Gavin Andresen

What an exciting and dramatic day for Bitcoin.

I woke this morning to my girlfriend asking if I had seen the news that “Satoshi Nakamoto had been uncovered as that Craig Wright guy”.

My initial reaction was scepticism: in my mind he was a scammer, definitely not Satoshi. There it was, however, on the trusty BBC home page, with the promise of proof. The proof itself, though, was elusive.

A quick trip to Craig Wright’s blog and I came away more confused. Had he proven he was Satoshi? Then I encountered Gavin Andresen’s blog, where he verified that Craig Wright had signed a message of Andresen’s choosing with a private key known to be Satoshi’s – this looked like case closed.

The problem is, as the day progressed, all the other evidence crumbled under scrutiny. The one shred that retained any credibility was Andresen’s account. How could this proof have been faked?

Let’s find out what happened. In his blog Andresen says:

I witnessed the keys signed and then verified on a clean computer that could not have been tampered with

Only a person with a private key can ‘sign’ a message. Once a message is signed, people can use software to check that the signature is genuine and was created by someone in possession of the private key.

With open source software, anybody can download the source code themselves. This makes it incredibly easy to produce a copy that looks and behaves identically apart from a few small modifications.

It would be quite trivial to find the bit of code that decides whether a signature is valid and then change the word ‘invalid’ to ‘valid’. Depending on the software, it could literally be as easy as deleting the two preceding letters ‘in’.

The modified software would then say that every signature tested was valid, regardless of whether it was or not.
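
To make this concrete, here is a minimal sketch in Python of what signing and verifying look like, using the third-party python-ecdsa library and Bitcoin’s secp256k1 curve. It is a simplified illustration rather than Bitcoin’s actual signed-message format, but it shows how the verdict comes down to a single true-or-false check that a tampered build could simply hard-code:

# A minimal sketch using the third-party python-ecdsa package (pip install ecdsa).
# Illustrative only: real Bitcoin message signing uses a specific
# "Bitcoin Signed Message" format and key recovery, not this raw scheme.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

message = b"Gavin's favorite number is eleven. CSW"

# Only the holder of the private (signing) key can produce this signature.
private_key = SigningKey.generate(curve=SECP256k1)
signature = private_key.sign(message)

# Anyone can check the signature using the corresponding public key.
public_key = private_key.get_verifying_key()

def check_signature(pub, sig, msg):
    try:
        return pub.verify(sig, msg)   # True if the signature is genuine
    except BadSignatureError:
        return False                  # Forged signature or altered message

print("valid" if check_signature(public_key, signature, message) else "invalid")

# A tampered copy of the software only needs to replace the body of
# check_signature with 'return True' and every signature it is shown
# will appear valid.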

This is the reason the “clean computer” is relevant. If I invite you to view my computer where I show you a validation, I could easily have modified the software. If we go to a shop and buy a brand new computer and then download fresh software, that would eliminate this risk.

This is Andresen’s account of what happened in a post on Reddit:

Craig signed a message that I chose (“Gavin’s favorite number is eleven. CSW” if I recall correctly) using the private key from block number 1.

That signature was copied on to a clean usb stick I brought with me to London, and then validated on a brand-new laptop with a freshly downloaded copy of electrum.

I was not allowed to keep the message or laptop (fear it would leak before Official Announcement).

I don’t have an explanation for the funky OpenSSL procedure in his blog post.

As far as we can tell, Andresen bought a new USB stick. This stick was put into Wright’s computer and a file was copied over containing the signature.

This USB stick was then put inside a brand new laptop.

A remote possibility is that Wright’s computer secretly copied files to the USB stick, files which were then transferred to the new laptop, where they ran behind the scenes and modified the freshly downloaded Electrum software. This seems unlikely, though.

Most scenarios involve Wright somehow running a modified version of Electrum, but another remote possibility is that he discovered a bug in the code that allows the software to be tricked into reporting a valid result for an invalid signature. Again, this is unlikely.

Let’s look for more clues, this time from a Wired article:

Andresen says an administrative assistant working with Wright left to buy a computer from a nearby store, and returned with what Andresen describes as a Windows laptop in a “factory-sealed” box. They installed the Bitcoin software Electrum on that machine. For their test, Andresen chose the message “Gavin’s favorite number is eleven.” Wright added his initials, “CSW,” and signed the message on his own computer. Then he put the signed message on a USB stick belonging to Andresen and they transferred it to the new laptop, where Andresen checked the signature.

At first, the Electrum software’s verification of the signature mysteriously failed. But then Andresen noticed that they’d accidentally left off Wright’s initials from the message they were testing, and checked again: The signature was valid.

“It’s certainly possible I was bamboozled,” Andresen says. “I could spin stories of how they hacked the hotel Wi-fi so that the insecure connection gave us a bad version of the software. But that just seems incredibly unlikely. It seems the simpler explanation is that this person is Satoshi.”

There’s a bit of a smoking gun here. A factory seal doesn’t prove something hasn’t been tampered with any more than writing ‘this is genuine’ on a CD makes it genuine. Instead of buying a laptop himself, Andresen allowed one of Wright’s representatives to source it. This means the laptop can no longer be considered ‘clean’. It could have been preloaded with modified software, either to trick the computer into downloading a modified version of Electrum, or to modify a legitimately downloaded version of Electrum during or after installation.

As Andresen mentions himself, it is also possible the Wi-Fi connection was compromised to point to a different download location, in which case even a clean computer could be compromised.

Either way, a major weakness in Andresen’s account is that it sounds like he was already convinced of Wright’s story before he arrived, making him the ideal victim of a confidence trick. This means he may have let his guard down in permitting one of Wright’s associates to source the ‘clean’ machine, and in verifying the legitimacy of the software installed on it. It is possible to check that software has not been modified by comparing a checksum (such as MD5) of the downloaded files against the one published by the developers; it would be interesting to know whether Andresen performed this test. It is also very suspect that Wright insisted on keeping the laptop and USB stick after the demonstration without a compelling reason, since handing them over would have allowed Andresen to verify the test afterwards.
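
For what it’s worth, that check is straightforward. Here is a minimal Python sketch of the idea, hashing a downloaded installer and comparing it against the checksum published by the developers (the file name and expected hash below are placeholders, not real Electrum values, and a strong hash such as SHA-256, or better still the developer’s signature, is preferable to MD5):

# A minimal sketch of checking a download against a published checksum.
# The file name and expected hash here are placeholders, not real Electrum values.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

actual = file_sha256("electrum-installer.exe")
print("OK" if actual == EXPECTED_SHA256 else "MISMATCH - do not trust this file")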

There are other possibilities too. Andresen may have not witnessed any of this and may be in on the scam, or acting under duress. Another unlikely possibility is that Craig Wright is Satoshi Nakamoto.

As Gavin Andresen says himself, the simpler explanation is often the most likely, and in this case it seems most likely he was bamboozled by a world class con artist.

If Craig Wright is proven as Satoshi beyond reasonable doubt, then I’m going to be unreasonable

I’m conflicted. Craig Wright has unveiled himself as the mysterious creator of Bitcoin. Something about this story just doesn’t sit right with me.

I can’t work out whether I just don’t want it to be true, or whether something genuinely doesn’t add up. Gavin Andresen is far more informed than me, and has been in contact with both Satoshi and Wright – that he is convinced of Wright’s authenticity should be enough to satisfy me.

I don’t know what my vision of Satoshi Nakamoto is, I just know that Craig Wright is not my Satoshi. It’s easy to get carried away and forget that whoever created Bitcoin is almost certainly a fallible human being who can never live up to the weight of expectations that have been projected upon them.

There’s a difference, however, between being fallible and just being dodgy. Some things about Wright’s story strike me as the latter. He claims to own the world’s 17th most powerful supercomputer, but there is no evidence of this and the manufacturer denies selling it to him. He also faked having a PhD on his LinkedIn profile and is having his tax affairs scrutinised in Australia.

He’s publicly declaring himself as Satoshi Nakamoto to end speculation… while making it incredibly difficult to independently verify his claim, under the convenient excuse that he likes “being difficult”. When Wright demonstrated his proof to journalists, it is not impossible that he altered the software to give the appearance of validating something that is actually invalid. This is why independent verification is important, something that my Satoshi would have valued.

The problem is, I don’t know what standard of proof I require. Even if Wright demonstrated possession of private keys known to be held by Satoshi that I could verify for myself, I’d still want to know the role of others who ‘helped’ him create Bitcoin, notably Hal Finney and Dave Kleiman who are now sadly deceased and unable to confirm his version of events. Perhaps I like the mystery so much I’d perform the mental gymnastics required to, in my mind at least, keep it alive.

I may well come around to the idea that Craig Wright is Satoshi Nakamoto, and if I can let go of my doubts, I want to give Wright the credit he deserves and celebrate his gift. Until then I remain sceptical and, quite possibly, unreasonable.

Will full blocks really be bad news for Bitcoin?

The idea that full blocks, or a ‘fee event’ will be bad news for Bitcoin is based on a couple of assumptions. First, that Bitcoin is used as a currency, and second, that those using it as such will stop doing so.

This is just wrong. Bitcoin is young and evolving, with only a minority of transactions used for purchases.

Bitcoin is primarily a commodity and store of value at this stage of its existence. Full blocks have little impact on this role.

Leicester City Football Club are currently leading the English Premier League. Their stadium reaches capacity every game, with demand so high that £50 tickets are being touted at £15,000 for a pair, a major ‘fee event’.

No journalist would frame this as bad news for Leicester City. It is an achievement, a direct consequence of their success.

Despite this, higher capacity would enable greater ticket sales rather than ‘missed opportunities’, and on 7th May, as the full-time whistle blows, those opportunities will have passed forever.

This is not the case for Bitcoin. As a commodity, rather than an event, Bitcoin will increase in value as demand increases. There is no deadline at which its value drops to zero.

For Bitcoin to go from less than 8,000 transactions per day to over 200,000 in 4 years is a success story. Full blocks are a newsworthy event, a celebration of what Bitcoin has achieved. “Product in such high demand that nobody wants it any more” read no headline, ever.

Any news stories about reaching capacity are good. The real story is a four-year increase in use of around 2,500%, with major capacity increases on the way. The future is bright.

photo credit: 20,000 Miles via photopin (license)

Should the banking world fear Bitcoin?

A lot of people think that Bitcoin is a threat to banking, but is it?

Bitcoin does eliminate the need to use a bank, but so does storing all your money in cash. Obviously stuffing a bundle of notes under your mattress is impractical, and all payments must be made in person unless you want to send money through the postal service (not recommended).

So yes, on the face of it, Bitcoin is a threat to banks.

In many other ways though, it isn’t.

One of Bitcoin’s biggest advantages is the ability to transfer money between different accounts at very low cost. In Europe, however, the banks already allow you to do this free of charge. I regularly make transfers between accounts that are completed and spendable immediately. Clearly, banks are not making much money out of transferring money. At far greater risk from Bitcoin in this regard are payment and money transfer services like PayPal, Visa, MasterCard, Western Union and MoneyGram.

So how do banks make their money? Well, we have the fractional reserve banking system to thank for that. When you put your money in a bank account, the banks don’t just leave that money sitting there waiting for you to withdraw it. Instead, they go out and sell debt. Credit cards, mortgages, overdrafts, loans… guess whose money the bank is handing over when it provides these products – yours. Obviously the bank leaves some money aside (a fraction, as a reserve) to provide a cushion, so that when you want to withdraw your money it doesn’t have to say “sorry, we’ve loaned your money to Ms Jones for her new car, you’ll have to come back when she’s repaid”. In this system, though, it can all fall apart if everybody tries to withdraw their cash at once and the bank cannot get hold of enough money to pay.
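
As a toy illustration of that ‘fraction as a reserve’ (the numbers are invented, and real reserve requirements are far more involved than this):

# A toy sketch of the fractional reserve idea described above.
# The numbers are invented; real reserve requirements are far more involved.
deposits = 1_000_000        # money customers have paid in
reserve_ratio = 0.10        # the 'fraction' kept aside as a cushion

reserve = deposits * reserve_ratio
loaned_out = deposits - reserve
print(f"Kept in reserve: {reserve:,.0f}")    # 100,000
print(f"Loaned out:      {loaned_out:,.0f}") # 900,000

# The system works until too many customers want their money back at once.
withdrawal_requests = 250_000
if withdrawal_requests > reserve:
    print("Bank run: the bank cannot pay everyone who asks")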

When done properly, fractional reserve banking is actually a pretty useful system. Catastrophic problems have historically arisen when bankers have gotten greedy and overstretched themselves. When the banks then run out of money (as in 2008), governments have stepped in and given them money. How did the government generate this money? By printing more of it!

What does this mean for Bitcoin?

Bitcoin would not mean the end of fractional reserve banking.

Everything the banks can currently do, they could also do with Bitcoin. You would hand over your Bitcoin to the bank, and they would look after it and loan it out to other people.

For any transition away from traditional finance to Bitcoin, it is actually really important that this continues to happen. Imagine trying to buy a house, except you’re suddenly not allowed to get a mortgage and have to save up the entire cost of the house upfront: there are many situations where debt can be useful.

A transition to Bitcoin will create new challenges for banks. One of those is incentive. One of the incentives to using a bank account is that you trust the bank to look after your money. Having money in a bank account is safer than having it hidden under a mattress. It’s also an awful lot more convenient.

Bitcoin can replace the role of trust and convenience; in countries where basic banking services are currently ‘free’, you have less reason to use a bank account. In countries where you currently pay for basic banking services, Bitcoin is a complete no-brainer.

The consequence of this is that banks will have to start competing for your business in other ways – they need your money, otherwise they can’t lend it to anyone else. Basic banking services have previously been a major incentive, one that people have been prepared to pay for. Instead, in order to compete, banks will have to find new incentives – these could include features such as insuring your Bitcoin against theft. People are always willing to pay for security, and to a non-technical person, looking after their own Bitcoins may feel a little like keeping money under the mattress.

Another incentive banks can provide that Bitcoin itself cannot is interest. Many bank accounts already pay interest; the amount they pay is likely to need to increase in order to retain customers. Currently, if you receive 1% interest and the bank loans your money out at 6%, the bank makes a 5% margin. Simple, right? I have a feeling banks will need to offer much higher interest than they currently do in order to attract customers.

So the banks will have to adapt to compete, but they’re safe right?

No. Remember the problem of greedy bankers? Well, that hasn’t gone away. In fact, it’s become an even bigger problem. Banks will have to be more cautious not to overstretch themselves – if they cannot rely on the government to print more money and save them (impossible with Bitcoin), the only option they have if they run out of money is bankruptcy!

There are many fascinating scenarios that could occur in this situation. In order to protect fractional reserve banking, it may be in other banks’ interest to bail out a bankrupt bank. Why? A grave threat to fractional reserve banking in a Bitcoin world is people losing trust in the entire system. If a bank offers you 100% interest but you’re scared they’re going to go bankrupt and lose every penny (or bit) you save with them, you’re going to choose the trust and security of Bitcoin.

Unless the banks become super cautious, they’re going to start (well, continue) to fail. The worst part for even the most responsible of banks is that they’re all linked, and they’re only as strong as the weakest. If just one major bank gets itself into trouble and loses its customers’ money, customers of other banks are going to get scared. If those customers lose trust in banking and start to panic-withdraw their money en masse (and secure it in Bitcoin), the entire fractional reserve banking system could quickly and spectacularly collapse.

Can a leopard change its spots?

Widespread Bitcoin adoption will be hugely disruptive to the banking industry for the reasons outlined above, but Bitcoin alone will not spell the end for banking. Those banks that are keen to adapt and evolve can compete in a Bitcoin world. Those that don’t will find themselves going the way of Kodak.

We’ll likely see a shift towards greater transparency: instead of trusting banks (and the government to step in if they fail), customers will want independent auditors to assess how risky the bank is being with their money. Customers could choose to receive a lower rate of interest in exchange for a lower rate of risk.

So Bitcoin alone is not a death knell for the banking industry, but that does not mean they are safe.

Bitcoin isn’t the only technology that threatens to disrupt the traditional banking model. If you want to save money in a bank, you agree on a rate of interest and leave them to it. If they give you 3% and manage to earn 8%, they’ve made 5%.

Another breakthrough technology that is emerging is social lending. We no longer need to ask the banks to match up lenders and borrowers – we can already do it ourselves. In the UK I have used Zopa and RateSetter, which act as an exchange for people who want loans and people who have savings. They take a service charge of around 1% and match lenders with borrowers, with the rate determined by supply and demand. Instead of the lender receiving 3% and the borrower paying 8% with a traditional bank, the two meet in the middle: the lender receives 5%, the borrower pays 6%, the service charge is covered and the banks get nothing.
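
To put rough numbers on that (using the illustrative rates above, not the actual pricing of Zopa, RateSetter or any bank), a quick sketch in Python:

# A rough sketch of the spread, using the illustrative rates above
# (not the actual pricing of Zopa, RateSetter or any bank).
bank_saver_rate    = 0.03   # the bank pays the saver 3%
bank_borrower_rate = 0.08   # the bank charges the borrower 8%
print(f"Bank keeps {bank_borrower_rate - bank_saver_rate:.0%} of the spread")             # 5%

p2p_lender_rate   = 0.05    # the lender receives 5%
p2p_borrower_rate = 0.06    # the borrower pays 6%
print(f"Exchange keeps {p2p_borrower_rate - p2p_lender_rate:.0%} as its service charge")  # 1%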

While Bitcoin alone does not signal the end of banking as we know it, the challenges the banks face are vast and unenviable. One thing is certain, technology will march on whether they embrace it or not.

Vitalik’s Dilemma: Why no other open crypto-currency could supplant Bitcoin

Bitcoin is revolutionary. It also has many advantages over all other cryptocurrencies.

It is the cryptocurrency establishment: a brand recognised around the world. Its ecosystem has orders of magnitude more investment than all other cryptocurrencies combined. It has an abundance of talent, is the most secure, and has more computing power than the world’s top 500 supercomputers combined, to list just some of the first mover advantages afforded to it.

The odds stacked against any rival would seem insurmountable.

A flagship feature of Bitcoin is that it is consensus driven.

There is no technical reason why any improvement or feature adopted by a rival could not be also adopted by Bitcoin. We’re already seeing an example of that where would-be challenger Ethereum, an innovative and impressive technology in its own right, is having its code adapted for use on the Bitcoin network in the form of Rootstock, where it also benefits from Bitcoin’s established infrastructure and security.

Any Bitcoin alternative that clearly offers an improvement will see that improvement adopted by Bitcoin, as consensus will be easily achieved. However, let’s consider the unimaginable… some rival manages to gain momentum and end Bitcoin’s reign as the king of cryptocurrency.

If that rival is private, say created by a consortium of banks who manage to create useful features impossible under a decentralised, consensus based system, it’s game over. Bitcoin has lost, the centralised currency has won, normal service is resumed, status quo is restored. You get the picture.

If the rival is another open-source, consensus based cryptocurrency, things get a little more interesting.

The dilemma is that by replacing one established consensus based cryptocurrency with another, the entire experiment becomes a failure. It is mutually assured destruction.

Any belief in the idea that a consensus-driven cryptocurrency can succeed would be over. If it happened to Bitcoin, what’s to stop it happening to its successor? Who is going to put their faith and money in a system that may itself be gone as soon as a new, trendier replacement emerges?

photo credit: You Choose Your Path via photopin (license)

Understand the Bitcoin Lightning Network with a simple experiment you can try for yourself

Bitcoin, in its current form, does not scale.

Currently, every single transaction is stored by every single participant on the network.

Imagine if that’s how the Internet worked. Instead of going and getting only the information you needed, you had to download a copy of the entire Internet to your computer. It just wouldn’t work.

Instead of keeping all the transactions on the blockchain, Lightning Network plans to move most of them elsewhere, so they are only stored by those who need them. This would allow it to scale near-infinitely.

It does this by routing payments through channels.

I’ve seen people speculating about using the Lightning Network with multiple channels, one to pay for their coffee, another to pay for clothing, depending on which channel will likely be ‘quickest’. This reminds me of when I first learned about email in the 90s, when people were discussing how it could be sent between computers ‘in minutes’.

In reality, data can make its way entirely around the world in a fraction of a second, at speeds not far behind the speed of light.

You can test this theory yourself. I live in the UK; on the opposite side of the world is Australia. Go to the command prompt and type:
“ping 139.130.4.5”

That is the IP of a computer located in Australia. The ping command tells my computer to send a ‘hello’ message to that computer, and then tells me the time it took to receive a reply.

The time I am given is around 360ms, just over a third of a second (a second being 1,000ms).

From where I am in the UK, Sydney, Australia is 10,645 miles away. That means that in around a third of a second, the ping request has travelled 21,290 miles around the world, at a staggering 59,139 miles per second.
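
The arithmetic, as a quick sketch in Python (using the distance and ping time quoted above):

# A quick sketch of the arithmetic above.
one_way_miles = 10645            # UK to Sydney, as quoted above
round_trip_seconds = 0.360       # the ~360ms ping time

round_trip_miles = one_way_miles * 2               # 21,290 miles
speed = round_trip_miles / round_trip_seconds      # miles per second
print(f"{speed:,.0f} miles per second")            # roughly 59,139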

The speed of light however, is 186,282 miles per second. So, relatively speaking, why so slow?

The answer is that the request had to be routed. If there were a fibre optic cable running directly from my house all the way to Sydney, we would get much closer to light speed (light in fibre travels at roughly two-thirds of its speed in a vacuum). However, there isn’t: the request had to be routed through lots of different channels (connections between two different locations).

Considering how much we use and depend on the Internet, it’s easy to forget just how impressively it performs. Our computers also possess the basic tools needed to see exactly how data is routed around the Internet between our machine and another.

Another trip to the Windows command line and I type: “tracert 139.130.4.5”.

This is the trace route command, and it will piece together exactly the route the ping request data took to get there.

In my case, the first channel of the route is a 1ms trip from my laptop to my router over WiFi. The request then makes a 20ms trip to my ISP, where it is passed around between a few switches, before arriving, by the 24ms mark and on its 7th hop, at IP 195.66.236.166, the London Internet Exchange (LINX). From there, it reaches Hong Kong (IP 202.40.149.146) by 140ms, checking in a couple of times along the way. The next stage of its journey is a big one: by 340ms it has arrived in Sydney at IP 203.50.13.97.

The journey the Internet traffic took can be summarised as follows:

Hop(s) | Time   | Location
1      | 1 ms   | WiFi Router
2-7    | 24 ms  | My ISP
8      | 24 ms  | London Internet Exchange
9-10   | 140 ms | Hong Kong
11-14  | 340 ms | Sydney
15     | 340 ms | Final Destination

As you can see, it took 15 hops in total for the data to get from my computer in the UK to its destination in Australia. The times mentioned are actually round-trip, so the trip to Hong Kong actually took 70ms one-way, but involved 20 hops round-trip.

If you visit an Australian website from the other side of the world, you don’t give a second thought to the route the data took; it all just happens quickly and behind the scenes. This is exactly how the Lightning Network will work. It sounds incomprehensible to the end user, but so does the Internet, which we take for granted.
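
As a purely conceptual sketch (not the actual Lightning protocol, which adds payment channels, HTLCs, fees and capacity limits), here is what finding a multi-hop route looks like in Python: given a graph of channels, pick a path of hops between payer and payee, just as the traceroute above strung together hops between my laptop and Sydney. All of the names are made up:

# A purely conceptual sketch of multi-hop routing - not the real Lightning
# protocol, which adds payment channels, HTLCs, fees and capacity limits.
from collections import deque

# A toy network describing who has an open channel with whom (names made up).
channels = {
    "me": ["my_hub"],
    "my_hub": ["me", "exchange", "coffee_shop"],
    "coffee_shop": ["my_hub"],
    "exchange": ["my_hub", "aus_hub"],
    "aus_hub": ["exchange", "sydney_merchant"],
    "sydney_merchant": ["aus_hub"],
}

def find_route(source, destination):
    """Breadth-first search for the fewest-hop route through the channels."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbour in channels.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(" -> ".join(find_route("me", "sydney_merchant")))
# me -> my_hub -> exchange -> aus_hub -> sydney_merchant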

The Lightning Network is the perfect solution to Bitcoin’s scaling problem. While it sounds complicated, and it’s hard to imagine what it will be like to use, thanks to the great minds working on it, it’s probably going to feel pretty similar to how we use Bitcoin now, except with confirmation times of less than a second and at near zero cost.

The Bitcoin Classic developers should use the block size increase to showcase their abilities

Bitcoin Classic has, for me, had a questionable start. The team that wants to lead development of the Bitcoin protocol, a technology with a market capitalisation approaching $6 billion, has done a few things that concern me. One of their team, Michael Toomim, boasted about being high in an IRC chat with Core developers, while his brother, Jonathan, admitted that their plan is to simply copy Segregated Witness code from Core when it’s ready.

My biggest concern, however, is that Classic are simply taking all the hard work and years of effort that have gone into maintaining Bitcoin Core, tweaking a tiny bit of code, and then taking control of the whole project.

Anyone with even a basic level of coding ability could probably work out how to increase the block size limit to 2MB themselves… it’s not complicated.

I’m not closed minded to the idea that there are teams out there other than Core who possess the skill set to advance Bitcoin, I just want to see proof of this before we go handing over the keys.

It has previously been established that increasing the block size through a soft fork is technically possible, but more complex to code.

It is likely that the Bitcoin block size is going to have to be increased on multiple occasions as the network grows. Why clunkily kick old software off the network every time through a hard fork, when we can instead preserve some legacy functionality and gracefully degrade older clients?

Doing the legwork now and coding a resilient soft fork solution for future block size increases would showcase the capabilities of whoever implemented it.

In summary, we’d move to a two-block system. Old clients would see the 1MB ‘legacy’ block, while those who had upgraded would also see an additional expanded block of, say, 8MB.

Transactions that occurred on the expanded blocks would be invisible to older clients, but they would be protected from double spends and being kicked off the network completely.

Once coded, future block size increases could easily be rolled out using the same mechanism. All the software needs to know is the maximum block size it can accept. The updated software would accept blocks up to 8MB.

The next update could increase the maximum block size to 32MB for example, and miners would start mining 8MB and 32MB blocks.

In that case, the 1MB legacy blocks would finally disappear. At this point anybody who still hadn’t updated their old software would be kicked off the network after a long grace period, and 8MB would become the legacy block size.

The network would go from 1MB+8MB blocks to 8MB+32MB blocks, with the newest software recognising 32MB as the maximum block size. In addition, the software could provide alerts that the main chain is now mining larger blocks and older software requires an upgrade to participate fully.
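
To illustrate the idea (this is my own simplified sketch, not actual Bitcoin Core or Classic code, nor any specific soft fork proposal), the rule each client enforces boils down to the largest block size it knows about:

# A simplified sketch of the 'two block' idea described above - my own
# illustration, not actual Bitcoin Core, Classic or soft fork proposal code.

LEGACY_LIMIT = 1_000_000      # the 1MB block every client understands
EXPANDED_LIMIT = 8_000_000    # the 8MB expanded block, visible only after upgrading

def block_valid(block_size_bytes, client_max_size):
    """Each client simply enforces the largest block size it knows about."""
    return block_size_bytes <= client_max_size

# An old, non-upgraded client only ever sees and checks the legacy block.
print(block_valid(900_000, LEGACY_LIMIT))      # True - legacy block accepted
# An upgraded client also accepts the expanded block alongside it.
print(block_valid(7_500_000, EXPANDED_LIMIT))  # True
# A later upgrade just raises the ceiling again, e.g. to 32MB,
# at which point 8MB becomes the new 'legacy' size.
print(block_valid(7_500_000, 32_000_000))      # True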

This change would require an initial outlay of effort, but would provide future benefit for Bitcoin, eliminating much of the contentiousness and fear that surrounds hard forks.

I’m open minded. If the Classic developers or any other team step up to the challenge and show themselves capable and willing to look at more complex solutions in the best long term interests of Bitcoin, they will earn my support. As long as they’re a cover band, I’ll stick with the original artist.

photo credit: my drum kit 2 via photopin (license)

A question for Core developers on the block size: let’s find the middle ground

I’m a fan of Bitcoin Core. They’ve shown themselves up to the task of delivering reliable and revolutionary software, an incredibly strong foundation upon which Bitcoin has been built.

On top of this, they have a vision for the future that is truly scalable.

The fears coming from the bigger blocks side of the debate are that full blocks are going to create a fee event and lead to a slow down in processing transactions.

I’m sure most people would agree that if fees got ‘too’ high, or transactions became ‘too’ slow, any potential risks from implementing a hard fork and increasing centralisation would be outweighed by the serious damage that would otherwise be inflicted upon Bitcoin’s reputation and usefulness.

While I agree that full blocks and slower transactions in the short term do not have to signal disaster, I do have a threshold where I think an increase in block size should be urgently implemented.

Everybody has a different threshold, for some, any increase in fee will signal disaster, others may think $1000 fees per transaction are acceptable.

I know the Core team’s roadmap for scaling. I would just appreciate some clarification from them as to where they would draw the line and push out a block size increase if network performance degraded or fees became bloated.

Perhaps the Core developers are closer to the big blockers than we all realise. If we can have pre-agreed criteria for what conditions would trigger an emergency block size increase, then the community could unite.

Let’s give a completely hypothetical example: maybe 7 consecutive days where a 15 cent fee fails, on average, to reach the block chain within an hour could be the threshold that triggers the Core developers to release a hard fork block size increase. Some sort of threshold would eliminate much of the uncertainty which has spawned so much of the tension.
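
Sticking with that completely hypothetical threshold, the check itself is easy to express; the figures below are invented purely for illustration:

# A sketch of the completely hypothetical trigger described above.
# The daily figures would come from real network data; these are invented.

HOUR_IN_MINUTES = 60
REQUIRED_CONSECUTIVE_DAYS = 7

# Average minutes for a 15-cent-fee transaction to reach the block chain, per day.
daily_average_minutes = [45, 70, 90, 66, 75, 80, 61, 72, 95, 68]

def fee_event_triggered(daily_averages):
    """True once the average exceeds an hour for 7 consecutive days."""
    streak = 0
    for minutes in daily_averages:
        streak = streak + 1 if minutes > HOUR_IN_MINUTES else 0
        if streak >= REQUIRED_CONSECUTIVE_DAYS:
            return True
    return False

print(fee_event_triggered(daily_average_minutes))  # True for this invented data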

I’d just like some indication from Core as to where they stand on this. If they turn around and say they think up to $10 average fees in 2016 would be acceptable and why, I can make an objective decision about whether I share their vision. At the moment I feel a little like I’m making that decision blind.

Just some sort of clarification could help the community feel more at ease and defuse the tension that has been building. This dispute has been a hotbed of fear, uncertainty and doubt; transparency is the best weapon we have against them.

Forking crazy: a dramatic speculation on how a contentious hard fork could play out

Bitcoin blocks are full. Every 10 minutes, around 1MB of transactions makes its way into the block chain; the network is at capacity.

The community is divided on what happens next. Little do they know that what seems like a simple choice between competing clients could turn into a plot worthy of Hollywood with subterfuge, subversion and excitement in abundance.

The Core team of developers are behind the software that has powered Bitcoin for the last 7 years. Their plan, Segregated Witness, is to start moving some of the transaction data outside the blocks to create more space inside them, and it can be introduced through a soft fork.

The Core developers also have a revolutionary idea, the Lightning Network, which they say will allow Bitcoin to scale substantially without creating huge and centralising demands on the block size.

A new team of developers, named Classic, think this will not happen quickly enough, and want to create additional capacity now by simply doubling the block size through a hard fork.

I previously explained the difference between a soft and hard fork, but for this article all you need to know is that a hard fork is generally considered more dangerous because people who don’t upgrade their software will no longer be connected to the same Bitcoin network as everyone else, which could create problems.

What is playing out is a battle of ideologies. One side believes in patiently planning for the future, the other thinks that short term capacity issues are far more pressing to prevent Bitcoin losing momentum.

In reality, the Bitcoin network could probably hard fork to 2MB blocks without major incident, and the network can probably operate at full capacity for a while without a major impact on growth. The doomsday scenario painted by either side is overblown, but since this battle dictates the future direction for Bitcoin, its importance is not.

Realising this is about power allows us to more accurately speculate on the dynamics that could play out. The first assumption to make is that as both sides believe they are right, they will use all tools available to them to secure their vision.

Things that may feel like a malicious attack to one side would be considered a moral necessity to protect Bitcoin by the other. All actions are likely to be well intentioned by those carrying them out, so try not to take them personally; remember, all parties believe they are acting in the best interests of Bitcoin.

With that in mind, let us speculate on how the Core team may react to Classic’s attempt to hard fork Bitcoin to 2MB blocks.

For a hard fork to be successful it needs consensus. A perfect hard fork would have 100% consensus, everyone would have updated their software before the fork activated and nobody would be separated from the main network.

On paper, it seems miners hold the power in determining whether a hard fork has consensus. A threshold is set for activation based on the number of recent blocks mined that support a fork, say 75%.

In reality, it’s not that simple. It’s easy to measure miner support in this way, but even if 99% of miners are in favour, if nobody else changed their software, those miners would just be mining amongst themselves, their newly minted coins worthless. Everybody else continues using the existing network, which can be considered to have ‘won’.

Even then, it’s not that simple. If a hard fork to Bitcoin Classic was successfully achieved through 100% miner consensus but nobody else upgraded their software, the existing network could not be considered to have ‘won’, because it is dependent upon miners to function. It would actually be rendered completely useless, as no blocks would be created and no transactions could take place.

The most likely outcome is that if 75% of miners reached consensus, most of the remaining 25% would follow suit for fear of ‘losing out’. This is the most rational action to take, and consequently 75% should actually be enough to secure a successful hard fork.

Even in the best case scenario for Core, where 25% of miners continued to work on their side of the fork, seemingly against their economic interests, the network would still be severely disrupted. Instead of 10 minute blocks, it would take 40 minutes to process 1MB of transactions, while the Classic network could handle 600% of this capacity (2MB every 13.3 minutes). The Core network would be rendered useless.
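
The arithmetic behind those figures, as a quick sketch (before either chain’s difficulty has had a chance to retarget):

# A quick sketch of the arithmetic above, before either chain's
# difficulty has had a chance to retarget.

NORMAL_INTERVAL_MINUTES = 10        # one block every 10 minutes at full hash rate

def block_interval(hash_share):
    """Average minutes per block for a chain keeping this share of the miners."""
    return NORMAL_INTERVAL_MINUTES / hash_share

core_interval = block_interval(0.25)       # 40 minutes per 1MB block
classic_interval = block_interval(0.75)    # ~13.3 minutes per 2MB block

core_throughput = 1 / core_interval        # MB of transactions per minute
classic_throughput = 2 / classic_interval

print(round(core_interval, 1), round(classic_interval, 1))   # 40.0 13.3
print(round(classic_throughput / core_throughput))           # 6, i.e. 600%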

Following defeat in a hard fork, how could Core cling on to some hope and stay in the game? The answer comes in the form of a new hard fork of their own, deployed very quickly.

That hard fork could introduce any number of ideas, many of which may have already been coded behind the scenes in anticipation. The absolute minimum requirement would be lowering the difficulty level so that their network could continue to function.

The next priority might be to keep their network as closely synchronised with Classic’s as possible – the further the competing sides of the fork drift apart, the less likely it is anyone would ever switch away from the more dominant one.

To help stay synchronised, it would probably make sense for Core to increase the block size to 2MB. This sounds counterproductive, since it is exactly what they have been trying to avoid, but it would be essential to keep their network as closely synchronised as possible and to prevent transactions that wouldn’t fit into a 1MB block from disappearing from their side of the fork.

This would be life support, Core would be limping along next to Classic, running a separate but similar network.

As long as Core still has some skin in the game, they will be seen as a threat by Classic. Miners from the Classic side of the fork could conspire to attack the Core network. With the Core difficulty level decreased, attacks against the network would be far easier.

This would not be a malicious attack or a reprisal for allowing the network to be split; it would simply be an act of self-preservation by the miners to protect their investment, and should be considered the most logical and inevitable action for them to take.

With that in mind, another change Core could introduce in a hard fork would be to alter the Proof of Work (PoW). One small change to one line of code would render useless every bit of specialised mining hardware the network currently uses, and help mitigate the risk of attack from the Classic network. Far from a nuclear option, it would be a logical act of self defence that could also gain support from the altcoin community, whose hardware could be useful under a different PoW.

Changing the PoW isn’t just a hypothetical idea; it has already been coded, ironically by a Core developer, as a suggestion for the Classic source code. This was more likely an early act of subterfuge than a serious proposal: a way for Core to flex their muscles and show that there are options open to them and that they could be willing to put up a fight.

So what would happen next? With updated software, the Core developers would have a phoenix network, resurrected from the ashes. This new network would be viewed by the community as decentralisation driven and backed by a group of talented developers with a proven track record. It is possible many would find such a network an attractive proposition, and it could potentially compete with Classic.

This would not be the end of the world; divergence is not uncommon in new technologies. Betamax vs VHS, Blu-ray vs HD DVD, SD cards vs their many rivals: history shows us that one format will triumph. Core vs Classic could become a noteworthy addition to the list, but ultimately one will win the format war of Bitcoin while the other ends up in the bargain bin with the rest of the altcoins.

What I have speculated would require alignment of a number of factors to be viable. Crucially, it would require sufficient appetite from the Core developers to put up such a potentially demanding and drawn out fight. While some may be keen for the challenge, without overwhelming developer consensus, the prospect of success is severely diminished.

More powerful at this stage is just the idea that they’d be prepared to put up a fight. Any miner that says they support Classic only needs a quiet word in their ear from someone in the Core team suggesting they might create a fork that risks making their millions invested in hardware worthless, and they just may be tempted to change their mind. Whether Classic can gain consensus, and whether Core would pursue this path remain to be seen, but if they did, it would make fascinating viewing.

The Bitcoin consensus problem: Scientific truths cannot be decided by popular opinion or the free market

Bitcoin has a problem.

It used to be a geeky niche interest: a technical curiosity that only a handful of intellectuals could understand, accompanied by libertarian groupies who swooned over its promise. This was an optimistic time, a community of people with different backgrounds but one common cause.

Back then, people who didn’t understand the technology would allow those who did to get on with building it; their judgement was trusted implicitly.

Then the world’s media took notice. After it became a household name, millions jumped on the bandwagon: Bitcoin became popular.

With popularity comes problems: popular opinion is not always right.

The previous arrangement, in which the lead team of developers was trusted to make technical decisions, is being altered. People with no technical understanding are demanding solutions about which the technically minded have great concerns.

The biggest issue revolves around the block size debate. The Core team of developers believe, amongst other things, that increasing the block size limit now through a hard fork is risky because:

  1. Without optimisations, the Great Firewall of China could struggle to pass larger blocks quickly enough, which could lead to greater centralisation, undermining a basic principle of Bitcoin: its decentralised nature.
  2. It’s not as simple as fixing ‘one line’ of code, as has been suggested. Without additional work, there are malicious transactions which, in larger blocks, could single-handedly take over 10 minutes to verify and harm the network.
  3. A hard fork is inherently risky and should only be used when absolutely necessary, which right now they don’t believe it is.

The Core team of developers has decided instead to pursue other more technically complex solutions which they believe will solve the scaling problem in the best long term interests of Bitcoin. Everybody is in agreement that Bitcoin needs to scale.

This has been countered by various alternative solutions by developers who have not demonstrated the same technical skill, but have shown themselves better equipped at engaging and appeasing the masses.

Let’s consider what that means. Imagine you’re starting to feel ill. You go to the doctor who tells you not to worry as there is a pill that will leave you feeling a little unwell in the short term, but to take it easy for a few weeks and before long you’ll be feeling better than ever.

Outside the doctor’s surgery, a stall has been set up with a crowd of people who have been whipped up into a frenzy, hailing a miracle cure that will make everything better. The doctor warned you about this, and said that the cure on offer might make some of your symptoms disappear in the short term, but that in the long term it could cause serious and irreversible damage to your health, and we cannot yet be certain of the side effects.

At the moment, the Bitcoin community looks like it is heading towards the latter option, ignoring the advice of the people who have delivered near perfect health to the network for the last 7 years.

Scientific truths cannot be determined by consensus. If bigger blocks do slow down block propagation and jeopardise the integrity of Bitcoin, it doesn’t matter if 100% of users think they won’t.

Ultimately, yes, there are very capable people who think the block size can be safely increased. However, while popular consensus is very easy to measure, it is almost impossible to measure intellectual consensus.

The Core developers are the only people with a proven track record. This means, for me, they represent scientific consensus. Deviating away from their leadership leads us into uncharted waters. This dispute is about more than a tiny change of code, it’s about the direction of Bitcoin, and whether scientific consensus can be superseded by the free market.

If you are concerned that full blocks will cause the network to struggle and a temporary fee market to form that will have a major impact, why not give the Core team a chance to prove you right? Bitcoin is still experimental; temporarily seeing how it performs at the limits of capacity could provide very useful lessons for the future. If the network does start to struggle, intellectual consensus will form very quickly around increasing the block size, but it is not there yet, despite all the debate.

Sadly, I feel like a block size increase against the wishes of the Core developers is increasingly inevitable. I don’t actually think this block size increase will cause measurable harm in the short term. The harm comes further down the line once the precedent has been set, and empowered by being “proven right on the block size debate”, the masses demand further compromises of an inexperienced team of developers willing to please.