
Merge branch 'develop'

This commit is contained in:
Andreas M. Antonopoulos 2020-09-02 12:44:21 -04:00
commit 562270c628
34 changed files with 2639 additions and 889 deletions


@ -12,7 +12,7 @@ While the bulk of this book is written for programmers, the first two chapters a
=== Motivation for the Lightning Network
As Bitcoin and the demand for transactions grows, the number of transactions in each block will increase until eventually hitting the block size limit. When blocks are full, excess transactions are left to wait in a queue. Many users increase the fees they're willing to pay in order to buy space for their transaction in the next block. At the same time, an increasing number of users are left behind. Their transactions, e.g. microtransactions such as common small spendings, are not economically qualified to be on the network. However, increasing block size simply shifts the problem to node operators, where each increase in blocksize results in a resource increase multiplied by an order of magnitude.
As Bitcoin and the demand for transactions grow, the number of transactions in each block will increase until it eventually hits the block size limit. When blocks are full, excess transactions are left to wait in a queue. Many users increase the fees they're willing to pay in order to buy space for their transaction in the next block. At the same time, an increasing number of users are left behind. Their transactions, e.g. microtransactions such as small everyday purchases, are no longer economically viable on the network. However, increasing the block size simply shifts the problem to node operators, where each increase in block size results in a resource increase multiplied by an order of magnitude.
Because blockchains are gossip protocols, each node is required to know and validate every single transaction that occurs on the network. Furthermore, once validated, each transaction and block must be propagated to the node's "neighbors", multiplying the bandwidth requirements. As such, the greater the block size, the greater the bandwidth, processing, and storage requirements for each individual node, effectively limiting the amount of scaling that can be done this way. Furthermore, scaling in this fashion has an undesirable side effect of centralizing the network by reducing the number of nodes and node operators. Since node operators are not compensated for running nodes, if nodes are very expensive to run, only a few well funded node operators will continue to run nodes.


@ -9,6 +9,25 @@ The Lightning Network is accessed via software applications that can speak the L
Users have the highest degree of control by running their own Bitcoin node and Lightning node. However, Lightning nodes can also use a lightweight Bitcoin client (commonly referred to as Simplified Payment Verification (SPV)) to interact with the Bitcoin blockchain.
=== Lightning Explorer
A Lightning network explorer is a useful tool to show the statistics of nodes, channels and network capacity.
Below is a non-exhaustive list (in alphanumerical order):
* https://1ml.com/, 1ML Lightning explorer
* https://explorer.acinq.co/, ACINQ's Lightning explorer, with fancy visualization
* https://lightblock.me/, Lightblock Lightning explorer
* https://hashxp.org/lightning/node/, hashXP Lightning explorer
[TIP]
====
Note that when using Lightning explorers, just as with existing block explorers,
privacy can be a concern: if users are careless, the website may track their IP addresses and collect records of their behavior (for example, which nodes they are interested in).
Also note that, because there is no global consensus on the current Lightning graph or on the current state of any channel policy, users should never rely on Lightning explorers to retrieve the most up-to-date information. That is, Lightning explorers should only be used to gather rough statistics about the Lightning Network.
====
=== Lightning Wallets
The term "Lightning Wallet" is somewhat ambiguous, as it can describe a broad variety of components combined with some user interface. The most common components of lightning wallet software include:
@ -41,10 +60,10 @@ The most important questions to ask about a specific wallet are:
- Does this Lightning wallet have a full Lightning Node or does it use a third-party Lightning Node?
- Does this Lightning wallet have a full Bitcoin Node or does it use a third-party Bitcoin Node? footnote:[If a Lightning wallet uses a third-party Lightning node, it is this third-party Lightning node that decides how to communicate with Bitcoin. Hence, using a third-party Lightning node implies that you as a wallet user also use a third-party Bitcoin node. Only in the other case, when the Lightning wallet uses its own Lightning node, does the choice of "full Bitcoin node" vs. "third-party Bitcoin node" exist.]
- Does this Lightning wallet store its own keys under user control (self-custody) or are the keys held be a third-party custodian?
- Does this Lightning wallet store its own keys under user control (self-custody) or are the keys held by a third-party custodian?
At the highest level of abstraction, question 1 and 3 are the most elementary ones.
From these two questions we can derive four possible categories.
At the highest level of abstraction, questions 1 and 3 are the most elementary ones.
From these two questions, we can derive four possible categories.
We can place these four categories into a quadrant as seen in Table <<lnwallet-categories>>.
But remember that this is just one way of categorizing Lightning wallets.
@ -112,7 +131,7 @@ Alice does not want to entrust custody of her bitcoin to third parties. She has
When looking for a new cryptocurrency wallet, you must be very careful to select a secure source for the software.
Unfortunately, many fake wallet applications will steal your money, and some of these even find their way onto reputable and supposedly vetted software sites like the Apple and Google application stores. Whether you are installing your first or your tenth wallet, always exercise extreme caution. A rogue app cannot only steal any money you entrust it with, it might also be able to steal keys and passwords from other applications by compromising your mobile device operating system.
Unfortunately, many fake wallet applications will steal your money, and some of these even find their way onto reputable and supposedly vetted software sites like the Apple and Google application stores. Whether you are installing your first or your tenth wallet, always exercise extreme caution. A rogue app can not only steal any money you entrust it with, but it might also be able to steal keys and passwords from other applications by compromising your mobile device operating system.
Alice uses an Android device and will use the Google Play Store to download and install the Eclair wallet. Searching on Google Play, she finds an entry for "Eclair Mobile", as shown in <<eclair-playstore>>.
@ -185,14 +204,14 @@ There are several ways Alice can acquire bitcoin:
* She can exchange some of her national currency (e.g. USD) at a crypto-currency exchange
* She can buy some from a friend, or an acquaintance from a Bitcoin Meetup, in exchange for cash
* She can find a _Bitcoin ATM_ in her area, which acts as a vending machine, selling bitcoin for cash
* She can offer her skills or a product she sells and accept payment in bitcoin
* She can offer her skills or a product she sells and accept payment in bitcoin
* She can ask her employer or clients to pay her in bitcoin
All of these methods have varying degrees of difficulty, and many will involve paying a fee. Some will also require Alice to provide identification documents to comply with local banking regulations. However, with all these methods, Alice will be able to receive bitcoin.
==== Receiving Bitcoin
Let's assume Alice has found a local Bitcoin ATM and has decided to buy some bitcoin in exchange for cash. An example of a Bitcoin ATM, one built by the Lamassu company, is shown in <<bitcoin-atm>>. Such Bitcoin ATMs accepts national currency (cash) through a cash slot and send bitcoin to a Bitcoin Address scanned from a user's wallet using a built-in camera.
Let's assume Alice has found a local Bitcoin ATM and has decided to buy some bitcoin in exchange for cash. An example of a Bitcoin ATM, one built by the Lamassu company, is shown in <<bitcoin-atm>>. Such Bitcoin ATMs accept national currency (cash) through a cash slot and send bitcoin to a Bitcoin Address scanned from a user's wallet using a built-in camera.
[[bitcoin-atm]]
.A Lamassu Bitcoin ATM
@ -253,7 +272,7 @@ Furthermore, Alice's channel peer can _forward_ payments via other channels furt
In other words: Alice needs one or more channels that connects her to one or more other nodes on the Lightning Network. She doesn't need a channel to connect her wallet directly to Bob's Cafe in order to send Bob a payment, though she can choose to open a direct channel too. Any node in the Lightning Network can be used for Alice's first channel. The more well-connected a node is the more people Alice can reach. In this example, since we want to also demonstrate payment routing, we won't have Alice open a channel directly to Bob's wallet. Instead, we will have Alice open a channel to a well-connected node and then later use that node to forward her payment, routing it through any other nodes as necessary to reach Bob.
At first, there are no open channels, so as we see in <<eclair-tutorial2.png>>, the "LIGHTNING CHANNELS" tab displays an empty list. If you notice, on the bottom right corner, there is a plus symbol (+), which is a button to open a new channel.
At first, there are no open channels, so as we see in <<eclair-channels>>, the "LIGHTNING CHANNELS" tab displays an empty list. Notice that in the bottom right corner there is a plus symbol (+), which is a button to open a new channel.
[[eclair-channels]]
.Lightning Channels Tab

File diff suppressed because it is too large

@ -32,17 +32,16 @@ The current status of the book is "IN PROGRESS". See below for status of specifi
| [LN Basics (How LN Works)](03_how_ln_works.asciidoc) | ########################## | :mag: |
| [Intro to LN Routing (HTLCs)](routing.asciidoc) | ###################### | :lock_with_ink_pen: |
| [Nodes (LN Clients)](node_client.asciidoc) | #################### | :mag: |
| [Operating a Node](node_operations.asciidoc) | # | :bookmark_tabs: |
| [Operating a Node](node_operations.asciidoc) | ################# | :bookmark_tabs: |
| [P2P Communication](p2p.asciidoc) | # | :bookmark_tabs: |
| [Channel Construction in Detail](channel-construction.asciidoc) | ### | :bookmark_tabs: |
| [Onion Routing and HTLC forwarding](onion-routing-htlc-forwarding.asciidoc) | # | :bookmark_tabs: |
| [Channel Construction in Detail](channel-construction.asciidoc) | ######### | :lock_with_ink_pen: |
| [Channel Graph and Gossip Layer](channel-graph.asciidoc) | # | :bookmark_tabs: |
| [Payment Path Finding](path-finding.asciidoc) | # | :bookmark_tabs: |
| [End-to-End Payment Presentation Layer](e2e-presentation-layer.asciidoc) | # | :bookmark_tabs: |
| [Payment Path Finding](path-finding.asciidoc) | ############## | :bookmark_tabs: |
| [End-to-End Payment Presentation Layer](e2e-presentation-layer.asciidoc) | ## | :bookmark_tabs: |
| [Lightning Applications (LApps)]() | # | :thought_balloon: |
| [LN's Future]() | # | :thought_balloon: |
Total Word Count: 50770
Total Word Count: 71133
Target Word Count: 100,000-120,000


@ -1,38 +1,219 @@
Payment channels are the core and most fundamental building block of the Lightning Network.
Of course, every detail of a technology is exists for a reason but the Lightning Network is literally built around the idea and concept of payment channels.
In the previous chapters you have already learned about payment channels, what properties they have, and on a high level how they work and how they can be constructed.
Of course, every detail of a technology exists for a reason and is important.
However, the Lightning Network is literally built around the idea and concept of payment channels.
In the previous chapters you have already learned that payment channels:

* allow the two peers who created them to send and receive bitcoin up to the amount specified by the capacity of the channel, as often as they want to.
* split the capacity of the channel into a balance between the two peers which, as long as the channel is open, is known only to the channel owners, which increases privacy.
* do not require the peers to make any additional onchain transactions other than the ones needed to open and, potentially at a later time, to close the channel.
* can stay open for an arbitrary amount of time, potentially forever.
* do not require peers to trust each other, as any attempt by a peer to cheat would enable the other peer to claim all the funds in the channel as a penalty.
* can be connected to a network and allow peers to send money along a path of connected channels without needing to trust the intermediary nodes, as those nodes have no ability to steal the bitcoin being forwarded.
In this chapter we will dig deeper into the protocol details that are needed to open and close payment channels.
Working through this rather technical chapter you will be able to understand how the protocol design achieves the properties of payment channels.
Where necessary some information from the first chapters of this book will be repeated.
If you are new to the topic we highly encourage you to start there first.
If, however, you already know a fair amount about Bitcoin Script, opcodes, and protocol design, it might be sufficient to skip the previous chapter and start here with us.
This book follows the construction of payment channels as described in BOLT 02, which is titled `peer protocol` and describes how two peers communicate to open, maintain and close a channel.
In this section we will only talk about maintaining and closing a channel.
The operation of a channel which means either making or forwarding a payment is discussed in our chapter about routing.
Also other constructions of payment channels are known and being discussed by the developers but as of writing only the channels as described in BOLT 2 are supported by the protocol and the implementations.
There will be one big difference though.
We will only discuss opening and closing a channel.
The operation and maintenance of a channel, which means either making or forwarding a payment, is discussed in our chapter about routing.
Other constructions of payment channels are also known and are being discussed by developers.
Historically speaking, these are the Duplex Micropayment Channels introduced by Christian Decker during his time as a PhD student at ETH Zurich, and the eltoo channels, which were also introduced by Christian Decker.
The eltoo channels are certainly a more elegant and cleaner way of achieving payment channels with the aforementioned properties.
However, they require the activation of BIP 118 via a soft fork and are, at the time of writing, only a potential future protocol change.
Thus this chapter will focus only on the penalty-based channels described in the Lightning Network whitepaper and specified in BOLT 02, which are currently supported by the protocol and the implementations.
To repeat what you should already know a payment channel is encoded as an unspent 2 out of 2 multisignature transaction output.
The capacity of the channel relates to the amount that is bound to the unspent 2 out of 2 multisignature transaction output.
It is opened wht the help of a funding transaction that sends bitcoin to a 2 out of 2 multisignature output together with a communication protocol that helps to initialize its state.
[NOTE]
====
The Lightning Network does not need consensus on features across its participants.
If the Bitcoin soft fork related to BIP 118 activates and people implement eltoo channels, nodes that support eltoo can create such payment channels, and the onion routing of payments along a path of channels will work just fine even if some of the channels are modern eltoo channels and others are legacy channels.
When Lightning Network connections are established, nodes signal feature bits for the global and local features that they support.
Thus having the ability to create eltoo channels would just be an additional feature bit.
In this sense, upgrading the Lightning Network is much easier than upgrading Bitcoin, where consensus among all stakeholders is needed.
====
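
To make the idea of feature bits concrete, here is a minimal sketch (not taken from any implementation; the bit position used is purely hypothetical) of how a node might test whether a peer advertised a feature in its `init` message:

[source,python]
----
def has_feature(features: bytes, bit: int) -> bool:
    """Return True if the given bit is set in a big-endian feature vector."""
    byte_index = len(features) - 1 - (bit // 8)  # bit 0 lives in the last byte
    if byte_index < 0:
        return False
    return bool(features[byte_index] & (1 << (bit % 8)))

# Hypothetical feature bit for eltoo-style channels (odd number = optional,
# following the "it's OK to be odd" convention).
HYPOTHETICAL_ELTOO_BIT = 101

# Example feature vector with bit 101 set (placeholder value).
peer_features = bytes.fromhex("0020000000000000000000000000")

if has_feature(peer_features, HYPOTHETICAL_ELTOO_BIT):
    print("peer could negotiate the (hypothetical) eltoo channel type")
else:
    print("peer does not support it; fall back to penalty-based channels")
----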
Let's quickly summarize what you should already know about payment channels on a technical level; the details are what you will learn in this chapter.
A payment channel is encoded as an unspent 2-of-2 multisignature transaction output.
The capacity of the channel relates to the amount that is bound to the unspent 2-of-2 multisignature transaction output.
It is opened with the help of a funding transaction that sends bitcoin to a 2-of-2 multisignature output together with a communication protocol that helps to initialize and maintain its state.
The balance of the channel encodes how the capacity is split between the two peers who maintain the channel.
Technically, the balance is encoded by the most recent pair in a sequence of pairs of similar (but not equal) presigned commitment transactions.
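
As a reminder of what the funding output looks like at the script level, the following sketch assembles the standard 2-of-2 multisig witness script and the P2WSH output that pays to it. The public keys are placeholders, and the helper is a simplification for illustration, not a complete transaction builder:

[source,python]
----
import hashlib

# Bitcoin Script opcodes used for a 2-of-2 multisig witness script
OP_2 = 0x52
OP_CHECKMULTISIG = 0xae

def funding_witness_script(pubkey_a: bytes, pubkey_b: bytes) -> bytes:
    """Build OP_2 <pubkey1> <pubkey2> OP_2 OP_CHECKMULTISIG.

    The two keys are sorted lexicographically, following the convention
    used for Lightning funding outputs."""
    key1, key2 = sorted([pubkey_a, pubkey_b])
    script = bytes([OP_2])
    script += bytes([len(key1)]) + key1   # push 33-byte compressed pubkey
    script += bytes([len(key2)]) + key2
    script += bytes([OP_2, OP_CHECKMULTISIG])
    return script

def p2wsh_script_pubkey(witness_script: bytes) -> bytes:
    """The funding output itself is pay-to-witness-script-hash (P2WSH):
    OP_0 <sha256(witness_script)>."""
    return bytes([0x00, 0x20]) + hashlib.sha256(witness_script).digest()

# Placeholder compressed public keys (33 bytes each), for illustration only
alice_pubkey = bytes.fromhex("02" + "11" * 32)
bob_pubkey = bytes.fromhex("03" + "22" * 32)

script = funding_witness_script(alice_pubkey, bob_pubkey)
print("witness script:", script.hex())
print("scriptPubKey:  ", p2wsh_script_pubkey(script).hex())
----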
.Bitcoin, Lightning and "Ownership" of funds
****
When someone says they 'own' bitcoin they typically mean that they know the private key of a bitcoin address that has some unspent transaction outputs (UTXOs).
The private key allows the owner to produce a signature for a transaction that spends the bitcoin by sending it to a different address.
Thus 'ownership' of bitcoin can be defined as the ability to spend that bitcoin.
If you have an unpublished but signed transaction from a 2-of-2 multisignature address, where some outputs are sent to an address you own, and additionally you own one of the private keys of the multisignature address, then you effectively own the bitcoin of that output.
Without your help no other transaction from the 2-of-2 multisignature address can be produced.
However, without the help of anybody else you can transfer the funds to an address which you control.
On the Lightning Network ownership of your funds is almost always encoded with you having a pre-signed transaction spending from a 2-of-2 multisignature address.
****
These commitment transactions should never hit the blockchain and serve as a safety net for the participants in case the channel partner becomes unresponsive or disappears.
They are also the reason why the Lightning Network is called an offchain scaling solution.
Each channel partner has both signatures for one of the commitment transactions from the sequence of pairs.
The split of the capacity is realized by a `to_local` and a `to_remote` output that is part of every commitment transaction.
The `to_local` output goes to an address that is controlled by the peer that holds this fully signed commitment transaction.
`to_local` outputs, which also exist in the second-stage HTLC transactions (which we discuss in the routing chapter), have two spending conditions.
The `to_local` output can be spent either at any time with the help of a revocation secret or, after a timelock, with the secret key that is controlled by the peer holding this commitment transaction.
The revocation secret is necessary to economically disincentivize peers from publishing previous commitment transactions.
Addresses and revocation secrets change with every new pair of commitment transactions that are being negotiated.
The Lightning Network as a protocol defines the communication that is necessary to achieve this.
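
The structure of the `to_local` script described above is specified in BOLT #3. The sketch below reproduces it in simplified form and mimics the two spending conditions in plain Python; the keys and the `to_self_delay` value are placeholders:

[source,python]
----
# Simplified sketch of the "to_local" output script:
#
#   OP_IF
#       <revocationpubkey>            # spendable at any time with the
#   OP_ELSE                           # revocation secret/key
#       <to_self_delay>
#       OP_CHECKSEQUENCEVERIFY        # otherwise only after the delay
#       OP_DROP
#       <local_delayedpubkey>
#   OP_ENDIF
#   OP_CHECKSIG
#
# The function below just mimics the two spending conditions so you can
# reason about them; it is not consensus code.

def can_spend_to_local(using_revocation_key: bool,
                       confirmations_since_commitment: int,
                       to_self_delay: int) -> bool:
    if using_revocation_key:
        # The channel partner holding the revocation secret may spend
        # immediately (this is the penalty path).
        return True
    # The local owner must wait to_self_delay blocks before spending.
    return confirmations_since_commitment >= to_self_delay

print(can_spend_to_local(True, 0, 144))     # penalty path: True
print(can_spend_to_local(False, 10, 144))   # delayed path, too early: False
print(can_spend_to_local(False, 144, 144))  # delayed path, matured: True
----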
=== Security of a payment channel

While the BOLTs introduce payment channels directly with the opening protocol, we have decided to talk about the security model first.
The security of payment channels comes from a penalty-based revocation system which helps the two parties split the capacity of the payment channel into a balance without the necessity to trust each other.
In this chapter we start from an insecure approach to creating a payment channel and explain why it is insecure.
We will then explain how timelocks are used to create revocable sequence maturity contracts, which form the penalty-based revocation system that economically incentivizes peers to maintain only the most recent state.
After you have understood these concepts we will quickly walk you through the technical details of opening and closing a channel.
Every known payment channel construction uses a 2-of-2 multisignature output as the basis of the payment channel.
We call the amount that is attached to this output the capacity of the channel.
In every case, both channel partners hold one of the two secret keys of the multisignature address, which means that they can only control the funds collaboratively.
==== An example of a highly insecure payment channel construction

Let us assume Alice does not know the details of the Lightning Network and naively tries to open a payment channel in a way that will likely lead to the loss of her funds.
Alice has heard that payment channels are 2-of-2 multisignature outputs.
As she wants to have a channel with Bob, and since she knows a public key of Bob's, she decides to open a channel by sending money to a 2-of-2 multisignature address derived from her key and Bob's.
We call the transaction that Alice used a _funding transaction_, as it is supposed to fund the payment channel.
However, signing and broadcasting this funding transaction would be a huge mistake.
As we have discussed, the bitcoin in the resulting UTXO can only be spent if Alice and Bob work together and both provide a signature for a transaction spending those coins.
If Bob did not respond to Alice in the future, Alice would have lost her bitcoin forever.
That is because the coins would be stuck in the 2-of-2 multisignature address to which she has just sent them.
Luckily, Alice has previously read Mastering Bitcoin; she knows the properties of Bitcoin Script and is aware of the risks involved in sending bitcoin to a 2-of-2 multisignature address for which she does not control both keys.
She is also aware of the "Don't trust. Verify." principle that Bitcoiners follow and doesn't want to trust Bob to help her move or access her coins.
She would much prefer to keep control over her coins even though they are to be stored in this 2-of-2 multisignature address.
While this seems like an impossible problem, Alice has an idea:
What if she could already prepare a refund transaction (which we will call a commitment transaction from now on) that sends all the bitcoin back to an address that she controls?
Before broadcasting her funding transaction she prepares and finalizes it, so that she knows its transaction ID.
She can now create the commitment transaction that spends the output of the funding transaction and ask Bob to provide a signature.
At that time Bob has nothing to lose by signing the commitment transaction.
He did not have coins in the multisig address anyway.
Even if he did, Alice intends to spend an output in which Bob was never involved.
Thus, at that point it is perfectly reasonable for Bob to sign the commitment transaction that spends the funding transaction.
On the other hand, you as a reader might wonder:
Why would Alice send money to a multisignature address just to prepare a transaction that sends the money back to her?
We hope you have wondered about this, because this is exactly the point where the innovation begins.
Just because people are generally expected to broadcast a transaction to the Bitcoin network as soon as they have signed it, no one forces you to do so.
As Alice would lose access to her bitcoin once she sends it to a 2-of-2 multisignature output for which she only controls one key, she needs to make sure that she will be able to regain access to her coins in case Bob becomes unresponsive.
Thus, before Alice publishes the funding transaction, she will create another transaction that sends all the bitcoin from the 2-of-2 multisignature output back to an address which she controls.

.The situation can be seen in the following picture
image:channel-construction-opening-1.png[]

Of course, for the commitment transaction Alice would need to get a signature from Bob before she can safely broadcast the funding transaction.
After publishing the funding transaction, instead of broadcasting the commitment transaction she will keep it in a safe place.
For this to work, Alice needs to be sure that the funding transaction cannot be published with a different transaction ID.
Such malleability was possible before the SegWit upgrade of Bitcoin.
We will discuss the details later but did not want to leave this unmentioned here.
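
The transaction ID that Alice relies on is simply the double-SHA256 hash of the serialized transaction. The sketch below uses placeholder bytes to illustrate the point: before SegWit, signatures were part of the hashed data, so a third party could re-encode a signature and change the txid without invalidating the transaction:

[source,python]
----
import hashlib

def txid(serialized_tx: bytes) -> str:
    """A Bitcoin txid is sha256(sha256(tx)), displayed in reverse byte order."""
    digest = hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()
    return digest[::-1].hex()

# Placeholder bytes standing in for a legacy (pre-SegWit) serialization,
# where the signature is part of the hashed data.
legacy_tx = b"version|inputs|SIGNATURE_A|outputs|locktime"
malleated = b"version|inputs|SIGNATURE_A_reencoded|outputs|locktime"

print(txid(legacy_tx))   # the txid Alice prepared her refund against ...
print(txid(malleated))   # ... changes if anyone re-encodes the signature

# With SegWit, signatures live in the witness, which is excluded from the
# txid calculation, so the funding txid Alice commits to cannot change.
----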
[NOTE]
====
This entire process might be surprising (... comparison with HTTP server push and AJAX...)
====
With SegWit and this first commitment transaction, this construction is actually secure for Alice.
We have seen the first of the three main properties that commitment transactions fulfill:
Commitment transactions refund channel participants in case the other side becomes unresponsive.
The second purpose is implicitly defined by the first purpose:
Commitment transactions split the capacity of the channel into the balances owned by each partner.
Initially this split means that all the capacity is naturally on the side of the partner who funded the channel.
Of course, during the lifetime of the channel the balance can change.
For example, Alice might want to send some funds to Bob.
This could happen because she wants to pay Bob or because she wants Bob to forward the funds through a path of channels to another merchant that she wants to pay.
Let us assume as an example that Alice wants to send 30k satoshi to Bob.
For now we can assume that, through some communication protocol, Alice and Bob would negotiate a double spend of the 100k satoshi funding transaction output.
The new commitment transaction, for which Alice and Bob would exchange signatures, would send 70k satoshi to Alice and 30k satoshi to Bob.
The situation can be seen in the following picture
image:channel-construction-opening-2.png[]
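
To make the arithmetic explicit, here is a toy sketch of how the channel balance evolves as new commitment transactions are negotiated. It only tracks numbers; signatures, transaction fees, and channel reserves are deliberately ignored:

[source,python]
----
# Toy balance bookkeeping for a 100k satoshi channel funded by Alice.
# Fees, channel reserves and HTLCs are deliberately ignored.

class ToyChannel:
    def __init__(self, capacity_sat: int, funder: str, other: str):
        self.capacity = capacity_sat
        self.balances = {funder: capacity_sat, other: 0}
        self.commitment_number = 0

    def pay(self, sender: str, receiver: str, amount_sat: int):
        if self.balances[sender] < amount_sat:
            raise ValueError("insufficient outbound balance")
        # Conceptually this is a new pair of commitment transactions
        # that double-spends the funding output with updated outputs.
        self.balances[sender] -= amount_sat
        self.balances[receiver] += amount_sat
        self.commitment_number += 1

channel = ToyChannel(100_000, "Alice", "Bob")
channel.pay("Alice", "Bob", 30_000)
print(channel.commitment_number, channel.balances)
# -> 1 {'Alice': 70000, 'Bob': 30000}
----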
Whenever Alice and Bob want to change the balance of the payment channel, they will negotiate a new commitment transaction.
Effectively they double-spend the funding transaction output.
But as the commitment transactions are not broadcast (as long as the channel stays open), they are able to do that.
At this point we want to emphasize that this section's title already suggests that this construction is insecure.
So the main question is:
What can go wrong with this insecure payment channel?
The thing that goes wrong and makes this construction insecure lies within the mechanics of Bitcoin.
The key innovation of Bitcoin was to solve the double spending problem of electronic coins.
After Alice and Bob have exchanged signatures for the second commitment transaction, Bob cannot rely on the fact that he really owns 30k satoshi.
Of course, he could close the channel by publishing the second commitment transaction, assigning 30k satoshi to an address that he controls.
But Alice could similarly broadcast the first commitment transaction and transfer the entire capacity of the channel back to an address that she controls.
As Bitcoin prevents double spending, miners will include only one of the two commitment transactions spending the funding output.
Thus we need to adapt the commitment transaction idea to create the ability to revoke an old commitment transaction.
Given that Alice and Bob each have a copy of the transactions and that Bob cannot control the data that Alice has stored on her hardware, this seems pretty hopeless.
Luckily, Bitcoin's scripting language at least allows commitment transactions to be constructed in a way that economically disincentivizes channel partners from publishing an outdated balance after they have negotiated a new one.
==== Secure payment channels via revocable commitment transactions
[NOTE]
====
In summary we can conclude that commitment transactions fulfill three purposes:
1. Refund channel participants in case the other side becomes unresponsive.
2. Split the capacity of the channel into the current balance that the peers have agreed upon.
3. Allow revocation of old states by means of a penalty via a revocable sequence maturity contract.
====
We have not yet explained how channel partners actually communicate to negotiate a new balance.
It may seem pretty amazing that this swap of a revocation secret for a signature can be made atomic.
In order to understand this, we first need to understand the general communication flow of how a channel is opened.
The actual negotiation of the new state is also done with HTLCs.
That is why we explain this only in the routing chapter and ask you to be patient.
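
As a rough mental model of the penalty mechanism described above, the following sketch (purely illustrative, not how any implementation stores its data) keeps one revocation secret per revoked commitment and reacts when a commitment transaction appears on-chain:

[source,python]
----
# Illustrative only: a node keeps the revocation secret for every commitment
# transaction its channel partner has revoked. If the partner ever broadcasts
# one of those old commitments, the secret is used to claim all channel funds
# via the penalty path before the partner's timelock expires.

revoked_commitments = {
    # commitment_number: revocation_secret (placeholder values)
    0: "secret-for-commitment-0",
    1: "secret-for-commitment-1",
}
latest_commitment_number = 2  # the only state that is still valid

def on_commitment_seen_onchain(commitment_number: int) -> str:
    if commitment_number == latest_commitment_number:
        return "partner force-closed with the latest state: wait out the delay"
    secret = revoked_commitments.get(commitment_number)
    if secret is None:
        return "unknown commitment: nothing we can do"
    return f"broadcast penalty transaction using {secret!r} and sweep all funds"

print(on_commitment_seen_onchain(1))
print(on_commitment_seen_onchain(2))
----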
[NOTE]
====
*TODO: Move this note to routing chapter?*
HTLCs fulfill the following purposes:
1. Make a conditional payment.
2. Help to update the balance in a channel.
3. Make payments through a path of channels atomic, meaning that peers along the path cannot steal funds.
====
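
Although the details are left to the routing chapter, the essence of an HTLC's two resolution paths can be sketched as follows. This is a conceptual simplification (using wall-clock time instead of block heights) and not the actual BOLT #3 HTLC scripts:

[source,python]
----
import hashlib
import time
from typing import Optional

def settle_htlc(payment_hash: bytes, preimage: Optional[bytes],
                expiry_unix: int, now_unix: int) -> str:
    """Conceptual HTLC resolution: claim with the preimage, or refund after
    the timeout. Real HTLCs use block heights, not wall-clock time."""
    if preimage is not None and hashlib.sha256(preimage).digest() == payment_hash:
        return "claimed by the recipient (preimage revealed)"
    if now_unix >= expiry_unix:
        return "refunded to the sender (timeout expired)"
    return "still pending"

secret = b"placeholder preimage, for illustration only"
payment_hash = hashlib.sha256(secret).digest()
now = int(time.time())

print(settle_htlc(payment_hash, secret, expiry_unix=now + 3600, now_unix=now))  # claimed
print(settle_htlc(payment_hash, None, expiry_unix=now - 3600, now_unix=now))    # refunded
----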
=== Opening a payment channel
Currently payment channels can only be opened by one side.
We call the process of creating a new payment channel "opening a payment channel".
Currently a payment channel can only exist between exactly two peers.
Therefore you might be surprised to learn that even though two users own and maintain the channel, the current construction requires only one user to open it.
This does not mean that only one peer is involved in opening a channel.
It means that only one peer, namely the one who opens the channel, provides the funds and capacity for the channel.
Let us assume for the remainder of the section that Alice wants to open a channel with Bob.
Opening a payment channel is not as easy as sending bitcoin to a 2-of-2 multisignature output.
In a fully functional payment channel the bitcoin is sent to a 2-of-2 multisignature address for which each owner controls one key.
Thus Alice needs to know a public key of Bob's, which will be part of the 2-of-2 multisignature address.
She will do that by sending Bob an `open_channel` message signaling her interest in opening a channel.
It does, however, mean that the user who opens the channel also has to provide the bitcoin to fund the channel.
Let us stick to our example where Alice opens a channel with Bob with a capacity of 100k satoshi.
This means that Alice provides 100k satoshi.
Alice will do that by creating a so-called funding transaction.
This transaction sends 100k satoshi from an address that she (or her Lightning node software) controls to a 2-of-2 multisig address for which she and Bob each know one secret key.
The amount of bitcoin that Alice sends to the multisig output is called the capacity of the payment channel.
Thus, for the remainder of the chapter, we assume in all examples that the payment channels we use already magically exist and that the two peers Alice and Bob already have all the necessary data at hand.
[NOTE]
====
Even though Alice and Bob both have a public node key for which they own the private key, opening a payment channel is not as easy as sending bitcoin to the 2-of-2 multisignature output derived from their two public keys.
Let us assume for a moment that Alice sends 100k satoshi to the multisig address resulting from her and Bob's public node IDs.
In that case Alice will never be able to get her funds back without the help of Bob.
Of course we want our payment channels to work in a way that Alice does not need to trust Bob.
Bob could, however, refuse to sign a transaction that sends those funds back to an address controlled by Alice.
He would be able to blackmail Alice into assigning a significant amount of those bitcoin to an output address that is controlled by him.
Thus Bob can't steal the coins from Alice directly, but he can threaten Alice with her coins being lost forever.
This example shows that, unfortunately, opening a channel will be a little bit more complex than just sending bitcoin to a multisignature address.
====
[NOTE]
====
@ -43,10 +224,45 @@ The importance of the segwit upgrade.
In order to avoid the reuse of addresses Alice and Bob will generate a new set of keys for the multisig address that they use to open the channel.
Alice needs to inform Bob which key she intends to use for their channel and ask him which key he intends to use.
She will do that by sending Bob an `open_channel` message signaling her interest in opening a channel.
This message contains a lot of additional data fields.
Most of them specify metadata which is necessary for the channel operation and can be safely ignored for now.
We will only look at the following ones:
* [chain_hash:chain_hash]
* [32*byte:temporary_channel_id]
* [u64:funding_satoshis]
* [point:funding_pubkey]
* [point:revocation_basepoint], [point:payment_basepoint], [point:delayed_payment_basepoint], [point:htlc_basepoint], [point:first_per_commitment_point]
With the `chain_hash` Alice signals that she intends to open the channel on the Bitcoin blockchain.
While the Lightning Network was certainly invented to scale the number of payments that can be conducted on the Bitcoin network, it is interesting to note that the network is designed in a way that allows channels to be built over various currencies.
If a node has channels in more than one currency, it is even possible to route payments through multi-asset channels.
However, this turns out to be a little bit tricky in reality, as the exchange rate between currencies might change, which might lead the forwarding node to wait for a better exchange rate before settling, or to abort the payment process.
During the opening process the final channel ID cannot be determined yet, so Alice needs to select a random temporary channel ID that she can use to identify the messages for this channel during the opening phase.
This design decision allows multiple channels to exist between two nodes, though currently only lnd supports this feature.
With `funding_satoshis` Alice tells Bob for how many satoshis she wishes to open the channel.
This information is necessary to construct the commitment transaction ...
Once the channel is open, Alice will be able to send 99k satoshi along this channel.
Bob, on the other side, will be able to receive 99k satoshi along that channel.
This means that initially Alice will not be able to receive bitcoin on this channel and that Bob initially will not be able to send bitcoin along that channel.
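
To visualize the subset of `open_channel` fields discussed above, here is a sketch of how a wallet might represent them before serializing. All values are placeholders, and the real message is a binary BOLT #2 wire format, not a Python dictionary:

[source,python]
----
import os

# Placeholder values; a real implementation derives these from its keys
# and serializes them into the binary wire format.
open_channel_msg = {
    "chain_hash": "<32-byte hash identifying the Bitcoin blockchain>",
    "temporary_channel_id": os.urandom(32).hex(),  # random ID used until the
                                                   # funding txid is known
    "funding_satoshis": 100_000,                   # capacity provided by Alice
    "funding_pubkey": "02" + "11" * 32,            # placeholder 2-of-2 funding key
    # Basepoints from which per-commitment keys are later derived:
    "revocation_basepoint": "02" + "22" * 32,
    "payment_basepoint": "02" + "33" * 32,
    "delayed_payment_basepoint": "02" + "44" * 32,
    "htlc_basepoint": "02" + "55" * 32,
    "first_per_commitment_point": "02" + "66" * 32,
}

for field, value in open_channel_msg.items():
    print(f"{field}: {value}")
----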
[NOTE]
====
The current construction could be generalized to multiparty channels and channel factories.
However the communication protocol would suffer from increased complexity.
====
Chapter overview:
* describes how channels are put together at the script+transaction level
* details how a channel is funded in the protocol
* details how a channel is updated in the protocol
** including key derivation!
* details how a channel is updated in the protocol (moved to routing!)
* describes what needs to happen when a channel is force closed
Relevant questions to answer:
@ -71,7 +287,7 @@ Relevant questions to answer:
* At a high level, how does the MAC protocol for 802.11 work?
* What steps need to happen for a new commitment state to be proposed and irrevocably committed for both parties?
* When is it safe for a party to forward a new HTLC to another peer? (may be out of scope for this chapter)
* Is it possible to commit a
* Is it possible to commit a
* How does the current MAC protocol for the LN work?
* What does an htlc_add message contain?
* How are HTLCs cancelled or settled?


@ -1,21 +1,23 @@
Chapter overview:
* explains the channel graph, and how it's modified+verified
Relevant questions to answer:
* Gossip announcements:
* How does a peer announce a new channel to the network?
* How do nodes verify a channel announcement? Why should they verify one in the first place?
* How does a node control _how_ a payment is routed through its channel?
* What knobs exist for a node to set in their channel updates?
* How often are channel updates sent?
* How does a node update its node in the channel graph? Do we we need to verify this?
* How quickly does an update propagate?
* What are "zombie" channels? Why do they matter?
- How does a peer announce a new channel to the network?
- How do nodes verify a channel announcement? Why should they verify one in the first place?
- How does a node control _how_ a payment is routed through its channel?
- What knobs exist for a node to set in their channel updates?
- How often are channel updates sent?
- How does a node update its node announcement in the channel graph? Do we need to verify this?
- How quickly does an update propagate?
- What are "zombie" channels? Why do they matter?
* Channel graph syncing:
* What are the various ways a node can sync the channel graph?
* Which is the most efficient?
* What is the "gossip query" system?
* Does a node need to keep up with all gossip updates? Does this change if they're a routing node or mobile client?
- What are the various ways a node can sync the channel graph?
- Which is the most efficient?
- What is the "gossip query" system?
- Does a node need to keep up with all gossip updates? Does this change if they're a routing node or mobile client?
* Protocol Extensions via Feature Bits and TLV:
* How can the channel graph be upgraded using feature bits and TLV fields?
* How does a receiver signal that they can accept MPP/AMP payments?
- How can the channel graph be upgraded using feature bits and TLV fields?
- How does a receiver signal that they can accept MPP/AMP payments?


@ -1,4 +1,4 @@
FROM ubuntu:18.04 AS bitcoind-base
FROM ubuntu:20.04 AS bitcoind-base
RUN apt update && apt install -yqq \
curl gosu jq bash-completion


@ -1,4 +1,4 @@
FROM ubuntu:18.04 AS eclair-base
FROM ubuntu:20.04 AS eclair-base
RUN apt update && apt install -yqq \
curl gosu jq bash-completion


@ -8,7 +8,7 @@ RUN go get -d github.com/lightningnetwork/lnd
WORKDIR $GOPATH/src/github.com/lightningnetwork/lnd
RUN make && make install
FROM ubuntu:18.04 AS lnd-run
FROM ubuntu:20.04 AS lnd-run
RUN apt update && apt install -yqq \
curl gosu jq bash-completion


@ -1,10 +1,10 @@
#!/bin/bash
echo Getting node IDs
alice_address=$(docker-compose exec -T Alice lncli -n regtest getinfo | jq .identity_pubkey)
bob_address=$(docker-compose exec -T Bob lightning-cli getinfo | jq .id)
wei_address=$(docker-compose exec -T Wei eclair-cli -s -j -p eclair getinfo| jq .nodeId)
gloria_address=$(docker-compose exec -T Gloria lncli -n regtest getinfo | jq .identity_pubkey)
alice_address=$(docker-compose exec -T Alice bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
bob_address=$(docker-compose exec -T Bob bash -c "lightning-cli getinfo | jq .id")
wei_address=$(docker-compose exec -T Wei bash -c "eclair-cli -s -j -p eclair getinfo| jq .nodeId")
gloria_address=$(docker-compose exec -T Gloria bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
# The jq command returns JSON strings enclosed in double-quote characters
# These will confuse the shell later, because double-quotes have special
@ -40,7 +40,7 @@ docker-compose exec -T Wei eclair-cli -p eclair connect --uri=${gloria_address}@
docker-compose exec -T Wei eclair-cli -p eclair open --nodeId=${gloria_address} --fundingSatoshis=1000000
echo Get 10k sats invoice from Gloria
gloria_invoice=$(docker-compose exec -T Gloria lncli -n regtest addinvoice 10000 | jq .payment_request )
gloria_invoice=$(docker-compose exec -T Gloria bash -c "lncli -n regtest addinvoice 10000 | jq .payment_request")
# Remove quotes
gloria_invoice=${gloria_invoice//\"}


@ -10,3 +10,107 @@ Relevant questions to answer:
* donation addrs
* keysend
* custom data
=== What information does an invoice contain?
A Lightning Network invoice is a request for payment issued by the receiver and contains all the information the sender needs to successfully execute the payment.
Usually it will be in the form of a QR code or an alphanumeric string that looks something like this:
_lnbc9150n1p05hx8upp5ug254f9nhymhu2kctm5j9qq28pvvfsqrdaj6fnxzhln023vyka6sdzz2pshjmt9de6zqen0wgsrjvf4ypcxj7r9d3ejqct5ypekzar0wd5xjuewwpkxzcm99cxqzjccqp2sp5k8nxp5jy26c00ny8asampc03z2edl3z784d80hz873g4jkkuqtvqrzjqgmkp5859l5tn0h6rlal5d44vlkl9r6hf03v6e3pnumr96rak85jqztsugqqkvcqqqqqqquyqqqqqqgq9q9qy9qsqwar8ak9hh4cu3evy6z0nzwpq7ax6mdums6utatejnzak78a9vfyq4ya9gnwsquaq5e257qc3fw2tdxqyk2k9fzgmldfd3urskyuzxmqpyy8tke_
Invoice encoding and decoding is defined by BOLT #11
footnote:[BOLT11 Github: https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md].
The above string is composed of two sections, split by a separator.
The first part, _lnbc9150n_, is the human-readable part of the invoice.
The _lnbc_ tells us that the invoice is for Lightning Network Bitcoin
(it could be for Lightning Testnet or a different cryptocurrency).
The _9150n_ tells us the invoice is for 9,150 nano-bitcoin, which is 915 satoshis (915,000 millisatoshis).
The last _1_ character in the string indicates the end of the human-readable section.
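
To make the amount encoding concrete, the following sketch converts the amount in the human-readable part into millisatoshis. It only handles the BOLT #11 amount multipliers and assumes a mainnet `lnbc` prefix; it is not a full invoice decoder:

[source,python]
----
# BOLT #11 amount multipliers: the amount in the human-readable part is a
# number of bitcoin scaled by one of these suffixes.
MULTIPLIERS = {
    "m": 10**-3,   # milli-bitcoin
    "u": 10**-6,   # micro-bitcoin
    "n": 10**-9,   # nano-bitcoin
    "p": 10**-12,  # pico-bitcoin
}

def hrp_amount_to_msat(hrp: str, prefix: str = "lnbc") -> int:
    """Convert the human-readable part, e.g. 'lnbc9150n', to millisatoshi."""
    amount_part = hrp[len(prefix):]
    if not amount_part:
        return 0  # amountless invoice
    multiplier = 1.0
    if amount_part[-1] in MULTIPLIERS:
        multiplier = MULTIPLIERS[amount_part[-1]]
        amount_part = amount_part[:-1]
    btc = int(amount_part) * multiplier
    return round(btc * 100_000_000 * 1000)  # BTC -> satoshi -> millisatoshi

msat = hrp_amount_to_msat("lnbc9150n")
print(msat, "msat =", msat // 1000, "satoshi")  # 915000 msat = 915 satoshi
----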
Everything after the _1_ is the data part and contains the following information:
* *Destination*: the ID of the node receiving the payment.
* *Timestamp*: the date and time the invoice was created, measured in seconds since 1970.
* *Payment Hash*: the hash of the payment pre-image. Pre-images were discussed in the earlier chapter on Routing.
* *Expiry Time*: the amount of time, in seconds, after which the invoice expires and can no longer be paid.
* *CLTV Delta*: the delta to be used in the final HTLC in the path. Discussed in the earlier chapter on Routing.
* *Signature*: a digital signature by the invoice issuer. If anything in the invoice is changed, the signature check will fail and the invoice will no longer be valid. This prevents attackers tampering with invoices.
* _(Optional)_ *Description*: a human-readable explanation of what the payment is for.
* _(Optional)_ *Backup Bitcoin Address*: an on-chain payment address in case payment of the invoice fails.
* _(Optional)_ *Routing Hints*: to assist the payer in finding a path for the payment. Discussed in the earlier chapter on Path Finding.
An invoice also contains other useful information.
In the next section, we'll break down the above invoice and identify each individual part.
==== Anatomy of a Lightning Invoice
If we enter the above invoice into an invoice decoding tool, such as https://lndecode.com/, we get the following output:
* *Network*: bitcoin mainnet
* *Amount*: 0.00000915 BTC
* *Date*: Sun, 30 Aug 2020 12:18:04 GMT
* *Payment Hash*: e2154aa4b3b9377e2ad85ee922800a3858c4c0036f65a4ccc2bfe6f54584b775
* *Description*: Payment for 915 pixels at satoshis.place.
* *Expiration Time*: 600 seconds
* *Min Final CLTV Expiry*: 10
* *Payment Secret*: b1e660d24456b0f7cc87ec3bb0e1f112b2dfc45e3d5a77dc47f451595adc02d8
* *Routing Info*:
** _Public Key_: 023760d0f42fe8b9befa1ffbfa36b567edf28f574be2cd66219f3632e87db1e920
** _Short Channel Id_: 0970e2000b330000
** _Fee Base Msat_: 900
** _Fee Proportional Millionths_: 1
** _CLTV Expiry Delta_: 40
* *Feature Bits*: 00101000001000000000
* *Signature*:
** _R value_: 77467ed8b7bd71c8e584d09f313820f74dadb79b86b8beaf3298bb6f1fa56248
** _S value_: 0a93a544dd0073a0a6554f03114b94b69804b2ac54891bfb52d8f070b138236c
** _Recovery Flag_: 1
* *Signing Data*: 6c6e6263393135306e0be9731f810d388552a92cee4ddf8ab617ba48a0028e16313000dbd9693330aff9bd51612ddd41a212830bcb6b2b73a103337b9101c989a903834bc32b6399030ba1039b0ba37b9b434b997383630b1b2970600a58c002a806963ccc1a488ad61ef990fd87761c3e22565bf88bc7ab4efb88fe8a2b2b5b805b00314808dd8343d0bfa2e6fbe87fefe8dad59fb7ca3d5d2f8b3598867cd8cba1f6c7a48025c388002ccc000000000e100000000400a02808504000
* *Checksum*: yy8tke
=== What are some unique things that can be done with LN?
**Micropayments**: The currency of the traditional financial system in most countries is divisible only to a certain extent and no further (e.g. $1 = 100 cents).
However, it is usually not viable to send small amounts, e.g. $1 or less, due to transaction fees and other friction in the system.
Bitcoin has similar issues due to transaction fees, and fees are likely to increase in the long-term.
The Lightning Network can reasonably accommodate payments of the value of 1 satoshi i.e. one hundred millionth of a Bitcoin.
Even at an obscenely high Bitcoin value of $1m per Bitcoin, this would still allow the transfer of 1 US cent worth of value.
As many Lightning implementations track values to the thousandth of a Satoshi (i.e. one milli-satoshi), payments could conceivably be even smaller than this.
This would allow for micropayment business models such as "pay-per-article", which are not viable in the current system.
**Anonymous Payments**: Bitcoin is pseudonymous at best and transactions are permanently stored on the public Bitcoin blockchain.
Hence there is always a risk that transactions can be linked back to users post-hoc.
Technologies like CoinJoin and Pay-to-EndPoint can assist in giving Bitcoin users a greater degree of anonymity but cannot completely solve this problem.
In contrast, users of the Lightning Network are not aware of other users' payments and, since channels can be private, they may not even be aware of other users' channels.
Users are only aware of other users' payments insofar as they assist in routing payments; in this case they are unaware of both the source and the destination of the payment.
As such, the Lightning Network has a strong use case for anonymous purchases.
This would be of particular benefit to online stores and exchanges that accept Bitcoin as malicious attackers can monitor their addresses on the Bitcoin network to try and determine how much bitcoin the businesses owns; something that is not possible on the Lightning Network
footnote:[One variant of this is called a "dust attack", whereby an attacker can send a very small amount of Bitcoin (called a "dust output") to an address it knows is owned by a store or exchange.
By monitoring where this small amount of bitcoin moves, it can determine which other addresses the exchange or store owns.
This kind of attack is not possible on the Lightning Network].
**Multiplayer Games**: Lightning Payments can be integrated into online and collaborative games.
One example of this is Satoshi's Place, an online billboard where users can pay 1 satoshi to paint 1 pixel on a million pixel canvas.
What emerges is a constantly changing picture that anyone can add to, remove from, or paint over by paying.
This example can be extended to many other kinds of collaborative games where users can pay to participate.
The Lightning Network can also be implemented directly into online games, such as MMORPGs, to facilitate in-game transactions.
As Lightning wallets and Lightning invoices can be built directly into the games themselves, this completely bypasses the need for credit cards and the traditional financial system.
While all of this is technically possible on Bitcoin, confirmation times and fees make this unfeasible.
Transactions are confirmed on average every ten minutes, although it could potentially take even longer.
This exposes the merchant to the risk of accepting unconfirmed transactions.
Lightning transactions, on the other hand, settle instantly and so are better from a user experience standpoint.
**Earning "interest" on Bitcoin trustlessly**
While Bitcoin may increase or decrease in value in terms of fiat currencies, it is an asset that does not offer a return in and of itself simply by holding it.
The amount of Bitcoin one holds remains constant, and actually decreases as one moves it around due to transaction fees.
Those wishing to earn a return on their holdings in Bitcoin terms can do so by opening channels and routing payments in return for routing fees.
In this way, users can earn a return (i.e. "interest") by locking their Bitcoin into channels and offering liquidity to other users wishing to transact on the Lightning Network.
Users doing so will need to pay the fees to open and close channels, as well as the cost of maintaining any hardware to run a Lightning Node.
However, as channels can be left open indefinitely, they could earn a profit as long as there are sufficient users of the Lightning Network such that their routing fees are in excess of their channel fees and maintenance costs over the long term.
This is trustless as users do not need to loan or send anyone their Bitcoin; they only need to take the risks of operating a Lightning node and storing Bitcoin in a hot wallet.
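
As a rough illustration of this break-even reasoning, the following sketch uses entirely made-up numbers for fees, traffic, and costs:

[source,python]
----
# All figures are hypothetical and only illustrate the break-even reasoning.
base_fee_msat = 1_000            # fee charged per forwarded payment
fee_rate_ppm = 200               # proportional fee: parts per million of amount

forwards_per_month = 5_000
average_amount_sat = 50_000

monthly_fee_income_sat = forwards_per_month * (
    base_fee_msat / 1000 + average_amount_sat * fee_rate_ppm / 1_000_000
)

onchain_cost_sat = 20_000        # opening/closing channels (amortized), in sat
hosting_cost_sat = 10_000        # node hardware/hosting per month, in sat terms

profit_sat = monthly_fee_income_sat - onchain_cost_sat - hosting_cost_sat
print(f"monthly routing income: {monthly_fee_income_sat:.0f} sat")
print(f"monthly profit:         {profit_sat:.0f} sat")
----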


@ -33,6 +33,14 @@ Some additional definitions, to be cleaned up and moved into alphabetic order ar
address::
A Bitcoin address looks like +1DSrfJdB2AnWaFNgSbv3MZC2m74996JafV+. It consists of a string of letters and numbers. It's really an encoded base58check version of a public key 160-bit hash. Just as you ask others to send an email to your email address, you would ask others to send you bitcoin to one of your Bitcoin addresses.
AMP::
Atomic Multipath Payments.
A method for payments where the sender can use more than one of their channels to forward a payment.
By default, a sender uses one channel to forward payment.
This can cause issues, for example, where a sender has two channels with an outgoing capacity of 0.5 BTC each but wishes to forward a payment of 0.8 BTC.
By default, this payment would fail without rebalancing.
With AMP, the sender can split the payment between these channels and either have the entire payment succeed or fail, with no partial payment possible.
Asymmetric Cryptographic System::
Asymmetric cryptography, or public-key cryptography, is a cryptographic system that uses pairs of keys: public keys which may be disseminated widely, and private keys which are known only to the owner.
The generation of such keys depends on cryptographic algorithms based on mathematical problems to produce one-way functions.
@ -118,14 +126,11 @@ cold storage::
Refers to keeping a reserve of bitcoin offline. Cold storage is achieved when Bitcoin private keys are created and stored in a secure offline environment. Cold storage is important for anyone with bitcoin holdings. Online computers are vulnerable to hackers and should not be used to store a significant amount of bitcoin.
Commitment Transaction::
Commitment Transactions encode the balance of the payment channel with the help of one output for each channel partner by spending the funding transaction.
When payments are being made or forwarded through the channel, a double-spend of the commitment transactions is made by creating a new pair of commitment transactions.
One output also holds a Revocable Sequence Maturity Contract which is made to disincentivize a channel partner to broadcast an old commitment transaction to the Bitcoin network.
This effectively invalidates old commitment transactions.
Broadcasting a commitment transaction forces an unilateral channel close.
Up to 483 Hashed Time Lock Contracts can be stored as additional outputs in the commitment transaction to allow the routing of payments.
In order to be able to ascribe blame in the case of unilateral channel closes, each channel partner has a slightly different commitment transaction.
// TODO probably don't explain the difference with the RSMC here
A commitment transaction is a Bitcoin transaction, signed by both channel partners, that encodes the latest balance of a channel.
Every time a new payment is made or forwarded using the channel, the channel balance will update, and a new commitment transaction will be signed by both parties.
Importantly, for a channel between Alice and Bob, both Alice and Bob keep their own version of the commitment transaction, which is also signed by the other party.
At any point, the channel can be closed by either Alice or Bob if they submit their commitment transaction to the Bitcoin blockchain.
Submitting an older (outdated) commitment transaction is considered "cheating" (i.e. a protocol breach) in the Lightning Network and can be penalized by the other party claiming all the funds in the channel for themselves.
computationally easy::
A problem is considered to be computationally easy if there exists an algorithm that is able to compute the solution to the problem rather quickly.
@ -182,10 +187,9 @@ ephemeral key::
Even if an ephemeral key leaks, only information about a single payment becomes public.
fees::
The sender of a transaction often includes a fee to the network for processing the requested transaction.
Not to be confused with a routing fee for payments on the Lightning Network.
Nodes on the Lightning Network are allowed to take a routing fee for forwarding payments.
The routing fee is the sum of a fixed _base_fee_ and a _fee_rate_ which depends on the payment amount.
In the context of Bitcoin, the sender of a transaction includes a fee paid to miners for including the transaction in a block.
In the context of the Lightning Network, nodes will charge routing fees for forwarding other users' payments.
Individual nodes can set their own fee policies; the fee is calculated as the sum of a fixed _base_fee_ and a _fee_rate_ that depends on the payment amount.
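
For example, the routing fee for a forwarded amount can be computed as follows; this is an illustrative sketch using the conventional millisatoshi units, not code from any particular implementation:

[source,python]
----
def routing_fee_msat(amount_msat: int, base_fee_msat: int,
                     fee_rate_ppm: int) -> int:
    """Fee = fixed base fee + proportional fee (parts per million of amount)."""
    return base_fee_msat + amount_msat * fee_rate_ppm // 1_000_000

# Forwarding 100,000 sat through a node charging 1 sat base fee and 200 ppm:
print(routing_fee_msat(100_000_000, base_fee_msat=1_000, fee_rate_ppm=200))
# -> 21000 msat (1 sat base + 20 sat proportional)
----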
funding transaction::
The funding transaction is used to open a payment channel.
@ -240,6 +244,12 @@ invoice::
Invoices include the payment hash, the amount, a description and the expiry time.
Invoices can also include a fallback Bitcoin address to which the payment can be made in case no route can be found, as well as hints for routing a payment through a private channel.
JIT Routing::
"Just in Time" Routing.
An alternative to source-based routing first proposed by co-author René Pickhardt.
With JIT routing, intermediary nodes along a path can pause an in-flight payment to rebalance their channels.
This might allow them to successfully forward payments that might otherwise have failed due to lack of outgoing capacity.
Lightning message::
A Lightning message is an encrypted data string that can be sent between two peers on the Lightning Network. Similar to other communication protocols Lightning messages consist of a header and a body. The header and the body have their own HMAC. This ensures that the headers of fixed length will also be encrypted and adversaries won't be able to figure out what messages are being sent by inspecting the length.
@ -247,10 +257,13 @@ Lightning Network, Lightning Network Protocol, Lightning Protocol::
The Lightning Network is a protocol on top of Bitcoin (or other cryptocurrencies).
It creates a network of payment channels which enables the trustless forwarding of payments through the network with the help of HTLCs and Onion Routing.
Other components of the Lightning Network are the gossip protocol, the transport layer, and payment requests.
The source code is availble at https://github.com/lightningnetwork.
The source code is available at https://github.com/lightningnetwork.
Lightning Network Node, Lightning Node, node::
TBD.
Lightning Network Node, Lightning Node::
A participant on the Lightning Network.
A Lightning user will run Lightning node software in order to interact with other Lightning nodes.
Lightning nodes have the ability to open channels with other nodes, send and receive payments, and route payments on behalf of other users.
Typically a Lightning node user will also run a Bitcoin node.
lnd::
Implementation of the Lightning Network Protocol by the San Francisco based company https://lightning.engineering[Lightning Labs].
@ -277,6 +290,10 @@ Neutrino::
node::
See Lightning Network Node
network capacity::
Lightning Network capacity is the amount of bitcoin locked into and circulated within the Lightning Network; it is the sum of the capacities of all channels.
It is a measure of the maximum value that can be transferred on the Lightning Network, because routing nodes need to have sufficient balances. It also reflects the usage of the Lightning Network to some extent, because the more value is circulated inside the Lightning Network, the more likely it is that more people are using it.
Noise_XK::
The template of the Noise protocol framework to establish an authenticated and encrypted communication channel between two peers of the Lightning Network.
X means that no public key needs to be known from the initiator of the connection.
@ -469,5 +486,18 @@ upstream payment::
wallet::
Software that holds all your Bitcoin addresses and secret keys. Use it to send, receive, and store your bitcoin.
watchtower::
Watchtowers are a security service on the Lightning Network that monitor channels.
In the case that one of the channel partners goes offline or loses their backup, a watchtower keeps its own backup and can restore their channel information.
They also monitor the Bitcoin blockchain and can submit a penalty transaction in the case that one of the partners tries to "cheat" by broadcasting an outdated state.
Watchtowers can be run by the channel partners themselves, or as a paid service offered by a third party.
Watchtowers have no control over the funds in the channels themselves.
zombie channel::
An open channel where one of the channel partners has gone permanently offline.
Zombie channels cannot be used to route payments and are nothing but a burden for the online partner.
Zombie channels are better off closed, but they are tricky to identify because the online partner can't always be sure whether the offline partner will stay offline.
Some contributed definitions have been sourced under a CC-BY license from the https://en.bitcoin.it/wiki/Main_Page[Bitcoin Wiki], https://en.wikipedia.org[Wikipedia], https://github.com/bitcoinbook/bitcoinbook[Mastering Bitcoin] or other open source documentation sources.

(This commit also adds several binary image files, including images/ln_port_check.png, images/probingtimes.ppm, images/raspiblitz.jpg, and images/rebalancing-1.png through images/rebalancing-7.png.)

@ -19,24 +19,25 @@ In this chapter you will learn how to set up each of the software packages for t
==== Using the command-line
The examples in this chapter, and more broadly in most of this book, use a command-line terminal. That means that you type commands into a terminal and receive text responses. Furthermore, the examples are demonstrated on an operating system based on the Linux kernel and GNU software system, specifically the latest long-term stable release of Ubuntu (Ubuntu 18.04 LTS). The majority of the examples can be replicated on other operating systems such as Windows or Mac OS, with small modifications to the commands. The biggest difference between operating systems is the _package manager_ which installs the various software libraries and their pre-requisites. In the given examples, we will use +apt+, which is the package manager for Ubuntu. On Mac OS, a common package manager used for open source development is Homebrew (command +brew+) found at https://brew.sh.
The examples in this chapter, and more broadly in most of this book, use a command-line terminal. That means that you type commands into a terminal and receive text responses. Furthermore, the examples are demonstrated on an operating system based on the Linux kernel and GNU software system, specifically the latest long-term stable release of Ubuntu (Ubuntu 20.04 LTS). The majority of the examples can be replicated on other operating systems such as Windows or Mac OS, with small modifications to the commands. The biggest difference between operating systems is the _package manager_ which installs the various software libraries and their pre-requisites. In the given examples, we will use +apt+, which is the package manager for Ubuntu. On Mac OS, a common package manager used for open source development is Homebrew (command +brew+) found at https://brew.sh.
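As a quick illustration of the difference between the two package managers (assuming you want the +jq+ utility used later in this chapter installed on your host system), the invocations look like this:
----
# On Ubuntu, using the apt package manager
$ sudo apt install jq

# On Mac OS, using the Homebrew package manager
$ brew install jq
----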
In most of the examples here, we will be building the software directly from the source code. While this can be quite challenging, it gives us the most power and control. You may choose to use docker containers, pre-compiled packages or other installation mechanisms instead if you get stuck!
[TIP]
====
((("$ symbol")))((("shell commands")))((("terminal applications")))In many of the examples in this chapter we will be using the operating system's command-line interface (also known as a "shell"), accessed via a "terminal" application. The shell will display a prompt; you type a command; and the shell responds with some text and a new prompt for your next command. The prompt may look different on your system, but in the following examples it is denoted by a +$+ symbol. In the examples, when you see text after a +$+ symbol, don't type the +$+ symbol but type the command immediately following it, then press Enter to execute the command. In the examples, the lines below each command are the operating system's responses to that command. When you see the next +$+ prefix, you'll know it's a new command and you should repeat the process.
((("$ symbol")))((("shell commands")))((("terminal applications")))In many of the examples in this chapter we will be using the operating system's command-line interface (also known as a "shell"), accessed via a "terminal" application. The shell will first display a prompt as an indicator that it is ready for your command. Then you type a command and press "Enter" to which the shell responds with some text and a new prompt for your next command. The prompt may look different on your system, but in the following examples it is denoted by a +$+ symbol. In the examples, when you see text after a +$+ symbol, don't type the +$+ symbol but type the command immediately following it. Then press the Enter key to execute the command. In the examples, the lines below each command are the operating system's responses to that command. When you see the next +$+ prefix, you'll know it is a new command and you should repeat the process.
====
To keep things consistent, we use the +bash+ shell in all command-line examples. While other shells will behave in a similar way, and you will be able to run all the examples without it, some of the shell scripts are written specifically for the +bash+ shell and may require some changes or customization to run in another shell. For consistency, you can install the +bash+ shell on Windows and Mac OS, and it comes installed by default on most Linux systems.
To keep things consistent, we use the +bash+ shell in all command-line examples. While other shells will behave in a similar way, and you will be able to run all the examples without it, some of the shell scripts are written specifically for the +bash+ shell and may require some changes or customizations to run in another shell. For consistency, you can install the +bash+ shell on Windows and Mac OS, and it comes installed by default on most Linux systems.
==== Donwloading the book repository
==== Downloading the book repository
All the code examples are available in the book's repository. The repository will be kept up-to-date, as much as possible, so you should always look for the latest version in the repository, instead of copying it from the printed book or ebook version of this test.
All the code examples are available in the book's online repository. Because the repository will be kept up-to-date as much as possible, you should always look for the latest version in the online repository, instead of copying it from the printed book or the ebook.
You can download the repository as a ZIP bundle by visiting +github.com/lnbook/lnbook+ and selecting the "Clone or Download" green button on the right.
You can download the repository as a ZIP bundle by visiting +github.com/lnbook/lnbook+ and selecting the green "Clone or Download" button on the right.
Alternatively, you can use the +git+ command, to create a version-controlled clone of the repository on your local computer. Git is a distributed version control system that is used by most developers to collaborate on software development and track changes to software repositories. Donwload and install +git+ by following the instructions on https://git-scm.com/.
Alternatively, you can use the +git+ command to create a version-controlled clone of the repository on your local computer. Git is a distributed version control system that is used by most developers to collaborate on software development and track changes to software repositories. Download and install +git+ by following the instructions on https://git-scm.com/.
To make a local copy of the repository on your computer, run the git command as follows:
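A minimal sketch of that clone command, assuming the +github.com/lnbook/lnbook+ address mentioned above, looks like this:
----
$ git clone https://github.com/lnbook/lnbook.git
----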
@ -50,13 +51,13 @@ You now have a complete copy of the book repository in a folder called +lnbook+.
=== Docker Containers
Many developers use a _container_, which is a type of virtual machine, to install a pre-configured operating system and application with all the necessary dependencies. Much of the Lightning software can also be installed using a container system such as _Docker_ (command +docker+) found at https://docker.com. Container installations are a lot easier, especially for those who are not used to a command-line environment.
Many developers use a _container_, which is a type of virtual machine, to install a pre-configured operating system and applications with all the necessary dependencies. Much of the Lightning software can also be installed using a container system such as _Docker_ (command +docker+) found at https://docker.com. Container installations are a lot easier, especially for those who are not used to a command-line environment.
The book's repository contains a collection of docker containers that can be used to set up a consistent development environment to practice and replicate the examples on any system. Because the container is a complete operating system that runs with a consistent configuration, you can be sure that the examples will work on your computer and not worry about dependencies, library versions or differences in configuration.
The book's repository contains a collection of docker containers that can be used to set up a consistent development environment to practice and replicate the examples on any system. Because the container is a complete operating system that runs with a consistent configuration, you can be sure that the examples will work on your computer and need not worry about dependencies, library versions or differences in configuration.
Docker containers are often optimized to be small (less disk space). However, in this book we are using containers to _standardize_ the environment and make it consistent for all readers. Furthermore, these containers are not meant to be used to run services in the background. Instead, they are meant to be used to test the examples and learn by interacting with the software. For these reasons, the containers are quite large and come with a lot of development tools and utilities. Also, the containers are built on Ubuntu, instead of the Alpine distribution (more commonly used for Linux containers), as we want to work with a distribution that is familiar to many developers rather than one that is lightweight.
Docker containers are often optimized to be small, i.e. occupy the minimum disk space. However, in this book we are using containers to _standardize_ the environment and make it consistent for all readers. Furthermore, these containers are not meant to be used to run services in the background. Instead, they are meant to be used to test the examples and learn by interacting with the software. For these reasons, the containers are quite large and come with a lot of development tools and utilities. Commonly the Alpine distribution is used for Linux containers due to their reduced size. Nonetheless, we provide containers built on Ubuntu because more developers are familiar with Ubuntu, and this familiarity is more important to us than size.
You can find the latest container definitions and build configurations in the book's repository under the +code/docker+ folder. Each container is in a separate folder beneath:
You can find the latest container definitions and build configurations in the book's repository under the +code/docker+ folder. Each container is in a separate folder as can be seen below:
//// $ tree -F --charset=asciii code
[docker-dir-list]
@ -107,20 +108,20 @@ code
==== Installing Docker
Before we begin, you should install the docker container system on your computer. Docker is an open system that is distributed for free as a _Community Edition_, for many different operating systems including Windows, Mac OS and Linux. The Windows and Mac versions are called _Docker Desktop_, which is GUI desktop application and command-line tools, and the Linux version is called _Docker Engine_, which is a server daemon and command-line tools. We will be using the command-line tools, which are identical across all platforms.
Before we begin, you should install the docker container system on your computer. Docker is an open system that is distributed for free as a _Community Edition_ for many different operating systems including Windows, Mac OS and Linux. The Windows and Mac versions are called _Docker Desktop_ and consist of a GUI desktop application and command-line tools. The Linux version is called _Docker Engine_ and is comprised of a server daemon and command-line tools. We will be using the command-line tools, which are identical across all platforms.
Go ahead and install Docker for your operating system by following the instructions to _"Get Docker"_ from the Docker website found here:
https://docs.docker.com/get-docker/
Select your operating system from the list, and follow the instructions to install.
Select your operating system from the list and follow the installation instructions.
[TIP]
====
If you install on Linux, follow the post-installation instructions to ensure you can run Docker as a regular user instead of root. Otherwise, you will need to prefix the +docker+ command with +sudo+, running it as root like: +sudo docker+.
If you install on Linux, follow the post-installation instructions to ensure you can run Docker as a regular user instead of user _root_. Otherwise, you will need to prefix all +docker+ commands with +sudo+, running them as root like: +sudo docker+.
====
Once you have Docker installed, you can test your installation by running the demo container +hello-world+, like this:
Once you have Docker installed, you can test your installation by running the demo container +hello-world+ like this:
[docker-hello-world]
----
@ -134,7 +135,7 @@ This message shows that your installation appears to be working correctly.
==== Basic docker commands
In this chapter we use docker quite extensively. We will be using the following docker commands and arguments:
In this chapter we use +docker+ quite extensively. We will be using the following +docker+ commands and arguments:
*Building a container*
@ -158,11 +159,11 @@ docker run -it [--network netname] [--name cname] tag
docker exec cname command
----
...where +cname+ is the name we gave the container in the run command, and +command+ is an executable or script that we want to run inside the container.
...where +cname+ is the name we gave the container in the +run+ command, and +command+ is an executable or script that we want to run inside the container.
*Stopping a container*
In most cases, if we are running a container in an _interactive_ and _terminal_ mode, with the +i+ and +t+ flags (combined as +-it+), the container can be stopped by simply pressing +CTRL-C+, or exiting the shell with +exit+ or +CTRL-D+. If the container does not exit, you can stop it from another terminal, like this:
In most cases, if we are running a container in an _interactive_ as well as _terminal_ mode, i.e. with the +i+ and +t+ flags (combined as +-it+) set, the container can be stopped by simply pressing +CTRL-C+ or by exiting the shell with +exit+ or +CTRL-D+. If a container does not terminate, you can stop it from another terminal like this:
----
docker stop cname
@ -172,7 +173,7 @@ docker stop cname
*Deleting a container by name*
If you name a container, instead of letting docker name it randomly, you cannot use that name again until the container is deleted. Docker will return an error like this:
If you name a container instead of letting docker name it randomly, you cannot reuse that name until the container is deleted. Docker will return an error like this:
----
docker: Error response from daemon: Conflict. The container name "/bitcoind" is already in use...
----
@ -183,7 +184,7 @@ To fix this, delete the existing instance of the container:
docker rm cname
----
...where +cname+ is the name we have the container (+bitcoind+ in the example error message)
...where +cname+ is the name assigned to the container (+bitcoind+ in the example error message).
*List running containers*
@ -191,7 +192,7 @@ docker rm cname
docker ps
----
These basic docker commands will be enough to get you started and will allow you to run all the examples in this chapter. Let's see them in action, in the following sections.
These basic docker commands will be enough to get you started and will allow you to run all the examples in this chapter. Let's see them in action in the following sections.
=== Bitcoin Core and Regtest
@ -199,19 +200,19 @@ Most of the Lightning node implementations need access to a full Bitcoin node in
Installing a full Bitcoin node and synching the Bitcoin blockchain is outside the scope of this book and is a relatively complex endeavor in itself. If you want to try it, refer to _Mastering Bitcoin_ (https://github.com/bitcoinbook/bitcoinbook), "Chapter 3: Bitcoin Core: The Reference Implementation" which discusses the installation and operation of a Bitcoin node.
A Bitcoin node can also be operated in _regtest_ mode, where the node creates a local simulated Bitcoin blockchain for testing purposes. In the following examples, we will be using regtest mode to allow us to demonstrate lightning without having to synchronize a Bitcoin node, or risk any funds.
A Bitcoin node can be operated in _regtest_ mode, where the node creates a local simulated Bitcoin blockchain for testing purposes. In the following examples we will be using the +regtest+ mode to allow us to demonstrate Lightning without having to synchronize a Bitcoin node or risk any funds.
The container for Bitcoin Core is bitcoind that runs Bitcoin Core in regtest mode and mines a new block every 10 seconds. It's RPC port is exposed on port 18443 and accessible for RPC calls with the username regtest and the password regtest. You can also access it with an interactive shell and run +bitcoin-cli+ commands locally.
The container for Bitcoin Core is +bitcoind+. It is configured to run Bitcoin Core in +regtest+ mode and to mine a new block every 10 seconds. Its RPC port is exposed on port 18443 and accessible for RPC calls with the username +regtest+ and the password +regtest+. You can also access it with an interactive shell and run +bitcoin-cli+ commands locally.
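As an aside, if you also publish the RPC port to your host when you start the container (for example by adding +-p 18443:18443+ to the +docker run+ command shown later; the examples in this chapter don't require it), any JSON-RPC client can reach the node. A minimal sketch using +curl+ and the +regtest+ credentials might look like this, with the block count in the result varying on your system:
----
$ curl -s --user regtest:regtest \
    --data-binary '{"jsonrpc": "1.0", "id": "lnbook", "method": "getblockcount", "params": []}' \
    -H 'content-type: text/plain;' http://127.0.0.1:18443/
{"result":189,"error":null,"id":"lnbook"}
----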
===== Building the Bitcoin Core Container
Let's start by building and running the bitcoind container. First, we use the +docker build+ command to build it:
Let us start by building and running the +bitcoind+ container. First, we use the +docker build+ command to build it:
----
$ cd code/docker
$ docker build -t lnbook/bitcoind bitcoind
Sending build context to Docker daemon 12.29kB
Step 1/25 : FROM ubuntu:18.04 AS bitcoind-base
Step 1/25 : FROM ubuntu:20.04 AS bitcoind-base
---> c3c304cb4f22
Step 2/25 : RUN apt update && apt install -yqq curl gosu jq bash-completion
@ -226,7 +227,7 @@ Successfully tagged lnbook/bitcoind:latest
===== Running the Bitcoin Core Container
Next, let's run the bitcoind container and have it mine some blocks. We use the +docker run+ command, with the flags for _interactive (i)_ and _terminal (t)_, and the +name+ argument to give the running container a custom name:
Next, let's run the +bitcoind+ container and have it mine some blocks. We use the +docker run+ command, with the flags for _interactive (i)_ and _terminal (t)_, and the +name+ argument to give the running container a custom name:
----
$ docker run -it --name bitcoind lnbook/bitcoind
@ -252,13 +253,13 @@ Mining 1 block every 10 seconds
Balance: 100.00000000
----
As you can see, bitcoind starts up and mines 101 blocks to get the chain started. This is because under the bitcoin consensus rules, newly mined bitcoin is not spendable until 100 blocks have elapsed. By mining 101 blocks, we make the 1st block's coinbase spendable. After that initial mining activity, we mine a new block every 10 seconds, to keep the chain moving forward.
As you can see, bitcoind starts up and mines 101 simulated blocks to get the chain started. This is because under the bitcoin consensus rules, newly mined bitcoin is not spendable until 100 blocks have elapsed. By mining 101 blocks, we make the first block's coinbase spendable. After that initial mining activity, a new block is mined every 10 seconds to keep the chain moving forward.
For now, there are no transactions. But we now have some test bitcoin that has been mined in the wallet and is available to spend. When we connect some Lightning nodes to this chain, we will send some bitcoin to their wallets so that we can open some Lightning channels between the Lightning nodes.
For now, there are no transactions. But we have some test bitcoin that has been mined in the wallet and is available to spend. When we connect some Lightning nodes to this chain, we will send some bitcoin to their wallets so that we can open some Lightning channels between the Lightning nodes.
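If you want to check the test funds yourself, you can query the wallet balance with the +docker exec+ command introduced in the next section; the exact amount will depend on how long the container has been mining:
----
$ docker exec bitcoind bitcoin-cli -datadir=/bitcoind getbalance
100.00000000
----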
===== Interacting with the Bitcoin Core Container
In the mean time, we can also interact with the bitcoind container by sending it shell commands. The container is sending a log file to the terminal, displaying the mining process of the bitcoind process. To interact with the shell we can issue commands in another terminal, using the +docker exec+ command. Since we previously named the running container with the +name+ argument, we can refer to it with that name when we run the +docker exec+ command. First, let's run an interactive +bash+ shell:
In the meantime, we can also interact with the +bitcoind+ container by sending it shell commands. The container is streaming its log to the terminal, showing the progress of the bitcoind process and the mining script. To interact with its shell we can issue commands in another terminal, using the +docker exec+ command. Since we previously named the running container with the +name+ argument, we can refer to it by that name when we run the +docker exec+ command. First, let's run an interactive +bash+ shell:
----
$ docker exec -it bitcoind /bin/bash
@ -272,9 +273,9 @@ root@e027fd56e31a:/bitcoind# ps x
root@e027fd56e31a:/bitcoind#
----
Running the interactive shell puts us "inside" the container and logged in as the +root+ user, as we can see from the new shell prompt +root@e027fd56e31a:/bitcoind#+. If we issue the +ps x+ command to see what processes are running, we see both bitcoind and the script +mine.sh+ are running in the background. To exit this shell, type +CTRL-D+ or +exit+ and you will be returned to your operating system prompt.
Running the interactive shell puts us "inside" the container. It logs in as user +root+, as we can see from the prefix +root@+ in the new shell prompt +root@e027fd56e31a:/bitcoind#+. If we issue the +ps x+ command to see what processes are running, we see both +bitcoind+ and the script +mine.sh+ are running in the background. To exit this shell, type +CTRL-D+ or +exit+ and you will be returned to your operating system prompt.
Instead of running an interactive shell, we can also issue a single command that is executed inside the container, for example to run the +bitcoin-cli+ command, like this:
Instead of running an interactive shell, we can also issue a single command that is executed inside the container. In the following example we run the +bitcoin-cli+ command to obtain information about the current blockchain state:
----
$ docker exec bitcoind bitcoin-cli -datadir=/bitcoind getblockchaininfo
@ -290,18 +291,18 @@ $ docker exec bitcoind bitcoin-cli -datadir=/bitcoind getblockchaininfo
$
----
As you can see, we need to tell +bitcoin-cli+ where the bitcoind data directory is, with the +datadir+ argument. We can then issue RPC commands to the Bitcoin Core node and get JSON encoded results.
As you can see, we need to tell +bitcoin-cli+ where the bitcoind data directory is by using the +datadir+ argument. We can then issue RPC commands to the Bitcoin Core node and get JSON encoded results.
All the docker containers also have +jq+ installed, which is a command-line JSON encoder/decoder, to help us process JSON on the command-line or from inside scripts. You can send the JSON output of any command to +jq+ using the +|+ character ("pipe" notation). For example, if we pipe the +getblockchaininfo+ JSON result we got above, we can extract the specific field +blocks+ like this:
All our docker containers have a command-line JSON encoder/decoder named +jq+ preinstalled. +jq+ helps us to process JSON-formatted data via the command-line or from inside scripts. You can send the JSON output of any command to +jq+ using the +|+ character. This character as well as this operation is called a "pipe". Let's apply a +pipe+ and +jq+ to the previous command as follows:
----
$ docker exec bitcoind bitcoin-cli -datadir=/bitcoind getblockchaininfo | jq .blocks
$ docker exec bitcoind bash -c "bitcoin-cli -datadir=/bitcoind getblockchaininfo | jq .blocks"
189
----
The +jq+ JSON decoder extract the result "189" from the +getblockchaininfo+, which we could use in a subsequent command.
+jq .blocks+ instructs the +jq+ JSON decoder to extract the field +blocks+ from the +getblockchaininfo+ result. In our case, it extracts and prints the value 189, which we could use in a subsequent command.
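For instance, one way to reuse that value (a shell sketch, not part of the book's scripts) is to capture it in a variable:
----
$ blocks=$(docker exec bitcoind bash -c "bitcoin-cli -datadir=/bitcoind getblockchaininfo | jq .blocks")
$ echo "The regtest chain is now at height ${blocks}"
The regtest chain is now at height 189
----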
As you will see in the following sections, we can run several containers and then interact with them individually, issuing commands to extract information (such as the Lightning node public key), or to take an action (open a Lightning channel to another node). The +docker run+ and +docker exec+, together with +jq+ for JSON decoding are all we need to build a working Lightning Network that mixes many different node implementations and allows us to try out various experiments, all on our own computer.
As you will see in the following sections, we can run several containers at the same time and then interact with them individually. We can issue commands to extract information such as the Lightning node public key or to take actions such as opening a Lightning channel to another node. The +docker run+ and +docker exec+ commands together with +jq+ for JSON decoding are all we need to build a working Lightning Network that mixes many different node implementations. This enables us to try out diverse experiments on our own computer.
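As a small preview (assuming the c-lightning container from the next section is up and running), extracting a node's public key could look something like this, printing the key in hexadecimal:
----
$ docker exec c-lightning bash -c "lightning-cli getinfo | jq -r .id"
----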
=== The c-lightning Lightning node project
@ -315,7 +316,7 @@ In the following sections, we will build a docker container that runs a c-lightn
The c-lightning software distribution has a docker container, but it is designed for running c-lightning in production systems and along side a bitcoind node. We will be using a somewhat simpler container configured to run c-lightning for demonstration purposes.
We start by building the c-lightning docker container, from the book's files which you previously downloaded into a directory named +lnbook+. As before, we will use the +docker build+ command, in the +code/docker+ sub-directory. We will tag the container image with the tag +lnbook/c-lightning+, like this:
We start by building the c-lightning docker container from the book's files which you previously downloaded into a directory named +lnbook+. As before, we will use the +docker build+ command in the +code/docker+ sub-directory. We will tag the container image with the tag +lnbook/c-lightning+ like this:
----
$ cd code/docker
@ -334,16 +335,16 @@ Successfully built e63f5aaa2b16
Successfully tagged lnbook/c-lightning:latest
----
Our container is now built and ready to run. However, before we run the c-lightning container, we need to start the bitcoind container in another terminal, as c-lightning depends on bitcoind. We will also need to set up a docker network that allows the containers to connect to each other, as if they are on the same local area network.
Our container is now built and ready to run. However, before we run the c-lightning container, we need to start the bitcoind container in another terminal as c-lightning depends on bitcoind. We will also need to set up a docker network that allows the containers to connect to each other as if residing on the same local area network.
[TIP]
====
Docker containers can "talk" to each other over a virtual local-area network managed by the docker system. Each container can also have a custom name and other containers can use that name to resolve its IP address and easily connect to it.
Docker containers can "talk" to each other over a virtual local area network managed by the docker system. Each container can have a custom name and other containers can use that name to resolve its IP address and easily connect to it.
====
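To see this name resolution in action once both the +bitcoind+ and +c-lightning+ containers are running on the +lnbook+ network (set up below), you could look up one container's address from inside the other; this is just an illustrative sketch and the address will differ on your system:
----
$ docker exec c-lightning getent hosts bitcoind
172.18.0.2      bitcoind
----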
==== Setting up a docker network
Once a docker network is set up, docker will keep it running on our local computer every time docker starts, for example after rebooting. So we only need to set up a network once, using the +docker network create+ command. The network name itself is not important, but has to be unique on our computer. By default, docker has three networks named +host+, +bridge+, and +none+. We will name our new network +lnbook+ and create it like this:
Once a docker network is set up, docker will activate the network on our local computer every time docker starts, e.g. after rebooting. So we only need to set up a network once by using the +docker network create+ command. The network name itself is not important, but it has to be unique on our computer. By default, docker has three networks named +host+, +bridge+, and +none+. We will name our new network +lnbook+ and create it like this:
----
$ docker network create lnbook
@ -360,13 +361,13 @@ As you can see, running +docker network ls+ gives us a listing of the docker net
==== Running the bitcoind and c-lightning containers
Let's start the bitcoind and c-lightning containers and connect them to the +lnbook+ network. To run a container in a specific network, we must pass the +network+ argument to +docker run+. To make it easy for containers to find each other, we will also give each one a name with the +name+ argument. We start bitcoind like this:
The next step is to start the bitcoind and c-lightning containers and connect them to the +lnbook+ network. To run a container in a specific network, we must pass the +network+ argument to +docker run+. To make it easy for containers to find each other, we will also give each one a name with the +name+ argument. We start bitcoind like this:
----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----
You should see bitcoind start up and start mining blocks every 10 seconds. Leave it running and open a new terminal window to start c-lightning. We use a similar +docker run+ command, with the +network+ and +name+ arguments to start c-lightning, like this:
You should see bitcoind start up and start mining blocks every 10 seconds. Leave it running and open a new terminal window to start c-lightning. We use a similar +docker run+ command with the +network+ and +name+ arguments to start c-lightning as follows:
----
$ docker run -it --network lnbook --name c-lightning lnbook/c-lightning
@ -382,9 +383,9 @@ Funding c-lightning wallet
----
The c-lightning container starts up and connects to the bitcoind container over the docker network. First, our c-lightning node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script will send an RPC command to the bitcoind node, creating a transaction that funds the c-lightning wallet with 10 test BTC. Our c-lightning node is not only running, but it has some bitcoin to play with!
The c-lightning container starts up and connects to the bitcoind container over the docker network. First, our c-lightning node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script will send an RPC command to the bitcoind node which creates a transaction that funds the c-lightning wallet with 10 test BTC. Now our c-lightning node is not only running, but it even has some test bitcoin to play with!
As we demonstrated with the bitcoind container, we can issue commands to our c-lightning container in another terminal, to extract information, open channels etc. The command that allows us to issue command-line instructions to the c-lightning node is called +lightning-cli+. Let's get the node info, in another terminal window, using the +docker exec+ command:
As we demonstrated with the bitcoind container, we can issue commands to our c-lightning container in another terminal in order to extract information, open channels etc. The command that allows us to issue command-line instructions to the c-lightning node is called +lightning-cli+. To get the node info use the following +docker exec+ command in another terminal window:
----
$ docker exec c-lightning lightning-cli getinfo
@ -421,7 +422,7 @@ $ docker exec c-lightning lightning-cli getinfo
We now have our first Lightning node running on a virtual network and communicating with a test bitcoin blockchain. Later in this chapter we will start more nodes and connect them to each other to make some Lightning payments.
In the next section we will also look at how to download, configure and compile c-lightning directly from the source code. This is an optional and advanced step that will teach you how to use the build tools and allow you to make modifications to c-lighting source code. With this knowledge, you can write some code, fix some bugs, or create a plugin for c-lightning. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
In the next section we will also look at how to download, configure and compile c-lightning directly from the source code. This is an optional and advanced step that will teach you how to use the build tools and allow you to make modifications to the c-lightning source code. With this knowledge you can write some code, fix some bugs, or create a plugin for c-lightning. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
==== Installing c-lightning from source code
@ -431,7 +432,7 @@ https://github.com/ElementsProject/lightning/blob/master/doc/INSTALL.md
==== Installing prerequisite libraries and packages
The first step, as is often the case, is the installation of pre-requisite libraries. We use the +apt+ package manager to install these:
The common first step is the installation of prerequisite libraries. We use the +apt+ package manager to install these:
----
$ sudo apt-get update
@ -447,7 +448,8 @@ Reading package lists... Done
$ sudo apt-get install -y \
autoconf automake build-essential git libtool libgmp-dev \
libsqlite3-dev python python3 python3-mako net-tools zlib1g-dev \ libsodium-dev gettext
libsqlite3-dev python python3 python3-mako net-tools zlib1g-dev \
libsodium-dev gettext
Reading package lists... Done
Building dependency tree
@ -457,20 +459,20 @@ The following additional packages will be installed:
[...]
Setting up libgcc-7-dev:amd64 (7.4.0-1ubuntu1~18.04.1) ...
Setting up cpp-7 (7.4.0-1ubuntu1~18.04.1) ...
Setting up libsodium-dev:amd64 (1.0.16-2) ...
Setting up libstdc++-7-dev:amd64 (7.4.0-1ubuntu1~18.04.1) ...
Setting up libsigsegv2:amd64 (2.12-2) ...
Setting up libltdl-dev:amd64 (2.4.6-14) ...
Setting up python2 (2.7.17-2ubuntu4) ...
Setting up libsodium-dev:amd64 (1.0.18-1) ...
[...]
$
----
After a few minutes and a lot of on-screen activity, you will have installed all the necessary packages and libraries. Many of these libraries are also used by other Lightning packages and for software development in general.
After a few minutes and a lot of on-screen activity, you will have installed all the necessary packages and libraries. Many of these libraries are also used by other Lightning packages and needed for software development in general.
==== Copying the c-lightning source code
Next, we will copy the latest version of c-lightning from the source code repository. To do this, we will use the +git clone+ command, which clones a version-controlled copy onto your local machine, allowing you to keep it synchronized with subsequent changes without having to download the whole thing again:
Next, we will copy the latest version of c-lightning from the source code repository. To do this, we will use the +git clone+ command which clones a version-controlled copy onto your local machine thereby allowing you to keep it synchronized with subsequent changes without having to download the whole repository again:
----
$ git clone https://github.com/ElementsProject/lightning.git
@ -486,15 +488,16 @@ $ cd lightning
----
We now have a copy of c-lightning, cloned into the +lightning+ subfolder, and we have used the +cd+ (change directory) command to enter that subfolder.
We now have a copy of c-lightning cloned into the +lightning+ subfolder, and we have used the +cd+ (change directory) command to enter that subfolder.
==== Compiling the c-lightning source code
Next, we use a set of _build scripts_ that are commonly available on many open source projects. These are +configure+ and +make+, and they allow us to:
Next, we use a set of _build scripts_ that are commonly available in many open source projects. These _build scripts_ use the +configure+ and +make+ commands, which allow us to:
* Select the build options and check necessary dependencies (+configure+).
* Build and install the executables and libraries (+make+).
Running the +configure+ with the +help+ option will show us all the options that we can set:
Running the +configure+ script with the +help+ option will show us all the available options:
----
$ ./configure --help
@ -517,7 +520,7 @@ Options include:
Compile with address-sanitizer
----
We don't need to change any of the defaults for this example, so we run +configure+ again, without any options, to set the defaults:
We don't need to change any of the defaults for this example. Hence we run +configure+ again without any options to use the defaults:
----
$ ./configure
@ -538,7 +541,7 @@ Setting TEST_NETWORK... regtest
$
----
Next, we use the +make+ command to build the libraries, components and executables of the c-lightning project. This part will take several minutes to complete and will use your computers CPU and disk aggressively, so expect some noise from the fans! Running make:
Next, we use the +make+ command to build the libraries, components, and executables of the c-lightning project. This part will take several minutes to complete and will use your computer's CPU and disk heavily. Expect some noise from the fans! Run +make+:
----
$ make
@ -551,7 +554,7 @@ cc -Og ccan-asort.o ccan-autodata.o ccan-bitmap.o ccan-bitops.o ccan-...
----
If all goes well, you will not see any +ERROR+ message stopping the execution of the above command. The c-lightning software package has been compiled from source and we are now ready to install the executable packages:
If all goes well, you will not see any +ERROR+ message stopping the execution of the above command. The c-lightning software package has been compiled from source and we are now ready to install the executable components we created in the previous step:
----
$ sudo make install
@ -568,7 +571,7 @@ install cli/lightning-cli lightningd/lightningd /usr/local/bin
[...]
----
Let's check and see if the +lightningd+ and +lightning-cli+ commands have been installed correctly, asking each for their version information:
In order to verify that the +lightningd+ and +lightning-cli+ commands have been installed correctly we will ask each executable for its version information:
----
$ lightningd --version
@ -577,19 +580,19 @@ $ lightning-cli --version
v0.8.1rc2
----
You may see a different version from that shown above, as the software continues to evolve long after this book is printed. However, no matter what version you see, the fact that the commands execute and show you version information means that you have succeeded in building the c-lightning software.
You may see a different version from that shown above as the software continues to evolve long after this book is published. However, no matter what version you see, the fact that the commands execute and respond with version information means that you have succeeded in building the c-lightning software.
=== The Lightning Network Daemon (LND) node project
The Lightning Network Daemon (LND) - is a complete implementation of a Lightning Network node by Lightning Labs. The LND project provides a number of executable applications, including +lnd+, (the daemon itself) and +lncli+ (the command-line utility). LND has several pluggable back-end chain services including btcd (a full-node), bitcoind (Bitcoin Core), and neutrino (a new experimental light client). LND is written in the Go programming language (golang). The project is open source and developed collaboratively on Github:
The Lightning Network Daemon (LND) is a complete implementation of a Lightning Network node by Lightning Labs. The LND project provides a number of executable applications, including +lnd+ (the daemon itself) and +lncli+ (the command-line utility). LND has several pluggable back-end chain services including btcd (a full-node), bitcoind (Bitcoin Core), and neutrino (a new experimental light client). LND is written in the Go programming language. The project is open source and developed collaboratively on Github:
https://github.com/LightningNetwork/lnd
In the next few sections we will build a docker container to run LND, build LND from source code and learn how to configure and run LND.
In the next few sections we will build a docker container to run LND, build LND from source code, and learn how to configure and run LND.
==== Building LND as a docker container
If you've followed the previous examples in this chapter, you should be quite familiar with the basic docker commands by now. In this section we will repeat them to build the LND container. The container is located in +code/docker/lnd+. We start in a terminal, by switching the working directory to +code/docker+ and issuing the +docker build+ command:
If you have followed the previous examples in this chapter, you should be quite familiar with the basic docker commands by now. In this section we will repeat them to build the LND container. The container is located in +code/docker/lnd+. We issue commands in a terminal to change the working directory to +code/docker+ and perform the +docker build+ command:
----
$ cd code/docker
@ -609,22 +612,22 @@ Successfully tagged lnbook/lnd:latest
----
Our container is now built and ready to run. As with the c-lightning container we built previously, the LND container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect LND to it via a docker network. We've already set up a docker network called +lnbook+ and will be using that again here.
Our container is now built and ready to run. As with the c-lightning container we built previously, the LND container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect LND to it via a docker network. We have already set up a docker network called +lnbook+ previously and will be using that again here.
[TIP]
====
A single bitcoind container can serve many many Lightning nodes. Normally, each node operator would run a Lightning node and Bitcoin node on their own server. Since we are simulating a network we can run several Lightning nodes, all connecting to a single Bitcoin node in regtest mode.
Normally, each node operator runs their own Lightning node and their own Bitcoin node on their own server. For us, a single bitcoind container can serve many Lightning nodes. On our simulated network we can run several Lightning nodes, all connecting to a single Bitcoin node in regtest mode.
====
==== Running the bitcoind and LND containers
As before, we start the bitcoind container in one terminal and LND in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start bitcoin in the +lnbook+ network, we use +docker run+, like this:
As before, we start the bitcoind container in one terminal and LND in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start bitcoind in the +lnbook+ network we use +docker run+ like this:
----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----
Next, we start the LND container we just build. We will need to attach it to the +lnbook+ network and give it a name, just as we did with the other containers:
Next, we start the LND container we just built. As done before we need to attach it to the +lnbook+ network and give it a name:
----
$ docker run -it --network lnbook --name lnd lnbook/lnd
@ -641,9 +644,9 @@ Funding lnd wallet
----
The LND container starts up and connects to the bitcoind container over the docker network. First, our LND node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script will send an RPC command to the bitcoind node, creating a transaction that funds the LND wallet with 10 test BTC.
The LND container starts up and connects to the bitcoind container over the docker network. First, our LND node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script will send an RPC command to the bitcoind node thereby creating a transaction that funds the LND wallet with 10 test BTC.
As we demonstrated previously, we can issue commands to our container in another terminal, to extract information, open channels etc. The command that allows us to issue command-line instructions to the +lnd+ daemon is called +lncli+. Let's get the node info, in another terminal window, using the +docker exec+ command:
As we demonstrated previously, we can issue commands to our container in another terminal in order to extract information, open channels etc. The command that allows us to issue command-line instructions to the +lnd+ daemon is called +lncli+. Let's get the node info using the +docker exec+ command in another terminal window:
----
$ docker exec lnd lncli -n regtest getinfo
@ -660,31 +663,29 @@ $ docker exec lnd lncli -n regtest getinfo
}
----
We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. If you are still running the c-lightning container, there are now two nodes running. They're not yet connected to each other, but we will be connecting them to each other soon.
We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. If you are still running the c-lightning container, then there are now two nodes running. They're not yet connected to each other, but we will be connecting them to each other soon.
If you want, you can run several LND nodes, or c-lightning nodes, or any combination of these on the same Lightning network. To run a second LND node, for example, you would issue the +docker run+ command with a different container name, like this:
If desired, you can run any combination of LND and c-lightning nodes on the same Lightning network. For example, to run a second LND node you would issue the +docker run+ command with a different container name like so:
----
$ docker run -it --network lnbook --name lnd2 lnbook/lnd
----
In the command above, we start another LND container, named +lnd2+. The names are entirely up to you, as long as they are unique. If you don't provide a name, docker will construct a unique name by randomly combining two English words, such as "naughty_einstein" (this is the actual name docker chose when we wrote this paragraph - how funny!).
In the command above, we start another LND container, naming it +lnd2+. The names are entirely up to you, as long as they are unique. If you don't provide a name, docker will construct a unique name by randomly combining two English words such as "naughty_einstein". This was the actual name docker chose for us when we wrote this paragraph. How funny!
In the next section we will also look at how to download and compile LND directly from the source code. This is an optional and advanced step that will teach you how to use the Go language build tools and allow you to make modifications to LND source code. With this knowledge, you can write some code, or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
In the next section we will look at how to download and compile LND directly from the source code. This is an optional and advanced step that will teach you how to use the Go language build tools and allow you to make modifications to LND source code. With this knowledge you can write some code or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
==== Installing LND from source code
In this section we will build LND from scratch. LND is written in the Go programming language (search for golang to avoid irrelevant results on the word "go"). Because it is written in Go and not C or C++, it uses a different "build" framework than the GNU autotools/make framework we saw used in c-lightning previously. Don't fret though, it is quite easy to install and use the golang tools and we will show each step here. Go is a fantastic language for collaborative software development as it produces very consistent, precise and easy to read code regardless of the number of authors. Go is focused and "minimalist" in a way that encourages consistency across versions of the language. As a compiled language, it is also quite efficient. Let's dive in.
In this section we will build LND from scratch. LND is written in the Go programming language. If you want to find out more about Go, search for +golang+ instead of +go+ to avoid irrelevant results. Because it is written in Go and not C or C++, it uses a different "build" framework than the GNU autotools/make framework we saw used in c-lightning previously. Don't fret though, it is quite easy to install and use the golang tools and we will show each step here. Go is a fantastic language for collaborative software development as it produces very consistent, precise, and easy to read code regardless of the number of authors. Go is focused and "minimalist" in a way that encourages consistency across versions of the language. As a compiled language, it is also quite efficient. Let's dive in.
We will follow the installation instructions found on the LND project documentation:
We will follow the installation instructions found in the LND project documentation:
https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md
First, we will install the golang package and associated libraries. We need, _at minimum_ Go version 1.13 or later. The official Go language packages are distributed as binaries from https://golang.org/dl. For convenience they are also packaged as debian packages distributed through the +apt+ command. You can follow the instructions on https://golang.org/dl or use the apt commands below on a Debian/Ubuntu Linux system:
First, we will install the +golang+ package and associated libraries. We strictly require Go version 1.13 or later. The official Go language packages are distributed as binaries from https://golang.org/dl. For convenience they are also packaged as Debian packages available through the +apt+ command. You can follow the instructions on https://golang.org/dl or use the +apt+ commands below on a Debian/Ubuntu Linux system as described on https://github.com/golang/go/wiki/Ubuntu:
----
$ sudo add-apt-repository ppa:longsleep/golang-backports
$ sudo apt update
$ sudo apt install golang-go
----
@ -695,7 +696,7 @@ $ go version
go version go1.13.4 linux/amd64
----
We have 1.13.4, so we're ready to... Go! Next we need to tell any programs where to find the Go code. This is accomplished with the environment variable +GOPATH+. It doesn't matter where the GOPATH points, as long as you set it consistently. Usually it is located under the current user's home directory (referred to as +~+ in the shell). Set the +GOPATH+ and make sure your shell adds it to your executable +PATH+ like this:
We have 1.13.4, so we're ready to... Go! Next we need to tell any programs where to find the Go code. This is accomplished by setting the environment variable +GOPATH+. Usually the Go code is located in a directory named +gocode+ directly in the user's home directory. With the following two commands we consistently set the +GOPATH+ and make sure your shell adds it to your executable +PATH+. Note that the user's home directory is referred to as +~+ in the shell.
----
export GOPATH=~/gocode
@ -706,24 +707,24 @@ To avoid having to set these environment variables every time you open a shell,
==== Copying the LND source code
As with many open source projects nowadays, the source code for LND is on Github. The +go get+ command can fetch it directly using the git protocol:
As with many open source projects nowadays, the source code for LND is on Github (www.github.com). The +go get+ command can fetch it directly using the git protocol:
----
$ go get -d github.com/lightningnetwork/lnd
----
Once +git clone+ finishes, you will have a sub-directory under +GOPATH+ that contains the LND source code.
Once +go get+ finishes, you will have a sub-directory under +GOPATH+ that contains the LND source code.
==== Compiling the LND source code
LND uses the +make+ build system for convenience. To build the project, we change directory to LND's source code and then use +make+, like this:
LND uses the +make+ build system. To build the project, we change directory to LND's source code and then use +make+ like this:
----
cd $GOPATH/src/github.com/lightningnetwork/lnd
make && make install
----
After several minutes, you will have two new commands +lnd+ and +lncli+ installed. Try them out and check their version, to ensure they are installed:
After several minutes you will have two new commands +lnd+ and +lncli+ installed. Try them out and check their version to ensure they are installed:
----
$ lnd --version
@ -732,11 +733,11 @@ $ lncli --version
lncli version 0.10.99-beta commit=clock/v1.0.0-106-gc1ef5bb908606343d2636c8cd345169e064bdc91
----
You will likely see a different version from that shown above, as the software continues to evolve long after this book is printed. However, no matter what version you see, the fact that the commands execute and show you version information means that you have succeeded in building the LND software.
You will likely see a different version from that shown above, as the software continues to evolve long after this book is published. However, no matter what version you see, the fact that the commands execute and show you version information means that you have succeeded in building the LND software.
=== The Eclair Lightning node project
Eclair (French for Lightning) is a Scala implementation of the Lightning Network, made by ACINQ. Eclair is also one of the most popular and pioneering mobile Lightning wallets, which we used to demonstrate a Lightning payment in the second chapter. In this section we are examining the Eclair server project, which runs a Lightning node. Eclair is an open source project and can be found on GitHub:
Eclair (French for Lightning) is a Scala implementation of the Lightning Network made by ACINQ. Eclair is also one of the most popular and pioneering mobile Lightning wallets which we used to demonstrate a Lightning payment in the second chapter. In this section we examine the Eclair server project which runs a Lightning node. Eclair is an open source project and can be found on GitHub:
https://github.com/ACINQ/eclair
@ -745,13 +746,13 @@ In the next few sections we will build a docker container to run Eclair, as we d
==== Building Eclair as a Docker container
By this point, you are almost an expert in the basic operations of docker! In this section we will repeat many of the commands you have seen previously to build the Eclair container. The container is located in +code/docker/eclair+. We start in a terminal, by switching the working directory to +code/docker+ and issuing the +docker build+ command:
By now, you are almost an expert in the basic operations of docker! In this section we will repeat many of the previously seen commands to build the Eclair container. The container is located in +code/docker/eclair+. We start in a terminal, by switching the working directory to +code/docker+ and issuing the +docker build+ command:
----
$ cd code/docker
$ docker build -t lnbook/eclair eclair
Sending build context to Docker daemon 9.216kB
Step 1/22 : FROM ubuntu:18.04 AS eclair-base
Step 1/22 : FROM ubuntu:20.04 AS eclair-base
---> c3c304cb4f22
Step 2/22 : RUN apt update && apt install -yqq curl gosu jq bash-completion
---> Using cache
@ -770,19 +771,19 @@ Successfully tagged lnbook/eclair:latest
----
Our container is now built and ready to run. The Eclair container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect Eclair to it via a docker network. We've already set up a docker network called +lnbook+ and will be using that again here.
Our container is now built and ready to run. The Eclair container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect Eclair to it via a docker network. We have already set up a docker network called +lnbook+ and will be reusing it here.
One notable difference between Eclair and LND or c-lightning is that Eclair doesn't contain a separate bitcoin wallet, but instead relies on the bitcoin wallet in Bitcoin Core directly. For example, whereas with LND we "funded" it's bitcoin wallet by executing a transaction to transfer bitcoin from Bitcoin Core's wallet to LND's bitcoin wallet, this step is not necessary. When running Eclair, the Bitcoin Core wallet is used directly as the source of funds to open channels. As a result, the Eclair container does not contain a script to transfer bitcoin into its wallet on startup, unlike the LND or c-lightning containers.
One notable difference between Eclair and LND or c-lightning is that Eclair doesn't contain a separate bitcoin wallet but instead relies directly on the bitcoin wallet in Bitcoin Core. Recall that using LND we "funded" its bitcoin wallet by executing a transaction to transfer bitcoin from Bitcoin Core's wallet to LND's bitcoin wallet. This step is not necessary using Eclair. When running Eclair, the Bitcoin Core wallet is used directly as the source of funds to open channels. As a result, unlike the LND or c-lightning containers, the Eclair container does not contain a script to transfer bitcoin into its wallet on startup.
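Since Eclair will spend directly from Bitcoin Core's wallet, it can be useful to verify that this wallet actually holds mature regtest coins before you try to open channels. The following is only a sketch, assuming the +bitcoind+ container from earlier in this chapter and a container configuration that lets +bitcoin-cli+ run without extra flags or credentials:

----
$ docker exec bitcoind bitcoin-cli getbalance
$ docker exec bitcoind bitcoin-cli getnewaddress
$ docker exec bitcoind bitcoin-cli generatetoaddress 101 <address-from-previous-command>
----

Mining 101 blocks to an address, as in the last command, ensures that at least one coinbase output has matured and become spendable.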
==== Running the bitcoind and eclair containers
==== Running the bitcoind and Eclair containers
As before, we start the bitcoind container in one terminal and the eclair container in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start bitcoin in the +lnbook+ network, we use +docker run+, like this:
As before, we start the bitcoind container in one terminal and the Eclair container in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start +bitcoind+ in the +lnbook+ network, we use +docker run+ like this:
----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----
Next, we start the eclair container we just build. We will need to attach it to the +lnbook+ network and give it a name, just as we did with the other containers:
Next, we start the Eclair container we just built. We will need to attach it to the +lnbook+ network and give it a name, just as we did with the other containers:
----
$ docker run -it --network lnbook --name eclair lnbook/eclair
@ -799,9 +800,9 @@ INFO fr.acinq.eclair.Setup - version=0.4 commit=69c538e
----
The eclair container starts up and connects to the bitcoind container over the docker network. First, our eclair node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet.
The Eclair container starts up and connects to the bitcoind container over the docker network. First, our Eclair node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet.
As we demonstrated previously, we can issue commands to our container in another terminal, to extract information, open channels etc. The command that allows us to issue command-line instructions to the +eclair+ daemon is called +eclair-cli+. The +eclair-cli+ command expects a password, which we have set to "eclair" in this container and we will pass +eclair-cli+ that password with the +p+ flag. Let's get the node info, in another terminal window, using the +docker exec+ command:
As we demonstrated previously, we can issue commands to our container in another terminal in order to extract information, open channels etc. The command that allows us to issue command-line instructions to the +eclair+ daemon is called +eclair-cli+. The +eclair-cli+ command expects a password which we have set to "eclair" in this container. We pass the password +eclair+ to the +eclair-cli+ command via the +p+ flag. Using the +docker exec+ command in another terminal window we get the node info from Eclair:
----
$ docker exec eclair eclair-cli -p eclair getinfo
@ -818,36 +819,36 @@ $ docker exec eclair eclair-cli -p eclair getinfo
----
We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. If you want, you can run several Eclair nodes, or LND, or c-lightning nodes, or any combination of these on the same Lightning network. To run a second Eclair node, for example, you would issue the +docker run+ command with a different container name, like this:
We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. You can run any number and any combination of Lightning nodes on the same Lightning network. Any number of Eclair, LND, and c-lightning nodes can coexist. For example, to run a second Eclair node you would issue the +docker run+ command with a different container name as follows:
----
$ docker run -it --network lnbook --name eclair2 lnbook/eclair
----
In the command above, we start another Eclair container, named +eclair2+.
In the above command we start another Eclair container named +eclair2+.
In the next section we will also look at how to download and compile Eclair directly from the source code. This is an optional and advanced step that will teach you how to use the Scala and Java language build tools and allow you to make modifications to Eclair's source code. With this knowledge, you can write some code, or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
In the next section we will also look at how to download and compile Eclair directly from the source code. This is an optional and advanced step that will teach you how to use the Scala and Java language build tools and allow you to make modifications to Eclair's source code. With this knowledge, you can write some code or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.
==== Installing Eclair from source code
In this section we will build Eclair from scratch. Eclair is written in the Scala programming language, which is compiled using the Java compiler. To run Eclair, we first need to install Java and its build tools. We will be following the instructions found on the Eclair project in the BUILD.md document:
In this section we will build Eclair from scratch. Eclair is written in the Scala programming language which is compiled using the Java compiler. To run Eclair, we first need to install Java and its build tools. We will be following the instructions found in the BUILD.md document of the Eclair project:
https://github.com/ACINQ/eclair/blob/master/BUILD.md
The Java compiler we need is part of OpenJDK 11. We will also need a buid framework called Maven, version 3.6.0 or above.
The required Java compiler is part of OpenJDK 11. We will also need a build framework called Maven, version 3.6.0 or above.
On a Debian/Ubuntu Linux system, we can use the apt commands below to install OpenJDK11 and Maven:
On a Debian/Ubuntu Linux system we can use the +apt+ command to install both OpenJDK 11 and Maven as shown below:
----
$ sudo apt install -y openjdk-11-jdk maven
----
Check that you have the correct version installed and ready to use by running:
Verify that you have the correct version installed by running:
----
$ javac -version
javac 11.0.7
$ mvn -V
$ mvn -v
Apache Maven 3.6.1
Maven home: /usr/share/maven
Java version: 11.0.7, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
@ -856,9 +857,9 @@ Java version: 11.0.7, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd6
We have OpenJDK 11.0.7 and Maven 3.6.1, so we're ready.
==== Copying the LND source code
==== Copying the Eclair source code
The source code for Eclair is on Github. The +git clone+ command can create a local copy for us. Let's switch to our home directory and run it there:
The source code for Eclair is on Github. The +git clone+ command can create a local copy for us. Let's change to our home directory and run it there:
----
$ cd
@ -866,11 +867,11 @@ $ git clone https://github.com/ACINQ/eclair.git
----
Once +git clone+ finishes, you will have a sub-directory +Eclair+ containing the source code for the Eclair server.
Once +git clone+ finishes you will have a sub-directory +eclair+ containing the source code for the Eclair server.
==== Compiling the Eclair source code
Eclair uses the +Maven+ build system. To build the project, we change directory to Eclair's source code and then use +mvn package+, like this:
Eclair uses the +Maven+ build system. To build the project we change the working directory to Eclair's source code and then use +mvn package+ like this:
----
$ cd eclair
@ -903,27 +904,27 @@ $ mvn package
----
After several minutes, the Eclair package will be built. You will find the Eclair server node under +eclair-node/target+, packaged as a zip file. Unzip and run it, by following the instructions here:
After several minutes the build of the Eclair package will complete. You will find the Eclair server node under +eclair-node/target+, packaged as a zip file. Unzip and run it, by following the instructions found here:
https://github.com/ACINQ/eclair#installing-eclair
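The exact file and directory names depend on the version and commit you built, so treat the following only as a rough sketch of those instructions:

----
$ cd eclair-node/target
$ unzip eclair-node-*.zip
$ ./eclair-node-*/bin/eclair-node.sh
----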
Congratulations, you have built Eclair from source and you are ready to code, test, bug fix, and contribute to this project!
Congratulations! You have built Eclair from source and you are ready to code, test, fix bugs, and contribute to this project!
=== Building a complete network of diverse Lightning Nodes
Our final example, in this section, will bring together all the various containers we have build to form a Lightning network made of diverse (LND, c-lightning, Eclair) node implementations. We will compose the network by connecting the nodes together, opening channels from one node to another, and finally, by routing a payment across these channels.
Our final example, presented in this section, will bring together all the various containers we have built to form a Lightning network made of diverse (LND, c-lightning, Eclair) node implementations. We will compose the network by connecting the nodes together and opening channels from one node to another. As the final step we route a payment across these channels.
In this example, we will replicate the Lighting network example from <<routing_on_a_network_of_payment_channels>>. Specifically, we will create four Lightning nodes named Alice, Bob, Wei and Gloria. We will connect Alice to Bob, Bob to Wei, and Wei to Gloria. Finally, we will have Gloria create an invoice and have Alice pay that invoice. Since Alice and Gloria are not directly connected, the payment will be routed as an HTLC across all the payment channels.
In this example, we will replicate the Lightning Network example from <<routing_on_a_network_of_payment_channels>>. Specifically, we will create four Lightning nodes named Alice, Bob, Wei, and Gloria. We will connect Alice to Bob, Bob to Wei, and Wei to Gloria. Finally, we will have Gloria create an invoice and have Alice pay that invoice. Since Alice and Gloria are not directly connected, the payment will be routed as an HTLC across all the payment channels.
==== Using docker-compose to orchestrate docker containers
To make this example work, we will be using a _container orchestration_ tool and command called +docker-compose+. This command allows us to specify an application composed of several containers, and run the application by launching all the containers together.
To make this example work, we will be using a _container orchestration_ tool that is available as a command called +docker-compose+. This command allows us to specify an application composed of several containers and run the application by launching all the cooperating containers together.
First, let's install docker-compose. The instructions depend on your operating system and can be found here:
First, let's install +docker-compose+. The instructions depend on your operating system and can be found here:
https://docs.docker.com/compose/install/
Once you've completed installation, you can confirm you have docker-compose by running:
Once you have completed installation, you can verify your installation by running docker-compose like this:
----
$ docker-compose version
@ -932,11 +933,11 @@ docker-compose version 1.21.0, build unknown
----
The most common docker-compose commands we will use are +up+, and +down+, for example by typing +docker-compose up+.
The most common +docker-compose+ commands we will use are +up+ and +down+, e.g. +docker-compose up+.
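Both are run from the directory containing the +docker-compose.yml+ file. For example:

----
$ docker-compose up     # create and start all the containers defined in docker-compose.yml
$ docker-compose down   # stop and remove those containers when you are finished
----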
==== Docker-compose configuration
The configuration file for docker-compose is found in the +code/docker+ directory and is named +docker-compose.yml+. It contains a specification for a network and each of the four containers, and looks like this:
The configuration file for +docker-compose+ is found in the +code/docker+ directory and is named +docker-compose.yml+. It contains a specification for a network and each of the four containers. The top looks like this:
----
version: "3.3"
@ -960,17 +961,17 @@ services:
container_name: Alice
----
The fragment above defines a network called +lnnet+ and a container called +bitcoind+ which will attach to the +lnnet+ network. The container is the same one we built at the beginning of this chapter. We expose three of the container's ports, which allows us to send commands to it and monitor blocks and transactions. Next, the configuration specifies an LND container called "Alice". Further down you will also see specifications for containers called "Bob" (c-lightning), "Wei" (Eclair) and "Gloria" (LND again).
The fragment above defines a network called +lnnet+ and a container called +bitcoind+ which will attach to the +lnnet+ network. The container is the same one we built at the beginning of this chapter. We expose three of the container's ports allowing us to send commands to it and monitor blocks and transactions. Next, the configuration specifies an LND container called "Alice". Further down you will also see specifications for containers called "Bob" (c-lightning), "Wei" (Eclair) and "Gloria" (LND again).
Since all these diverse implementations follow the Basis of Lightning Technology (BOLT) specification and have been extensively tested for interoperability, they have no difficulty working together to build a Lightning network.
==== Starting the example Lightning network
Before we get started, we should make sure we're not already running any of the containers, because if the new containers share the same name as one that is already running, they will fail to launch. Use +docker ps+, +docker stop+ and +docker rm+ as necessary to clean up!
Before we get started, we should make sure we're not already running any of the containers. If a new container shares the same name as one that is already running, then it will fail to launch. Use +docker ps+, +docker stop+, and +docker rm+ as necessary to stop and remove any currently running containers!
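For example, assuming the container names used earlier in this chapter, the cleanup might look like this:

----
$ docker ps --format '{{.Names}}'   # list the names of all running containers
$ docker stop bitcoind eclair       # stop any leftovers from the earlier examples
$ docker rm bitcoind eclair         # remove them so their names become available again
----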
[TIP]
====
Because we use the same names for these docker containers, we might need to "clean up", to avoid any name conflicts.
Because we use the same names for these orchestrated docker containers, we might need to "clean up" to avoid any name conflicts.
====
To start the example, we switch to the directory that contains the +docker-compose.yml+ configuration file and we issue the command +docker-compose up+:
@ -994,7 +995,7 @@ bitcoind | Starting bitcoind...
[...]
----
Following the start up, you will see a whole stream of log files as each of the nodes starts up and reports its progress. It may look quite jumbled on your screen, but each output line is prefixed by the container name, as you see above. If you wanted to watch the logs from only one container, you can do so in another terminal window, by using the +docker-compose logs+ command with the +f+ (follow) flag and the specific container name:
Following the start up, you will see a whole stream of log files as each of the nodes starts up and reports its progress. It may look quite jumbled on your screen, but each output line is prefixed by the container name as seen above. If you wanted to watch the logs from only one container, you can do so in another terminal window by using the +docker-compose logs+ command with the +f+ (_follow_) flag and the specific container name:
----
$ docker-compose logs -f Alice
@ -1004,11 +1005,11 @@ $ docker-compose logs -f Alice
Our Lightning network should now be running. As we saw in the previous sections of this chapter, we can issue commands to a running docker container with the +docker exec+ command. Regardless of whether we started the container with +docker run+ or started a bunch of them with +docker-compose up+, we can still access containers individually using the docker commands.
To make things easier, we have a little helper script that sets up the network, issues the invoice and makes the payment. The script is called +setup-channels.sh+ and is a Bash shell script. Keep in mind, this script is not very sophisticated! It "blindly" throws commands at the various nodes and doesn't do any error checking. If the network is running correct and the nodes are funded, then it all works nicely. But, you have to wait a bit for everything to boot up and for the network to mine a few blocks and settle down. This usually takes 1-3 minutes. Once you see the block height at 102 or above on each of the nodes, you are ready. If the script fails, you can stop everything (+docker-compose down+) and try again from the beginning, or you can manually issue the commands in the script one by one and look at the results.
To make things easier, we have a little helper script that sets up the network, issues the invoice and makes the payment. The script is called +setup-channels.sh+ and is a Bash shell script. Keep in mind that this script is not very sophisticated! It "blindly" throws commands at the various nodes and doesn't do any error checking. If the network is running correctly and the nodes are funded, then it all works nicely. However, you have to wait a bit for everything to boot up and for the network to mine a few blocks and settle down. This usually takes 1 to 3 minutes. Once you see the block height at 102 or above on each of the nodes, then you are ready. If the script fails, you can stop everything (+docker-compose down+) and try again from the beginning. Or you can manually issue the commands found in the Bash script one by one and look at the results.
[TIP]
====
Beofre running the setup-channels script: Wait a minute or two after starting the network with docker-compose, to make sure all the services are running and all the wallets are funded. To keep things simple, the script doesn't check whether the containers are "ready". Be patient!
Before running the +setup-channels.sh+ script, note the following: wait a minute or two after starting the network with +docker-compose+ to ensure that all the services are running and all the wallets are funded. To keep things simple, the script doesn't check whether the containers are "ready". Be patient!
====
Let's run the script to see its effect and then we will look at how it works internally. We use +bash+ to run it as a command:
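Assuming your working directory is still +code/docker+, where the script is located, the invocation is simply:

----
$ bash setup-channels.sh
----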
@ -1042,87 +1043,87 @@ As you can see from the output, the script first gets the node IDs (public keys)
Looking inside the script, we see the part that gets all the node IDs and stores them in temporary variables so that they can be used in subsequent commands. It looks like this:
----
alice_address=$(docker-compose exec -T Alice lncli -n regtest getinfo | jq .identity_pubkey)
bob_address=$(docker-compose exec -T Bob lightning-cli getinfo | jq .id)
wei_address=$(docker-compose exec -T Wei eclair-cli -s -j -p eclair getinfo| jq .nodeId)
gloria_address=$(docker-compose exec -T Gloria lncli -n regtest getinfo | jq .identity_pubkey)
alice_address=$(docker-compose exec -T Alice bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
bob_address=$(docker-compose exec -T Bob bash -c "lightning-cli getinfo | jq .id")
wei_address=$(docker-compose exec -T Wei bash -c "eclair-cli -s -j -p eclair getinfo| jq .nodeId")
gloria_address=$(docker-compose exec -T Gloria bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
----
If you have followed the first part of the chapter you will recognise these commands and be able to "decipher" their meaning. It looks quite complex, but we will walk through it step-by-step and you'll quickly get the hang of it.
The first command, for example, sets up a variable called +alice_address+ that is the output of a +docker-compose exec+ command. The +T+ flag tells docker-compose to not open an interactive terminal (an interactive terminal may mess up the output with things like color-coding of results). The +exec+ command is directed to the Alice container and runs the +lncli+ utility, since Alice is an LND node. The +lncli+ command must be told that it is operating on the regtest network and will then issue the +getinfo+ command to LND. The output from +getinfo+ is a JSON-encoded object, which we can parse by piping the output to the +jq+ command. The +jq+ command selects the +identity_pubkey+ field from the JSON object. The contents of the +identity_pubkey+ field are then output and stored in +alice_address+.
The first command sets up a variable called +alice_address+ that is the output of a +docker-compose exec+ command. The +T+ flag tells docker-compose to not open an interactive terminal. An interactive terminal may mess up the output with things like color-coding of results. The +exec+ command is directed to the +Alice+ container and runs the +lncli+ utility since +Alice+ is an LND node. The +lncli+ command must be told that it is operating on the +regtest+ network and will then issue the +getinfo+ command to LND. The output from +getinfo+ is a JSON-encoded object, which we can parse by piping the output to the +jq+ command. The +jq+ command selects the +identity_pubkey+ field from the JSON object. The contents of the +identity_pubkey+ field are then output and stored in +alice_address+.
The following three lines do the same for each of the other nodes. Because they are different node implementations (c-lightning, Eclair), their command-line interface is slightly different, but the general principle is the same: Use the command utility to ask the node for it's public key (node ID) information and parse it with +jq+, storing it in a variable for further use later.
The following three lines do the same for each of the other nodes. Because they are different node implementations (c-lightning, Eclair), their command-line interface is slightly different, but the general principle is the same: Use the command utility to ask the node for its public key (node ID) information and parse it with +jq+, storing it in a variable for further use later.
Next, we tell each node to establish a network connection to the next node and open a channel:
----
docker-compose exec -T Alice lncli -n regtest connect ${bob_address}@Bob
docker-compose exec -T Alice lncli -n regtest openchannel ${bob_address} 1000000
docker-compose exec -T Alice lncli -n regtest connect ${bob_address//\"}@Bob
docker-compose exec -T Alice lncli -n regtest openchannel ${bob_address//\"} 1000000
----
Both of the commands are directed to the Alice container, since the channel will be opened _from_ Alice _to_ Bob, and Alice will initiate the connection.
Both of the commands are directed to the +Alice+ container since the channel will be opened _from_ +Alice+ _to_ +Bob+, and +Alice+ will initiate the connection.
As you can see, in the first command we tell Alice to connect to the Bob node. It's node ID is stored in +${bob_address}+ and it's IP address can be resolved from the name +Bob+ (hence +@Bob+ as the network identifier/address). We do not need to add the port number (9375) because we are using the default Lightning ports.
As you can see, in the first command we tell +Alice+ to connect to the node +Bob+. Its node ID is stored in +${bob_address}+ and its IP address can be resolved from the name +Bob+, hence +@Bob+ is used as the network identifier/address. We do not need to add the port number (9735) because we are using the default Lightning ports.
Next, now that Alice is connected, we open a 1,000,000 satoshi channel to Bob with the +openchannel+ command. Again, we refer to Bob's node by the node ID (i.e. public key).
Now that +Alice+ is connected, we open a 1,000,000 satoshi channel to +Bob+ with the +openchannel+ command. Again, we refer to +Bob+'s node by the node ID, i.e. the public key.
We do the same with the other nodes, setting up connections and channels. Each node type has a slightly different syntax for these commands, but the overall principle is the same:
To Bob's node (c-lightning), we send the command:
To Bob's node (c-lightning) we send these commands:
----
lightning-cli connect ${wei_address}@Wei
lightning-cli fundchannel ${wei_address} 1000000
docker-compose exec -T Bob lightning-cli connect ${wei_address//\"}@Wei
docker-compose exec -T Bob lightning-cli fundchannel ${wei_address//\"} 1000000
----
To Wei's node (Eclair), we send:
To Wei's node (Eclair) we send:
----
eclair-cli -p eclair connect --uri=${gloria_address}@Gloria
eclair-cli -p eclair open --nodeId=${gloria_address} --fundingSatoshis=1000000
docker-compose exec -T Wei eclair-cli -p eclair connect --uri=${gloria_address//\"}@Gloria
docker-compose exec -T Wei eclair-cli -p eclair open --nodeId=${gloria_address//\"} --fundingSatoshis=1000000
----
Now, on Gloria's node, we create a new invoice, for 10,000 satoshi:
At this point we create a new invoice for 10,000 satoshis on Gloria's node:
----
lncli -n regtest addinvoice 10000 | jq .payment_request
gloria_invoice=$(docker-compose exec -T Gloria lncli -n regtest addinvoice 10000 | jq .payment_request)
----
The +addinvoice+ command creates an invoice for the specified amount (in satoshis) and produces a JSON object with the invoice details. From that JSON object, we only need the actual bech32-encoded payement request, and we use +jq+ to extract it.
The +addinvoice+ command creates an invoice for the specified amount in satoshis and produces a JSON object with the invoice details. From that JSON object we only need the actual bech32-encoded payment request, which we extract with +jq+.
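If you are curious, you can decode that payment request outside of the script to inspect its contents, such as the amount, destination node, and payment hash. This is just an exploratory sketch, assuming you have captured the invoice in a shell variable the same way the script does (or you can paste the bech32 string directly):

----
$ docker-compose exec -T Gloria lncli -n regtest decodepayreq ${gloria_invoice//\"}
----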
Next, we have to wait. We just created a bunch of channels, which means that our nodes broadcast a bunch of funding transactions. The channels can't be used until the funding transactions are mined with 6 confirmations. Since our Bitcoin regtest blockchain is set to mine blocks every ten seconds, we have to wait 60 seconds for all the channels to be ready to use.
Next, we have to wait. We just created a bunch of channels. Hence, our nodes broadcast several funding transactions. The channels can't be used until the funding transactions are mined and collect 6 confirmations. Since our Bitcoin +regtest+ blockchain is set to mine blocks every ten seconds, we have to wait 60 seconds for all the channels to be ready to use.
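One simple way to check the progress is to ask the +bitcoind+ container for the current block height. Depending on how your container is configured, +bitcoin-cli+ may need additional flags or credentials, so treat this as a sketch:

----
$ docker-compose exec -T bitcoind bitcoin-cli getblockcount
----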
The final command is the actual payment. We connect to Alice's node and present Gloria's invoice for payment.
The final command is the actual invoice payment. We connect to Alice's node and present Gloria's invoice for payment.
----
lncli -n regtest payinvoice --json --inflight_updates -f ${gloria_invoice}
docker-compose exec -T Alice lncli -n regtest payinvoice --json --inflight_updates -f ${gloria_invoice//\"}
----
We ask Alice's node to pay the invoice, but also ask for +inflight_updates+ in +json+ format. That will give us detailed output about the invoice, the route, the HTLCs and the final payment result, so we can study and learn!
We ask Alice's node to pay the invoice, but also ask for +inflight_updates+ in +json+ format. That will give us detailed output about the invoice, the route, the HTLCs, and the final payment result. We can study this additional output and learn from it!
Since Alice's node doesn't have a direct channel to Gloria, her node has to find a route. There's only one viable route here (Alice->Bob->Wei->Gloria), which Alice will be able to discover now that all the channels are active and have been advertised to all the nodes by the Lightning gossip protocol. Alice's node will construct the route and create an onion packet to establish HTLCs across the channels. All of this happens in a fraction of a second and Alice's node will report the result of the payment attempt. If all goes well, you will see the last line of the JSON output showing:
Since Alice's node doesn't have a direct channel to Gloria, her node has to find a route. There is only one viable route here (Alice->Bob->Wei->Gloria), which Alice will be able to discover now that all the channels are active and have been advertised to all the nodes by the Lightning gossip protocol. Alice's node will construct the route and create an onion packet to establish HTLCs across the channels. All of this happens in a fraction of a second and Alice's node will report the result of the payment attempt. If all goes well, you will see the last line of the JSON output showing:
----
"failure_reason": "FAILURE_REASON_NONE"
----
Arguably, this is a weird message, but technically if there is no failure reason, it is a success!
This is arguably a weird message, but the fact that there was no failure reason, in a round-about way, implies that the operation was a success!
Scrolling above that funny message you will see all the details of the payment. There's a lot to review, but as you gain understanding of the underlying technology, more and more of that information will become clear. Come back to this example later.
Scrolling above that unusual message you will see all the details of the payment. There is a lot to review, but as you gain understanding of the underlying technology, more and more of that information will become clear. You are invited to revisit this example later.
Of course, you could do a lot more with this test network than a 3-channel, 4-node payment. Here are some ideas for your experiments:
Of course, you can do a lot more with this test network than a 3-channel, 4-node payment. Here are some ideas for your experiments:
* Create a more complex network by launching many more nodes of different types. Edit the +docker-compose.yml+ file and copy sections, renaming as needed.
* Create a more complex network by launching many more nodes of different types. Edit the +docker-compose.yml+ file and copy sections, renaming containers as needed.
* Connect the nodes in more complex topologies: circular routes, hub-and-spoke, full mesh
* Connect the nodes in more complex topologies: circular routes, hub-and-spoke, or full mesh.
* Run lots of payments to exhaust channel capacity. Then run payments in the opposite direction to rebalance the channels. See how the routing algorithm adapts.
* Change the channel fees to see how the routing algorithm negotiates multiple routes and what optimizations it applies. Is a cheap long route better than an expensive short route?
* Change the channel fees to see how the routing algorithm negotiates multiple routes and what optimizations it applies. Is a cheap, long route better than an expensive, short route?
* Run a circular payment from a node back to itself, in order to rebalance it's own channels. See how that affects all the other channels and nodes.
* Run a circular payment from a node back to itself in order to rebalance its own channels. See how that affects all the other channels and nodes.
* Generate hundreds or thousands of small invoices in a loop and then pay them as fast as possible in another loop. See how many transactions per second you can squeeze out of this test network.
* Generate hundreds or thousands of small invoices in a loop and then pay them as fast as possible in another loop. Measure how many transactions per second you can squeeze out of this test network.
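For that last suggestion, a minimal and entirely hypothetical sketch might look like the loop below. It assumes the same container names as above and a funded, settled network:

----
# create and immediately pay 100 small invoices from Alice to Gloria
for i in $(seq 1 100); do
  invoice=$(docker-compose exec -T Gloria lncli -n regtest addinvoice 1000 | jq -r .payment_request)
  docker-compose exec -T Alice lncli -n regtest payinvoice -f "$invoice"
done
----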
=== Conclusion
In this chapter we looked at various projects which implement the BOLT specifications. We built containers to run complete networks and learned how to build each project from source code. You are now ready to explore and dig deeper.
In this chapter we looked at various projects that implement the BOLT specifications. We built containers to run a sample Lightning network and learned how to build each project from source code. You are now ready to explore further and dig deeper.

@ -1,31 +1,895 @@
[[maintaining_a_lightning_node]]
== Running and Maintaining a Lightning Network Node
[[operating_ln_node]]
== Operating a Lightning Network Node
=== Implications of Configurations
After having read this far, you have probably set up a Lightning wallet. In this chapter we will take things one step further and set up a full Lightning node. In addition to setting one up, we will learn how to operate it and maintain it over time.
=== Backups
There are many reasons why you might want to set up your own Lightning node. They include:
==== Static channel backups
* To be a full, active participant in the Lightning Network, not just an end-user.
* To run an e-commerce store or receive income via Lightning payments.
* To earn income from Lightning routing fees.
* To develop new services, applications, or plugins for the Lightning Network.
* To increase your financial privacy while using Lightning.
* To use some apps built on top of Lightning, like Lightning-powered Instant Messaging apps.
* For financial freedom, independence, and sovereignty.
=== Security of your machine
There are costs associated with running a Lightning Network node. You need a computer, a permanent Internet connection, lots of disk space, and lots of time!
Operational costs will include electricity expenses.
But the skills you will learn from this experience are valuable and can be applied to a variety of other tasks too.
Let's get started!
=== Choosing your platform
There are many ways you can run a Lightning node, ranging from a small mini-PC hosted in your home, to a dedicated server, to a hosted server in the cloud. The method you choose will depend on the resources you have and how much money you want to spend.
==== Why is reliability important for running a Lightning Node?
In Bitcoin hardware is not particularly important unless one is specifically running a mining node.
The Bitcoin Core node software can be run on any machine that meets its minimum requirements and does not need to be online to receive payments; only to send them.
If a Bitcoin node goes down for an extended period of time, the user can simply reboot the node and once it connects to the rest of the network, it will re-sync the blockchain.
In Lightning, however, the user needs to be online both to send _and_ to receive payments. If Lightning users are offline, they cannot receive any payments from anyone, and thus their open invoices cannot be fulfilled.
Furthermore, the open channels of an offline node cannot be used to route payments. Your channel partners will notice that you are offline and cannot contact you to route a payment. If you are offline too often, they may consider the Bitcoin locked up in their channels with you to be underutilized capacity, and may close those channels. We already discussed the case of a protocol attack where your channel partner tries to cheat you by submitting an earlier commitment transaction. If you are offline and your channels aren't being monitored, then the attempted theft could succeed and you will have no recourse once the timelock expires.
Hence, node reliability is extremely important for a Lightning node.
There are also the issues of hardware failure and loss of data. In Bitcoin, a hardware failure can be a trivial problem if the user has a backup of their mnemonic phrase or private keys. The Bitcoin wallet and the bitcoin inside the wallet can be easily restored from the private keys on a new computer. Most information can be re-downloaded from the blockchain.
In contrast, in Lightning the information about the user's channels, including the commitment transactions and revocation secrets, are not publicly known and are only stored on the individual user's hardware.
Thus, software and hardware failures in the Lightning Network can easily result in loss of funds.
==== What are the types of hardware Lightning Nodes?
* **General Purpose Computers**: a Lightning Network node can be run on a home computer or laptop running Windows, Mac OS, or Linux. Typically this is run alongside a Bitcoin node.
* **Dedicated Hardware**: a Lightning Node can also be run on dedicated hardware like a Raspberry Pi, Rock64, or mini-PC. This setup would usually run a software stack including a Bitcoin node and other applications. This setup is popular as the hardware is dedicated to running and maintaining the Lightning node only and is usually set up with an installation "helper".
* **Pre-Configured Hardware**: a Lightning Network node can also be run on purpose-built hardware specifically selected and configured for it. This would include "out-of-the-box" Lightning node solutions that can be purchased as a kit or a turn-key system.
==== Running in the "cloud"
_Virtual Private Server_ (VPS) and "cloud computing" services such as Microsoft Azure, Google Cloud, Amazon Web Services (AWS), or DigitalOcean are quite affordable and can be set up very quickly. A Lightning node can be hosted for between $20 and $40 per month on such a service.
However, as the saying goes, "'Cloud' is just other people's computers". Using these services means running your node on other people's computers, with the corresponding advantages and disadvantages. The key advantages are convenience, efficiency, uptime, and possibly even cost. The cloud operator automates much of the management of the node, providing you with convenience and efficiency. They provide excellent uptime and availability, often much better than what an individual can achieve at home. If you consider that the electricity cost alone of running a server in many western countries is around $10 per month, and add to that the cost of network bandwidth and the hardware itself, the VPS offering becomes financially competitive. Lastly, with a VPS you need no space for a PC at home and have no issues with PC noise or heat.
On the other hand, there are several notable disadvantages. A Lightning node running in the "cloud" will always be less secure and less private than one running on your own computer. Additionally, these cloud computing services are very centralized. The vast majority of Bitcoin and Lightning nodes running on such services are located in a handful of data centers in Virginia, Sunnyvale, Seattle, London, and Frankfurt. When the networks or data centers of these providers have service problems, it affects thousands of nodes on so-called "decentralized" networks.
If you have the ability and capacity to run a node on your own computer at home or in your office, then this might be preferable to running it
in the cloud. Nonetheless, if running your own server is not an option, by all means consider running one on a VPS.
==== Running a node at home
If you have a reasonable capacity internet link at home or in your office, you can certainly run a Lightning node there. Any "broadband" connection is sufficient for the purpose of running a lightweight node, and a fast connection will allow you to run a Bitcoin full node too.
While you can run a Lightning node (and even a Bitcoin node) on your laptop, it will become annoying quite fast. These programs consume your computer's resources and need to run 24/7. Your user applications like your browser or your spreadsheet will find themselves competing against the Lightning background services for your computer's resources. In other words, your browser and other desktop workloads will be slowed down.
And when your word processing app freezes up your laptop, your Lightning node will go down as well. Furthermore, you could never turn off your laptop, because the node needs to stay online around the clock.
All of this combined results in a setup that is not ideal. The same applies to your daily-use personal desktop PC.
Instead, most users will choose to run a node on a dedicated computer. Fortunately, you don't need a "server" class computer to do this. You can run a Lightning node on a mini-PC, such as a Raspberry Pi or an Atom-based fanless PC. These are simple computers which are commonly used as a media server or home automation hub. They are relatively inexpensive, costing between $50 and $150 USD at the time of this writing. To run on a mini-PC, you will need an external USB hard drive, which again is relatively inexpensive, costing approximately $50 USD. The advantage of a dedicated mini-PC as a platform for Lightning and Bitcoin nodes is that it can run continuously, silently, and unobtrusively on your home WiFi network, tucked behind your router or TV. No one will even know that this little box is actually part of a global banking system!
==== What hardware is required to run a Lightning node?
At a minimum, the following will be required to run a Lightning Node:
* **CPU**: sufficient processing power will be required to run a Bitcoin node, which will continuously download and validate new blocks. The user also needs to consider the Initial Block Download (IBD) when setting up a new Bitcoin node, which can take anywhere from several hours to several days. A 2-core or 4-core CPU is recommended.
* **RAM**: a system with 2GB of RAM will _barely_ run both Bitcoin and Lightning nodes. It will perform much better with at least 4GB of RAM. The Initial Block Download will be especially challenging with less than 4GB of RAM. More than 8GB of RAM is unnecessary, because the CPU is the greater bottleneck for these types of services, due to cryptographic operations such as signature validation.
* **Storage Drive**: this can be a Hard Disk Drive (HDD) or a Solid State Drive (SSD). An SSD will be significantly quicker for running a Bitcoin node. Most of the storage is used for the Bitcoin blockchain, which occupies more than 340GB (as of September 2020).
* **Internet Connection**: a reliable internet connection will be required to download new Bitcoin blocks, as well as to communicate with other Lightning peers. During operation the estimated data use ranges from 10GB to 100GB per month, depending on configuration. At startup a Bitcoin full node downloads the full blockchain, 340GB as of September 2020.
* **Power Supply**: a reliable power supply is required as Lightning nodes need to be online at all times. A power failure will cause in-progress payments to fail. For heavy duty routing nodes, a backup or uninterruptible power supply (UPS) is useful in the case of power outages.
Ideally, you should connect your Internet router to this UPS as well.
* **Backup**: Backup is crucial as a failure can result in loss of data and hence in loss of funds.
The user will want to consider some kind of data backup solution. This could be a cloud-based automated backup to a server or web service the user controls. Alternatively, it could be an automated local hardware backup, such as a second hard drive. For best results, both local and remote backup can be combined.
==== Switching server configuration in the cloud
When renting a cloud server, it is often cost effective to change the configuration between two phases of operation: A faster CPU and faster storage will be needed during the Initial Block Download (e.g. the first day). After the blockchain has synced, the CPU and storage speed requirements are much less, so the performance can be downgraded to a more cost-effective level.
For example, on Amazon's cloud, we would use an 8-16GB RAM, 8-core CPU (e.g. t3.large or m3.large) and a faster 400GB SSD (1000+ provisioned IOPS) for the Initial Block Download (IBD), reducing its time to just 6-8 hours. Once that is complete, we would switch the server instance to a 2GB RAM, 2-core CPU (e.g. t3.small) and the storage to a general purpose 1TB HDD. This will cost about the same as if you ran it on the slower server the entire time, but it will get you up and running in less than a day instead of having to wait almost a week for the IBD.
===== Permanent data storage (drive)
If you use a mini-PC or rent a server, the storage can be the costliest part, costing as much as the computer and connectivity (data) added together.
Let's have a look at the different options available. First there are two main types of drives, Hard Disk Drives (HDDs) and Solid State Drives (SSDs). HDDs are cheaper and SSDs are faster, but both do the job.
The fastest SSDs available today use the NVMe interface. The NVMe SSDs are faster in high end machines, but also more costly.
Traditional SATA-based SSDs are cheaper, but not as fast. SATA SSDs perform sufficiently well for your node setup.
Smaller computers might not be able to take advantage of NVMe SSDs.
For example, the Raspberry Pi 4 cannot benefit from them because of the limited bandwidth of its USB port.
To choose the size, let's look at the Bitcoin blockchain. As of September 2020 its size is 340GB including the transaction index. If you want to have some margin available for future growth or to install other data on your node, purchase at least a 512GB drive or better yet a 1TB drive.
=== Using an installer or helper
Installing a Lightning node or a Bitcoin node may be daunting if you are not familiar with a command-line environment. Luckily, there are a number of projects that make "helpers", i.e. software that installs and configures the various components for you. You will still need to learn some command-line incantations to interact with your node, but most of the initial work is done for you.
==== RaspiBlitz
One of the most popular and most complete "helpers" is _RaspiBlitz_, a project built by Christian Rootzoll. It is intended to be installed on a Raspberry Pi 4. RaspiBlitz comes with a recommended hardware "kit" that you can build in a matter of hours or at most a weekend. If you attend a Lightning "hackathon" in your city, you are likely to see many people working on their RaspiBlitz setup, swapping tips, and helping each other. You can find the RaspiBlitz project here:
https://github.com/rootzoll/raspiblitz
image::images/raspiblitz.jpg[]
In addition to a Bitcoin and Lightning node, RaspiBlitz can install a number of additional services, such as:
* TOR (Run as Hidden Service)
* ElectRS (Electrum Server in Rust)
* BTCPayServer (Cryptocurrency Payment Processor)
* BTC-RPC-Explorer (Bitcoin Blockchain Explorer)
* LNbits (Lightning wallet/accounts System)
* SpecterDesktop (Multisig Trezor, Ledger, Coldcard Wallet & Specter-DIY)
* LNDmanage (Advanced Channel Management CLI)
* Loop (Submarine Swaps Service)
* JoinMarket (CoinJoin Service)
==== MyNode
_MyNode_ is another popular open source "helper" project that includes a lot of Bitcoin-related software. It is easy to install: you "flash" the installer onto an SD card and boot your mini-PC from the SD card. You do not need a monitor to use myNode, as the administrative tools are accessible remotely from a browser. If your mini-PC has no monitor, mouse, or keyboard, you can manage it from another computer or even from your smartphone. Once installed, go to http://mynode.local/ and create a Lightning wallet and node in two clicks.
You can find the MyNode project here:
https://mynodebtc.com/
In addition to a Bitcoin and Lightning node, MyNode can optionally install a variety of additional services, such as:
- Ride The Lightning (Lightning Node Management GUI)
- VPN Support for remote management or wallet (OpenVPN)
- lndmanage (CLI management tool)
- btc-rpc-explorer (A Bitcoin blockchain explorer)
==== BTCPay Server
While not initially designed as an installation "helper", the e-commerce and payment platform _BTCPay Server_ has an incredibly easy installation system that uses docker containers and +docker-compose+ to install a Bitcoin node, Lightning node, and payment gateway, among many other services. It can be installed on a variety of hardware platforms, from a simple Raspberry Pi 4 (4GB recommended) to a mini-PC, old laptop, desktop or server.
BTCPay Server is a fully featured, self-hosted, self-custody e-commerce platform that can be integrated with many e-commerce platforms such as WordPress WooCommerce and others. Installing the full node is only one step of the e-commerce platform installation.
While originally developed as a feature-for-feature replacement of the _Bitpay_ commercial payment service and API, it has evolved past that to become a complete platform for BTC and Lightning services related to e-commerce. For many sellers or shops it is a one-stop, turn-key solution to e-commerce.
More information can be found at:
https://btcpayserver.org/
In addition to a Bitcoin and Lightning node, BTCPay Server can also install a variety of services, including:
- c-Lightning or LND Lightning node
- Litecoin support
- Monero support
- Spark server (c-lightning web wallet)
- Charge server (c-lightning e-commerce API)
- Ride The Lightning (Lightning node management web GUI)
- Many BTC forks
- BTCTransmuter (event-action automation service supporting currency exchange)
The number of additional services and features is growing rapidly, so the list above is only a small subset of what is available on the BTCPay Server platform.
==== Bitcoin node or lightweight Lightning
One critical choice for your setup will be the choice of the Bitcoin node and its configuration. _Bitcoin Core_, the reference implementation, is the most common choice but not the only choice available. One alternative choice is _btcd_, which is a Go-language implementation of a Bitcoin node. Btcd supports some features that are useful for running an LND Lightning node and are not available in Bitcoin Core.
A second consideration is whether you will run an _archival_ Bitcoin node with a full copy of the blockchain (some 350GB in mid-2020) or a _pruned_ blockchain that only keeps the most recent blocks. A pruned blockchain can save you some disk space but will still need to download the full blockchain at least once (during the Initial Block Download). Hence it won't save you any network traffic. Using a pruned node to run a Lightning node is still an experimental capability and might not support all the functionality. However, many people are running a node like that successfully.
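For illustration only, pruning is enabled with a single line in Bitcoin Core's +bitcoin.conf+; the value is a target size in megabytes (roughly 50GB is picked here as an arbitrary example) and it cannot be combined with a full transaction index:

----
# excerpt from a hypothetical bitcoin.conf
prune=50000    # keep only roughly the most recent 50GB of block data
----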
Finally, you also have the option of not running a Bitcoin node at all. Instead, you can operate the LND Lightning node in "lightweight" mode, using the _neutrino_ protocol to retrieve blockchain information from public Bitcoin nodes operated by others. Running like this means that you are consuming resources from the Bitcoin network without offering any in return; your resources contribute only to the Lightning Network community. For smaller Lightning nodes this will generally reduce network traffic compared to running a full Bitcoin node.
Keep in mind that operating a Bitcoin node allows you to support other services, besides and on top of a Lightning node. These other services may require an archival (not pruned) Bitcoin node and often can't run without a Bitcoin node. Consider upfront what other services you may want to run now or in the future, to make an informed decision on the type of Bitcoin node you select.
The bottom line for this decision is: If you can afford a disk larger than 500GB, run a full archival Bitcoin node. You will be contributing resources to the Bitcoin system and helping others who cannot afford to do so. If you can't afford such a big disk, run a pruned node. If you can't afford the disk or the bandwidth for even a pruned node, run a lightweight LND node over neutrino.
==== Operating system choice
The next step is to select an operating system for your nodes. The vast majority of internet servers run on some variant of Linux. Linux is the platform of choice for the internet because it is a powerful, open-source operating system. Linux, however, has a steep learning curve and requires familiarity with a command-line environment. It is often intimidating for new users.
Ultimately, most of the services can be run on any modern POSIX operating system, which includes Mac OS, Windows, and of course Linux. Your choice should be driven more by your familiarity and comfort with an operating system and your learning objectives. If you want to expand your knowledge and learn how to operate a Linux system, this is a great opportunity to do so with a specific project and a clear goal. If you just want to get a node up and running, go with what you know.
Nowadays, many services are also delivered in the form of containers, usually based on the _Docker_ system. These containers can be deployed on a variety of operating systems, abstracting the underlying OS. You may need to learn some Linux CLI commands nonetheless, as most of the containers run some variant of Linux inside.
=== Choose your Lightning node implementation
As with the choice of operating system, your choice of Lightning node implementation should depend primarily on your familiarity with the programming language and development tools used by the projects. While there are some small differences in features between the various node implementations, those are relatively minor and most implementations converge on the common standards defined by the BOLTs.
Familiarity with the programming language and build system, on the other hand, is a good basis for choosing a node. That's because installation, configuration, ongoing maintenance, and troubleshooting will all involve interacting with the various tools used by the build system. This includes:
* make, autotools, and GNU utilities for c-lightning
* go utilities for LND
* Java/Maven for Eclair
The programming language doesn't just influence the choice of build system. Each language comes with its own design philosophy and conventions, which affect many other aspects of the program, such as:
* Format and syntax of configuration files
* File locations (in the filesystem)
* Command-line arguments and their syntax
* Error message formatting
* Prerequisite libraries
* Remote Procedure Call interfaces
When you choose your Lightning node, you are also choosing all of the above characteristics. So your familiarity with these tools and design philosophies will make it easier to run a node. Or harder, if you land in an unfamiliar domain.
On the other hand, if this is your first foray into the command-line and server/service environment, you will find yourself unfamiliar with any implementation and have the opportunity to learn something completely new. In that case you might want to decide based on a number of other factors, such as:
* Quality of support forums and chat rooms
* Quality of documentation
* Degree of integration with other tools you want to run
As a final consideration, you may want to examine the performance and reliability of different node implementations. This is especially important if you will be using this node in a production environment and expect heavy traffic and high reliability requirements. This might be the case if you plan to run the payment system of a shop on it.
=== Installing a Bitcoin or Lightning node
You decided not to use an installation "helper" and instead to dive into the command-line of a Linux operating system? That is a brave decision and we'll try to help you make it work. If you'd rather not try to do this manually, consider using an application that helps you install the node software or a container based solution, as described in <<helpers>>.
[WARNING]
====
This section will delve into the advanced topic of system administration from the command-line. You will need to do additional research and learn more skills not covered here. Linux administration is a complicated topic and there are many pitfalls. Proceed with caution!
====
In the next few sections we will briefly describe how to install and configure a Bitcoin and Lightning node on a Linux operating system. You will need to review the installation instructions for the specific Bitcoin and Lightning node applications you decided to use. You can usually find these in a file called +INSTALL+ or in the +docs+ sub-directory of each project. We will only describe some of the common steps that apply to all such services, and the instructions we offer will be necessarily incomplete.
==== Background services
For those accustomed to running applications on their desktop or smartphone, an application always has a graphical user interface even if it may sometimes run in the background. The Bitcoin and Lightning node applications, however, are very different. These applications do not have a graphical user interface built in. Instead, they run as _headless_ background services, meaning they are always operating in the background and do not interact with the user directly.
This can create some confusion for users who are not used to running background services. How do you know if such a service is currently running? How do you start and stop it? How do you interact with it? The answers to these questions depend on the operating system you are using. For now we will assume you are using some Linux variant and answer them in that context.
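For example, on a Linux system that uses +systemd+ (which we assume in the rest of this section), a few commands are usually enough to answer these questions; the service name +bitcoind+ matches the setup shown later in this chapter:

----
$ systemctl status bitcoind      # is the service running?
$ journalctl -u bitcoind -f      # follow the service's log output
$ pgrep -a bitcoind              # or simply look for the process
----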
==== Process isolation
Background services usually run under a specific user account in order to isolate them from the operating system and each other. For example, Bitcoin Core is configured to run as user +bitcoin+. You will need to use the command-line to create a user for each of the services you run.
In addition, if you have connected an external drive, you will need to tell the operating system to relocate the user's home directory to that drive. That's because a service like Bitcoin Core will create files under the user's home directory. If you are setting it up to download the full Bitcoin blockchain, these files will take up several hundred Gigabytes. Here, we assume you have connected the external drive and it is located on the +/external_drive/+ path of the operating system.
On most Linux systems you can create a new user with the +useradd+ command, like this:
----
$ sudo useradd -d /external_drive/bitcoin -s /dev/null bitcoin
----
The +-d+ flag assigns the user's home directory. In this case, we put it on the external drive. The +-s+ flag assigns the user's interactive shell. In this case, we set it to +/dev/null+ to disable interactive shell use. The last argument is the new user's username, +bitcoin+.
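Note that, depending on your distribution, +useradd+ may not create the home directory unless you also pass the +-m+ flag, so you may need to create the directory on the external drive and give the new user ownership of it yourself, for example:

----
$ sudo mkdir -p /external_drive/bitcoin
$ sudo chown bitcoin:bitcoin /external_drive/bitcoin
----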
==== Node startup
For both Bitcoin and Lightning node services, "installation" also involves creating a so-called _startup script_ to make sure that the node starts when the computer boots. Startup and shutdown of background services are handled by an operating system process, which in Linux is called _init_ or _systemd_. You can usually find a system startup script in the +contrib+ sub-directory of each project. For example, if you are on a modern Linux OS that uses +systemd+, you will find a script called +bitcoind.service+ that can start and stop the Bitcoin Core node service.
Here's an example of what a Bitcoin node's startup script looks like, taken from the Bitcoin Core code repository:
.From bitcoin/contrib/init/bitcoind.service
----
[Unit]
Description=Bitcoin daemon
After=network.target
[Service]
ExecStart=/usr/bin/bitcoind -daemon \
-pid=/run/bitcoind/bitcoind.pid \
-conf=/etc/bitcoin/bitcoin.conf \
-datadir=/var/lib/bitcoind
# Make sure the config directory is readable by the service user
PermissionsStartOnly=true
ExecStartPre=/bin/chgrp bitcoin /etc/bitcoin
# Process management
####################
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
Restart=on-failure
TimeoutStopSec=600
# Directory creation and permissions
####################################
# Run as bitcoin:bitcoin
User=bitcoin
Group=bitcoin
# /run/bitcoind
RuntimeDirectory=bitcoind
RuntimeDirectoryMode=0710
# /etc/bitcoin
ConfigurationDirectory=bitcoin
ConfigurationDirectoryMode=0710
# /var/lib/bitcoind
StateDirectory=bitcoind
StateDirectoryMode=0710
[...]
[Install]
WantedBy=multi-user.target
----
Install the script by copying it into the +systemd+ service folder +/lib/systemd/system/+ (this requires root privileges):
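For example, from the root of the Bitcoin Core source tree (the path matches the listing above):

----
$ sudo cp contrib/init/bitcoind.service /lib/systemd/system/
----

Then reload +systemd+ so that it picks up the new service definition: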
----
$ sudo systemctl daemon-reload
----
Next, enable the service:
----
$ sudo systemctl enable bitcoind
----
You can now start and stop the service. Don't start it yet, as we haven't configured the Bitcoin node.
----
$ sudo systemctl start bitcoind
$ sudo systemctl stop bitcoind
----
==== Node configuration
To configure your node, you need to create and reference a configuration file. By convention, this file is usually created in +/etc+, under a directory with the name of the program. For example, Bitcoin Core and LND configurations would usually be stored in:
+/etc/bitcoin/bitcoin.conf+
+/etc/lnd/lnd.conf+
These configuration files are text files with each line expressing one configuration option and its value. Default values are assumed for anything not defined in the configuration file. You can see what options can be set in the configuration in two ways. First, running the node application with a +help+ argument will show the options that can be defined on the command-line. These same options can be defined in the configuration file. Second, you can usually find an example configuration file, with all the default options, in the code repository of the software.
You can find one example of a configuration file in each of the docker images we used in <<set_up_a_lightning_node>>. For example, the file +code/docker/bitcoind/bitcoind/bitcoin.conf+:
.Configuration file for docker bitcoind (code/docker/bitcoind/bitcoind/bitcoin.conf)
----
include::code/docker/bitcoind/bitcoind/bitcoin.conf[]
----
That particular configuration file configures Bitcoin Core for operation as a +regtest+ node and provides a weak username and password for remote access, so you shouldn't use it for your node configuration. However, it serves to illustrate the syntax of a configuration file and you can make adjustments to it in the docker container to experiment with different options. See if you can use the +bitcoind -help+ command to understand what each of the options does in the context of the docker network we build in <<set_up_a_lightning_node>>.
Often, the defaults suffice, and with a few modifications your node software can be configured quickly. To get a Bitcoin Core node running with minimal customization, you only need a few lines of configuration:
----
server=1
daemon=1
txindex=1
rpcuser=USERNAME
rpcpassword=PASSWORD
----
Even the +txindex+ option is not strictly necessary, though it will ensure your Bitcoin node creates an index of all transactions, which is required for some applications. The +txindex+ option is not required to run a Lightning node.
A c-lightning Lightning node running on the same server also only requires a few lines in the configuration:
----
network=bitcoin
bitcoin-rpcuser=USERNAME
bitcoin-rpcpassword=PASSWORD
----
In general, it is a good idea to minimize the amount of customization of these systems. The default configuration is carefully designed to support the most common deployments. If you modify a default value, it may cause problems later on or reduce the performance of your node. In short, modify only when necessary!
==== Network configuration
Network configuration is normally not an issue when configuring a new application. However, peer-to-peer networks like Bitcoin and the Lightning network present some unique challenges for network configuration.
In a centralized service, your computer connects to the "big servers" of some corporation, and not vice-versa. Your home Internet connection is actually configured on the assumption that you are simply a consumer of services provided by others. But in a peer-to-peer system, every peer both consumes from and provides services to other nodes. If you're running a Bitcoin or Lightning node at your home, you are providing a service to other computers on the internet. Your internet service by default is not configured to allow you to run servers and may need some additional configuration to enable others to reach your node.
If you want to run a Bitcoin or Lightning node, you need to make it possible for other nodes on the internet to connect to you. That means enabling incoming TCP connections to the Bitcoin port (port 8333 by default) or Lightning port (port 9735 by default). While you can run a Bitcoin node without incoming connectivity, you can't do that with a Lightning node. A Lightning node must be accessible to others from outside your network.
By default, your home internet router does not expect incoming connections from the outside, and in fact incoming connections are blocked. Your internet router IP address is the only externally accessible IP address, and all the computers you run inside your home network share that single IP address. This is achieved by a mechanism called _Network Address Translation (NAT)_ which allows your internet router to act as an intermediary for all outbound connections. If you want to allow an inbound connection you have to set up _Port Forwarding_, which tells your internet router that incoming connections on specific ports should be forwarded to specific computers inside the network. You can do this manually by changing your internet router configuration or, if your router supports it, through an automatic port forwarding mechanism called _Universal Plug and Play (UPNP)_.
An alternative mechanism to port forwarding is to enable The Onion Router (TOR), which provides a kind of virtual private network overlay that allows incoming connections to an _onion address_. If you run TOR, you don't need to do port forwarding nor enable incoming connections to Bitcoin or Lightning ports. If you run your nodes using TOR, all traffic goes through TOR and no other ports are used.
Let's look at different ways you can make it possible for others to connect to your node. We'll look at these alternatives in order, from easiest to most difficult.
===== It just works!
There is a possibility that your internet service provider or router is configured to support UPNP by default and everything just works automatically. Let's try this approach first, just in case we are lucky.
Assuming you already have a Bitcoin or Lightning node running, we will try and see if they are accessible from the outside.
[NOTE]
====
For this test to work, you have to have either a Bitcoin or Lightning node (or both) up and running on your home network. If your router supports UPNP, the incoming traffic will automatically be forwarded to the corresponding ports on the computer running the node.
====
You can use some very popular and useful websites to find out what your external IP address is and whether it allows and forwards incoming connections to a known port. Here are two reliable ones:
https://canyouseeme.org/
https://www.whatismyip.com/port-scanner/
By default, these services only allow you to check incoming connections to the IP address from which you are connecting. This is done to prevent you from using the service to scan other people's networks and computers. You will see your router's external IP address and a field for entering a port number. If you haven't changed the default ports in your node configuration, try port 8333 (Bitcoin) and/or 9735 (Lightning).
[[ln_port_check]]
.Checking for incoming port 9735
image::images/ln_port_check.png[]
In <<ln_port_check>> you can see the result of checking port 9735 on a server running Lightning, using the +whatismyip.com+ port scanner tool. It shows that the server is accepting incoming connections to the Lightning port. If you see a result like this, you are all set!
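If you prefer the command line and have shell access to a machine _outside_ your home network (a cheap virtual server or a phone on mobile data will do), you can run a quick check with netcat; the IP address below is a placeholder for your router's external address:

----
$ nc -vz 203.0.113.10 9735     # replace 203.0.113.10 with your external IP address
----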
===== Automatic port forwarding using UPNP
Sometimes, even if your internet router supports UPNP, it may be turned off by default. In that case you need to change your internet router configuration from its web administration interface:
. Connect to your internet router's configuration website. Usually this can be done by connecting to the _gateway address_ of your home network using a web browser. You can find the gateway address by looking at the IP configuration of any computer on your home network. It is often the first address in one of the non-routable networks, like 192.168.0.1 or 10.0.0.1. Check all stickers on your router as well for the _gateway address_. Once found, open a browser and enter the IP address into the browser URL/Search box, e.g. "192.168.0.1" or "http://192.168.0.1".
. Find the administrator username and password for the web configuration panel of the router. This is often written on a sticker on the router itself and may be as simple as "admin" and "password". A quick web search for your ISP and router model can also help you find this information.
. Find a setting for UPNP and turn it on.
Restart your Bitcoin and/or Lightning node and repeat the open-port test with one of the websites we used in the previous section.
===== Using TOR for incoming connections
_The Onion Router (TOR)_ is a virtual private network with the special property that it encrypts communications between hops, such that any intermediary node cannot determine the origin or destination of a packet. Both Bitcoin and Lightning nodes support operation over TOR, which enables you to operate a node without revealing your IP address or location. Hence, it provides a high level of privacy to your network traffic. An added benefit of running TOR is that because it operates as a VPN, it resolves the problem of port forwarding from your internet router. Incoming connections are received over the TOR tunnel, and your node can be found through an ad-hoc generated _onion address_ instead of an IP address.
Enabling TOR requires two steps: First you must install the TOR router and proxy on your computer. Second, you must enable the use of the TOR proxy in your Bitcoin or Lightning configuration.
To install TOR on an Ubuntu Linux system that uses the +apt+ package manager, run:
----
sudo apt install tor
----
Next, we configure our Lightning node to use TOR for its external connectivity. Here is an example configuration for LND:
----
[Tor]
tor.active=true
tor.v3=true
tor.streamisolation=true
listen=localhost
----
This will enable TOR (+tor.active+), establish a v3 onion service (+tor.v3=true+), use a different onion stream for each connection (+tor.streamisolation+), and restrict listening for connections to the local host only, to avoid leaking your IP address (+listen=localhost+).
You can check if TOR is correctly installed and working by running a simple one-line command. This command should work on most flavors of Linux:
----
curl --socks5 localhost:9050 --socks5-hostname localhost:9050 -s https://check.torproject.org/ | cat | grep -m 1 Congratulations | xargs
----
If everything is working properly, the response of this command should be +"Congratulations. This browser is configured to use Tor."+.
Due to the nature of TOR, you can't easily use an external service to check if your node is reachable via an onion address. Nonetheless, you should see your TOR onion address in the logs of your Lightning node. It is a long string of letters and numbers followed by the suffix +.onion+. Your node should now be reachable from the internet, with the added bonus of privacy!
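If your node runs LND, one quick way to find the onion address is to look at the +uris+ field reported by +getinfo+; a sketch, assuming +jq+ is installed:

----
$ lncli getinfo | jq .uris
----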
===== Manual port forwarding
This is the most complex process and requires quite a bit of technical skill. The details depend on the type of internet router you have, your service provider settings and policies, and a lot of other context. Try UPNP or TOR first, before you try this much more difficult mechanism.
The basic steps are as follows:
. Find the IP address of the computer your node is on. This is usually dynamically allocated by the Dynamic Host Configuration Protocol (DHCP) and is often somewhere in the 192.168.0.X or 10.0.0.X range.
. Find the Media Access Control (MAC) address of your node's network interface. This can be found in the internet settings of that computer.
. Assign a static IP address for your node so that it is always the same one. You can use the IP address it currently has. On your internet router, look for "Static Leases" under the DHCP configuration and map the MAC address to the IP address you selected. Now your node will always have that IP address allocated to it. Alternatively, you can look at your router's DHCP configuration and find out what its DHCP address range is. Select an unused address _outside_ of the DHCP address range. Then, on the server, configure the network to stop using DHCP and hard-code the selected non-DHCP IP address into the operating system network configuration.
. Finally, set up "Port Forwarding" on your internet router to route incoming traffic on specific ports to the selected IP address of your server.
Once done reconfiguring, repeat the port check using one of the websites from the previous sections.
=== Security of your node
A Lightning node is, by definition, a hot-wallet. That means that the funds (both on-chain and off-chain) controlled by a Lightning node are directly controlled by keys that are loaded in the node's memory. If a Lightning node is compromised, it is trivial to create on-chain or off-chain transactions to drain its funds. It is therefore critically important that you protect it from unauthorized access.
Security is a holistic effort, meaning that you have to secure every layer of a system. As the saying goes: the chain is only as strong as the weakest link. This is an important concept in information security and we will apply it to our node.
Despite all security measures you will take, remember that the Lightning Network is an early-stage experimental technology and there are likely to be exploitable bugs in the code of any project you use. *Do not put more money than you are willing to risk losing on the Lightning Network.*
==== Operating system security
Securing an operating system is a vast topic that is beyond the scope of this book. However, we can establish some basic concepts at a high level.
To secure your operating system, here are some of the top items to consider:
. Provenance - Start by ensuring that you are downloading the correct operating system image and verify any signatures or checksums before installing it.
. Maintenance - Make sure that you keep your operating system up to date. Enable automated daily or weekly installation of security updates.
. Least Privilege - Set up users for specific processes and give them the least access needed to run a service. Do not run processes with admin privileges (e.g. root).
. Process Isolation - Use the operating system features to isolate processes from each other.
. File System Permissions - Configure the file system carefully, on the least-privilege principle. Do not make files readable or writeable by everyone.
. Strong Authentication - Use strong, randomly generated passwords, or whenever possible public-key authentication, e.g. with Secure Shell (SSH), instead of passwords (see the sketch after this list).
. Two-factor Authentication (2FA) - Use two-factor authentication wherever possible, including Universal 2-Factor (U2F) with hardware security keys.
. Backup - Make backups of your system, but make sure you protect the backups with encryption too.
. Vulnerability & Exposure Management - Use remote scanning to ensure you have minimized the attack surface of your system. Close any unnecessary services or ports.
This is a list of basic security measures, not an exhaustive list.
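As one concrete illustration of the strong authentication item above, here is a minimal sketch of switching SSH from password logins to key-based logins; the user and host names are illustrative, and the exact SSH service name may differ between distributions:

----
# On your local machine: generate a key pair and copy the public key to the node
$ ssh-keygen -t ed25519
$ ssh-copy-id admin@mynode.local

# On the node: disable password and root logins in /etc/ssh/sshd_config
#   PasswordAuthentication no
#   PermitRootLogin no
# then restart the SSH service (called "ssh" or "sshd" depending on the distribution)
$ sudo systemctl restart ssh
----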
==== Node Access
Your Lightning node will expose a Remote Procedure Call (RPC) Application Programming Interface (API). This means that your node can be controlled by commands sent to a specific TCP port. Control of that RPC API is achieved by some form of user authentication. Depending on the type of Lightning node you set up, this will either be done by username/password authentication, or by a mechanism called an authentication _macaroon_. A macaroon is a more sophisticated type of cookie, as the name implies. Unlike a cookie, it is cryptographically signed and can express a set of access capabilities.
For example, LND uses macaroons to grant access to the RPC API. By default the LND software creates three macaroons with different levels of access, called +admin+, +invoice+ and +readonly+. Depending on which macaroon you copy and use in your RPC client, you either have "readonly" access, "invoice" access (which includes the "readonly" capabilities), or "admin" access which gives you full control. There's also a macaroon "bakery" function in LND that can construct macaroons with any combination of capabilities with very fine-grained control.
If you use a username/password authentication model, make sure you select a long and random password. You will not have to type this password often as it will be stored in the configuration files. You can therefore pick one that cannot be guessed. Many of the examples you will see include poorly chosen passwords and often people copy these into their own systems, giving an easy backdoor to anyone. Don't do that. Use a password manager to generate a random alpha-numeric password. Since certain special characters can interfere with the command line (e.g. $/!*\&%), it is best to avoid them for passwords that will be used in a shell environment and may end up being passed as command-line parameters.
A plain alphanumeric sequence that is longer than 12 characters and randomly generated is usually sufficient. If you plan to store a large amount of money on your Lightning node and are concerned about remote brute-force attacks, select a length of more than 20 characters, which makes such attacks practically infeasible.
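One simple way to generate such a password on a Linux system, as a sketch:

----
$ tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo
----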
=== Node and channel backups
A very important consideration when running a Lightning node is the issue of backups. Unlike a Bitcoin wallet, where a BIP39 mnemonic phrase can recover all the state of the wallet, in Lightning this is not sufficient.
Lightning wallets do use a BIP39 mnemonic phrase backup for the on-chain wallet. However, due to the way channels are constructed, the mnemonic phrase is not sufficient to restore a Lightning node. An additional layer of backup is needed, called a _Static Channel Backup (SCB)_. Without an SCB, a Lightning node operator may lose all the funds that are in channels if they lose the Lightning node's data store.
[WARNING]
====
Do not fund channels until you have created a system to continuously backup your channel state. Your backups should be moved "offsite" to a different system and location from your node, so that they can survive a variety of system failures (power loss, data corruption etc.) or natural disasters (flood, fire etc.)
====
Static Channel Backups are not a panacea. First, the state of each channel needs to be backed up every time there is a new commitment transaction. Second, restoring from a channel backup is dangerous. If you do not have the _last_ commitment transaction and you accidentally broadcast an old (revoked) commitment, your channel peer will assume you are trying to cheat and take the entire channel balance with a penalty transaction. To make sure you are closing the channel, you need to do a cooperative close. But a malicious peer could mislead your node into broadcasting an old commitment during that cooperative close, thereby cheating you by making your node inadvertently try to "cheat".
Additionally, the backups of your channels need to be encrypted to maintain your privacy and your channel security. Otherwise, anyone who finds the backups can not only see all your channels, they could use the backups to close all your channels in a way that hands over the balance to your channel peers.
SCBs are therefore a weak compromise, because they swap one type of risk (data corruption or loss) for another type of risk (malicious peers). To restore from a static channel backup, you have to interact with your channel peers and hope they don't try to cheat (either by giving you an old commitment, or by fooling your node into broadcasting a revoked commitment so they can penalize you). Ultimately, this is less of a risk than losing all funds committed to a channel because of data corruption. Since data corruption leads to the same outcome as if every one of your peers cheated, you are better off backing up and taking the chance that some of your peers will act honestly.
Channel backup mechanisms are still a work-in-progress and a weakness in most Lightning implementations.
==== Static Channel Backups (SCB)
At the time of writing this book, only LND offers a built-in mechanism for channel backups. Eclair has no backup on the server side, although Eclair Mobile does offer optional backup to Google Drive. C-lightning recently merged the necessary interfaces for a plugin to implement channel backups, but there is no agreed-upon backup mechanism yet.
File-based backups of the Lightning node databases are a partial solution, but you run the risk of data corruption because those backups may not reliably capture the latest state commitments. It is much better to have a backup mechanism that is triggered every time there is a state change in a channel, ensuring data consistency.
To set up static channel backups in LND, set the +backupfilepath+ parameter, either on the command-line or in the configuration file. LND will then save an SCB file in that directory path. Of course, that's only part of the solution. Now, you have to setup a mechanism that copies that file off-site each time it changes, which is beyond the scope of this book. Any sophisticated backup solution should be able to handle this.
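As an illustration, here is a minimal sketch of such an off-site copy mechanism. It assumes LND's default +channel.backup+ location, that +inotify-tools+ and +rsync+ are installed, and that +backup.example.com+ is a remote host you control; all paths and names are placeholders:

----
#!/bin/bash
# Watch the SCB file and copy it off-site every time it is rewritten.
BACKUP_FILE=/home/lnd/.lnd/data/chain/bitcoin/mainnet/channel.backup
REMOTE=backup.example.com:lnd-backups
while inotifywait -e close_write "$BACKUP_FILE"; do
    rsync -a "$BACKUP_FILE" "$REMOTE/channel.backup-$(date +%s)"
done
----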
==== Hot wallet risk
As we've discussed previously, the Lightning Network consists of a network of _hot wallets_. The funds you store in a Lightning wallet are *online all the time*. You should not store large amounts in a Lightning wallet, as it is quite vulnerable. Large amounts should be kept in a cold wallet that is not online and transacts on-chain.
But given that warning, you may still find you have a significant amount of money in a Lightning wallet. Such is the case for example if you use a Lightning node for e-commerce operations like running a shop, while not having significant expenses you can pay on Lightning. If that is the case, your wallet will likely receive funds often but send funds rarely. You will therefore have two problems simultaneously: Your channels will be imbalanced inwards (more local balance than remote balance) and you will have too much money in the wallet. Fortunately, you can solve both of these problems simultaneously.
Let's look at some of the solutions you can use to reduce the funds exposed on a hot wallet.
==== Sweeping funds
If your Lightning wallet balance becomes too large for your risk appetite, you will need to "sweep" funds out of the wallet. You can do so in three ways: on-chain, off-chain, and via a submarine swap (loop-out). Let's look at each of those in the next few sections.
===== On-chain sweep
Sweeping funds on-chain is accomplished by moving the funds from the Lightning wallet to a Bitcoin wallet (presumably a more secure hardware wallet or cold storage). You do that by closing channels. When you close a channel, all funds from your local balance are "swept" to a Bitcoin address. The Bitcoin address for on-chain funds is usually generated by your Lightning wallet, so it is still a hot wallet. You may need to do an additional transaction to move the funds to a more secure address, such as one generated on your hardware wallet.
Closing channels will incur an on-chain fee and will reduce your Lightning node's capacity and connectivity. However, if you run a popular e-commerce node you will not lack incoming capacity and can strategically close channels with large local balances, essentially "batching" your funds for movement on-chain. You may need to use some channel re-balancing techniques (see <<channel_rebalancing>>) before you close channels to maximize the benefit of this strategy.
===== Off-chain sweep
Another technique you can use involves running a second Lightning node that is not advertised on the network. You can establish large-capacity channels from your public node (e.g. the one running your shop) to your less public (hidden) node. On a regular basis, "sweep" funds by making a Lightning payment to your hidden node.
The advantage of this technique lies in the fact that the Lightning node that receives payments for your shop is publicly known. This makes it a target for hackers, as any Lightning node associated with a shop would be assumed to have large amounts of money in its balance. A second node that is not associated with your shop will not easily be identified as a valuable target.
As an additional measure of security, you can make your second node a hidden TOR service, so that its IP address is not known. That further reduces the opportunity for attack and increases your privacy.
You will need to set up a script that runs at regular intervals, creates an invoice on your hidden node, and then pays that invoice from your shop node, thereby shifting the funds over to your hidden node.
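Here is a minimal sketch of such a script, run on the shop node and assuming both nodes run LND and that +jq+ is installed; the host name, certificate and macaroon paths, and amount are all placeholders:

----
#!/bin/bash
# Sweep funds from the public "shop" node to the hidden node.
AMT=1000000   # amount to sweep, in satoshis (placeholder)

# Create an invoice on the hidden node over its RPC interface...
INVOICE=$(lncli --rpcserver=hidden-node.local:10009 \
    --tlscertpath=/path/to/hidden-tls.cert \
    --macaroonpath=/path/to/hidden-invoice.macaroon \
    addinvoice --amt $AMT | jq -r .payment_request)

# ...and pay it from the local (shop) node.
lncli payinvoice --force "$INVOICE"
----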
Keep in mind that this technique does not move funds into cold storage. Both Lightning nodes are hot wallets. The effect of this is to move funds from a very well known hot wallet to an obscure hot wallet.
===== Submarine swap sweep
Another way to reduce your Lightning hot-wallet balance is to use a technique called a _submarine swap_. Submarine swaps, conceptualized by co-author Olaoluwa Osuntokun and Alex Bosworth, allow on-chain Bitcoin to be exchanged for Lightning payments and vice versa.
A node operator can initiate a submarine swap and send all available channel balance to the other party, who will then send them on-chain Bitcoin in exchange.
In the future this could be a paid service offered by nodes on the Lightning Network who advertise exchange rates or charge a flat fee for the conversion.
The advantage of a submarine swap for sweeping funds is that we do not close a channel. That means that we preserve our channel capacity and re-balance our channels through this operation. As we send a Lightning payment out, we shift balance from local to remote on one or more of our channels. Not only does that reduce the balance exposed in our node's hot wallet, it also increases the balance available for future incoming payments.
Now, you could do this by trusting the intermediary to act as a gateway for your funds and not steal them. But in the case of a submarine swap, the operation does not require trust. Submarine swaps are non-custodial _atomic_ operations. That means that the intermediary cannot steal your funds, because the on-chain payment depends on the completion of the off-chain payment and vice-versa. We will discuss submarine swaps in more detail in <<submarine_swaps>>.
===== Submarine swaps with Loop
One example of a submarine swap service is _Loop_ by Lightning Labs, the company that builds the LND Lightning node. Loop comes in two variations: _Loop In_, which accepts a Bitcoin on-chain payment and converts it into a Lightning off-chain payment and _Loop Out_, which converts a Lightning payment to a Bitcoin payment.
[NOTE]
====
To use Loop, you must be running an LND Lightning node.
====
For the purpose of reducing the balance of our Lightning hot wallet, we would use the Loop Out service. To use the Loop service, we need to install some additional software on our node. The Loop software runs alongside your LND node and provides some command-line tools to execute submarine swaps. You can find the Loop software and installation instructions here:
https://github.com/lightninglabs/loop
Once you have the software installed and running, a Loop-Out operation is as simple as running a single command:
----
loop out --amt 501000 --conf_target 400
Max swap fees for 501000 sat Loop Out: 25716 sat
Regular swap speed requested, it might take up to 30m0s for the swap to be executed.
CONTINUE SWAP? (y/n), expand fee detail (x): x
Estimated on-chain sweep fee: 149 sat
Max on-chain sweep fee: 14900 sat
Max off-chain swap routing fee: 10030 sat
Max no show penalty (prepay): 1337 sat
Max off-chain prepay routing fee: 36 sat
Max swap fee: 750 sat
CONTINUE SWAP? (y/n): y
Swap initiated
Run `loop monitor` to monitor progress.
----
Note that your maximum fee, which represents a worst-case scenario, will depend on the confirmation target that you select.
=== Lightning node uptime and availability
Unlike Bitcoin, Lightning nodes need to be online almost continuously. Your node needs to be online to receive payments, open channels, close channels (cooperatively) and monitor protocol violations. Node availability is such an important requirement in the Lightning Network, that it is a metric used by various automatic channel management tools (e.g. autopilot) to decide with which nodes to open channels. You can even see "availability" as a node metric on popular node explorers such as +1ml.com+.
Node availability is especially important because of potential protocol violations (i.e. revoked commitments). While you can afford short interruptions (hours or even days), you cannot have your node offline for longer periods of time without risking loss of funds.
Keeping a node online is not easy, as various bugs and resource limitations will occasionally cause downtime. Especially if you run a busy and popular node, you will run into limitations of memory, swap space, number of open files, disk space etc. A whole host of different problems will cause your node or your server to crash.
==== Monitoring node availability
Monitoring your node is an important part of keeping it running. You need to monitor not only the availability of the computer itself, but also the availability and correct operation of the Lightning node software.
There are a number of ways to do this, but most require some customization. You can use generic infrastructure monitoring or application monitoring tools, but you have to customize them specifically to query the Lightning node API, to ensure it is running, synchronized to the blockchain and connected to channel peers.
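For example, a minimal health check for an LND node, run from +cron+ every few minutes, might look like the following sketch; it assumes +lncli+ and +jq+ are installed, and the +mail+ command is just a placeholder for whatever alerting mechanism you prefer:

----
#!/bin/bash
# Alert if lnd is not responding or has fallen out of sync with the blockchain.
STATUS=$(lncli getinfo 2>/dev/null)
if [ -z "$STATUS" ]; then
    echo "lnd is not responding" | mail -s "Lightning node alert" you@example.com
    exit 1
fi
if [ "$(echo "$STATUS" | jq -r .synced_to_chain)" != "true" ]; then
    echo "lnd is not synced to the blockchain" | mail -s "Lightning node alert" you@example.com
fi
----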
There is a specialized service that offers Lightning node monitoring, using a Telegram bot to notify you of any interruptions in service. This is a free service, though you can pay (over Lightning of course) to get faster alerts. Find more information at:
https://lightning.watch
Over time, we expect more third-party services to provide specialized Lightning node monitoring, most likely charging a micro-payment. Perhaps such services and their APIs will become standardized and be directly supported by Lightning node software.
==== Watchtowers
Watchtowers are a mechanism for outsourcing the monitoring and penalty resolution of Lightning protocol violations.
As we mentioned in previous chapters, the Lightning protocol maintains security through a penalty mechanism. If one of your channel partners broadcasts an old commitment transaction, your node will need to exercise the revocation clause and broadcast a penalty transaction to avoid losing money. But if your node is down during the protocol violation, you might lose money.
To solve this problem, we can use one or more _watchtowers_ to outsource the job of monitoring protocol violations and issuing penalty transactions. There are two parts to a watchtower setup: a watchtower server (or simply "watchtower") that monitors the blockchain and a watchtower client that asks the watchtower server for monitoring service.
Watchtower technology is still in the early stages of development and is not widely supported. However, there are some experimental implementations that you can try, as described below.
LND can run both a watchtower server and a watchtower client. You can activate the watchtower server by adding the following configuration options:
----
[watchtower]
watchtower.active=1
watchtower.towerdir=/path_to_watchtower_data_directory
----
You can use LND's watchtower client by activating it in the configuration and then using the command-line to connect it to a watchtower server. The configuration is:
----
[wtclient]
wtclient.active=1
----
LND's command-line client +lncli+ shows the following options for managing the watchtower client:
----
$ lncli wtclient
NAME:
lncli wtclient - Interact with the watchtower client.
USAGE:
lncli wtclient command [command options] [arguments...]
COMMANDS:
add Register a watchtower to use for future sessions/backups.
remove Remove a watchtower to prevent its use for future sessions/backups.
towers Display information about all registered watchtowers.
tower Display information about a specific registered watchtower.
stats Display the session stats of the watchtower client.
policy Display the active watchtower client policy configuration.
OPTIONS:
--help, -h show help
----
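For example, registering a watchtower, once you know its public key and network address, might look like this (the public key and host are placeholders; 9911 is LND's default watchtower port):

----
# tower pubkey and host are placeholders
$ lncli wtclient add <tower_pubkey>@mytower.example.com:9911
----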
C-lightning has the API hooks necessary for a watchtower client plugin, though no such plugin has been implemented yet.
Finally, a popular standalone watchtower server is The Eye of Satoshi (TEOS), which can be found here:
https://github.com/talaia-labs/python-teos
=== Channel management
In order to participate in the Lightning Network you need to have payment channels on your Lightning node.
It can be difficult to decide with whom to open channels and which parameters to set in the channel policies.
An obvious strategy might be to open channels with shops where you want to buy products.
But what happens if those shops only accept large channels that are beyond your funds, or do not accept public channels at all?
Also, if you are not a consumer but a vendor, you will want to receive payments over the Lightning Network.
Therefore, you either need others to open channels with you, or you need to open channels yourself and use submarine swaps and rebalancing techniques to gain the ability to receive payments.
Both of these techniques are discussed elsewhere in this chapter.
==== Opening outbound channels
As a Lightning node operator, one of the recurring tasks you will need to perform is management of your channels. This means opening outbound channels from your node to other nodes, as well as getting other nodes to open inbound channels to your node. In the future, cooperative channel construction may be possible, so you can open symmetric channels that have funds committed on both ends. For now, however, new channels only have funds on one end - on the originator's side. So to make your node _balanced_ with both inbound and outbound capacity, you need to open channels to others and entice others to open channels towards your node.
As soon as you get your Lightning node up and running, you can fund its Bitcoin wallet and then start opening channels with those funds.
You must choose channel partners carefully, as your node's ability to send payments depends on who your channel partners are and how well connected they are to the rest of the Lightning Network. You also want to have more than one channel, so that your node isn't susceptible to a single point of failure. Since Lightning now supports multi-path payments, you can split your initial funds into several channels and route bigger payments by combining their capacity. Don't make your channels too small, however. Since you need to pay Bitcoin transaction fees to open and close a channel, the channel balance should not be so small that the on-chain fees consume a big portion of it. It's all about balance!
To summarize:
* Find a few well connected nodes
* Open more than one channel
* Don't open too many channels
* Don't make the channels too small
One way to find well connected nodes is to open channels to merchants selling something on the Lightning Network. These nodes tend to be well funded and well connected. So when you are ready to make your first payment, you can open a channel directly to the merchant's node. The merchant's node ID will be in the invoice you receive when you try to buy something, which makes this easy.
Another way to find well connected nodes is to use a Lightning Explorer, such as +1ml.com+ and browse the list of nodes sorted by channel capacity and number of channels. Don't go for the biggest nodes, as that encourages centralization. Go for a node in the middle of the list, so that you can help them grow.
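Once you have picked a peer, opening a channel from an LND node boils down to two commands: connect to the peer, then fund a channel to it. The node ID, host, and amount below are placeholders:

----
# node ID, host and amount are placeholders
$ lncli connect <node_pubkey>@peer.example.com:9735
$ lncli openchannel --node_key <node_pubkey> --local_amt 500000
----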
===== Autopilot
The task of opening channels can be automated, somewhat, with the use of an _autopilot_, which is software that opens channels automatically based on some heuristic rules. Autopilot software is still relatively new and it doesn't always select the best channel partners for you. It might be better, especially in the beginning, to open channels manually.
Autopilots currently exist in three forms:
- Originally, LND shipped an autopilot that is fully integrated with LND and runs constantly in the background while it is turned on.
- lib_autopilot.py can technically offer autopilot computations based on the gossip and channel data of any implementation.
- A c-lightning plugin based on lib_autopilot.py exists that provides an easy-to-use interface for c-lightning users.
The main thing to watch out for when running the LND autopilot is that as soon as it is turned on via the config file, it runs in the background and starts to open channels if you have on-chain outputs in your LND wallet.
If you want to have full control over the Bitcoin transactions that you make and the channels that you open, make sure to turn off the autopilot before you load your LND wallet with bitcoin.
If the autopilot was previously turned on, you might have to restart LND before you top up your wallet with an on-chain transaction, or before you close channels, which effectively gives you on-chain funds again.
Another thing to keep in mind is to set the most important configuration values if you want to run the autopilot.
Here you can find an example configuration:
----
[lnd-autopilot]
autopilot.active=1
autopilot.maxchannels=40
autopilot.allocation=0.70
autopilot.maxchansize=5000000
autopilot.minchansize=500000
autopilot.heuristic=top_centrality:1.0
----
This configuration would activate the autopilot.
It would open channels as long as the following two conditions are met:
1. The number of channels that your node currently has open does not exceed 40.
2. Not more than 70% of your total funds are off-chain in payment channels.
The numbers 40 and 0.7 are chosen arbitrarily here, as we cannot really make a general recommendation about how many channels a node should have or what percentage of its funds should be off-chain.
The LND autopilot does not take on-chain fees into account when deciding when to run, so you might spend a considerable amount on transaction fees.
It will make channel recommendations whenever the conditions are met and will directly try to open a channel, paying whatever fees are appropriate at that time.
According to this configuration file, channels will be between 5 and 50 mBTC in size.
Channel sizes are usually denominated in satoshis on the Lightning Network, but we have converted the amounts for you.
Experience has shown that very small channels, below 1 mBTC, are not very useful, and we do not recommend opening such small channels.
With the wider adoption of multipath payments, smaller channels are less of a burden, but we still stick to our recommendation.
The c-lightning plugin works very differently from the LND autopilot, not only in the algorithms used to make recommendations - which we do not discuss here - but also in its user interface.
First you will need to download the autopilot plugin from the c-lightning plugin repository at https://github.com/lightningd/plugins/tree/master/autopilot and activate it.
We have already explained how to activate plugins in c-lightning.
The c-lightning autopilot takes three configuration values, which can be set in the config file or as command-line arguments when you start lightningd:
----
[c-lightning-autopilot]
autopilot-percent=75
autopilot-num-channels=10
autopilot-min-channel-size-msat=100000000msat
----
These values are the actual defaults, so you do not need to set them at all.
The autopilot will not automatically run in the background like the one in LND.
Instead, you have to start a run explicitly with `lightning-cli autopilot-run-once`.
If you do not want the autopilot to open the recommended channels, but only to make recommendations from which you can handpick nodes, add the optional `dryrun` argument.
A key difference between the lnd and the c-lightning autopilot is that the c-lightning autopilot will also make a recommendation for the channel size.
For example if the autopilot recommends to open a channel with a small node that only has small channels it will not recommend to open a large channel.
However if it opens a channel with a well connected node that also has many large channels it will probably recommend a larger channel size.
As you can see the c-lightning autopilot is not as automatic as lnd but gives you a little bit more control.
These differences are a matter of personal preference and could be the deciding factor in choosing one implementation over the other.
Keep in mind that current autopilots mainly use public information from the gossip protocol about the current topology of the Lightning Network.
Your personal requirements for channels can therefore only be reflected to a certain degree.
More advanced autopilots would also use information that your node has gathered in the past, such as routing successes and whom you have paid before.
Such autopilots might in the future also use the knowledge and statistics they have collected to recommend closing channels and allocating the funds in a different way.
We mention all of this as a word of warning: do not rely too heavily on the autopilot feature at the time of writing of this book.
==== Getting inbound liquidity
In the current design of the Lightning Network, it is more typical for users to obtain outbound liquidity _before_ obtaining inbound liquidity.
They will do so by opening a channel with another node, which means they will usually be able to spend bitcoin over Lightning before they can receive it.
There are four typical ways of getting inbound liquidity:
* Open a channel with outbound liquidity, and then spend some of those funds. Now the balance is on the other end of the channel, which means it can be used to send payments to you.
* Ask someone to open a channel to your node. Offer to reciprocate, so that both of your nodes become better connected and balanced.
* Use a submarine swap (e.g. Loop-In) to exchange on-chain BTC for an inbound channel to your node.
* Pay a third party service to open a channel with you. Several such services exist, some charging a fee to provide liquidity, some for free.
Here's a list of currently available liquidity providers who (for a fee) will open a channel to your node:
* Bitrefill's Thor service at https://www.bitrefill.com/thor-lightning-network-channels/
* Lightning To Me at https://lightningto.me/
* LNBig at https://lnbig.com/
* Lightning Conductor at https://lightningconductor.net/channels
Creating inbound liquidity is challenging from both a practical and user experience perspective. Inbound liquidity does not happen automatically, so you have to find ways to build it for your node. This asymmetry of payment channels is also not intuitive - after all in most payment systems you get paid first (inbound) before you pay others (outbound).
The challenge of creating inbound liquidity is most noticeable if you are a merchant or sell your services for Lightning payments. In that case, you need to be vigilant to ensure that you have enough inbound liquidity to be able to continue to receive payments. What if there is a surge of buyers on your store, but they can't actually pay you because there's no more inbound capacity?
In the future these challenges can be partially mitigated by the implementation of dual-funded channels, which are funded from both sides and offer balanced inbound and outbound capacity. This could also be mitigated by more sophisticated autopilot software, which could request and pay for inbound capacity as needed.
Ultimately, however, Lightning users need to be strategic about channel management to ensure that sufficient inbound capacity is available to meet their needs.
==== Closing channels
As discussed earlier in the book, a Mutual Close is the preferred way of closing a channel; however, there are instances where a Force Close is necessary.
Some examples:
* Your channel partner is offline and cannot be contacted to initiate a mutual close
* Your channel partner is online, but is not responding to requests to initiate a mutual close
* Your channel partner is online, and your nodes are negotiating a mutual close, but they become stuck and cannot reach a resolution.
==== Re-balancing channels
In the course of transacting and routing payments on Lightning, the combination of inbound and outbound capacity can become unbalanced.
For example, if one of your channel partners is frequently routing payments through your node, you will exhaust the inbound capacity on that channel, while also exhausting the outbound capacity on the outgoing channels. Once that happens, you can no longer route payments through that route.
There are many ways to re-balance channels, with different advantages and disadvantages. One way is to use a submarine swap (e.g. Loop-out), as described previously in this chapter. Another way to re-balance is to wait for routed payments that flow in the opposite direction. If your node is well connected, when a specific route becomes exhausted in one direction, the opposite-direction route becomes available. Other nodes may "discover" that opposite-direction route and use it as part of their payment path, re-balancing the funds again.
Another way to re-balance channels is to purposely create a _circular route_ that sends a payment from your node to your node, via the Lightning Network. By sending a payment out on a channel with large local capacity and arranging the path so that it returns to your node on a channel with large remote capacity, both of those channels will become more balanced. An example of a circular route re-balancing strategy can be seen in <<circular-rebalancing>>.
[[circular-rebalancing]]
.Circular route re-balancing
image::images/circular-rebalancing.png[]
Circular re-balancing is supported by most Lightning node implementations and can be done on the command-line or via one of the web management interfaces such as _Ride the Lightning (RTL)_ (see <<rtl>>).
Channel rebalancing is a complex issue that is the subject of active research and covered in more detail in <<rebalancing_channels>>.
=== Routing fees
Running a Lightning node allows you to earn fees by routing payments across your channels. Routing fees are generally not a significant source of income and are dwarfed by the cost of operating a node. For example, on a relatively busy node that routes a dozen payments a day, the fees amount to no more than 2,000 satoshis.
Nodes compete for routing fees by setting their desired fee rate on each channel. Routing fees are set by two parameters on each channel: a fixed _base fee_ that is charged for any payment and an additional variable _fee rate_ that is proportional to the payment amount.
When sending a Lightning payment, a node will select a path so as to minimize fees, minimize hops or both. As a result, a routing fee market emerges from these interactions. There are many nodes that charge very low or no fees for routing, creating downward pressure on the routing fee "market".
If you make no choices, your Lightning node will set a default base fee and fee rate for each new channel. The default values depend on the node implementation you use. The base fee is expressed in millisatoshi (thousandths of a satoshi) and the proportional fee rate in millionths (parts per million of the payment amount). For example, a base fee of 1,000 millisatoshi (1 satoshi) and a fee rate of 1,000 millionths (0.1%) would result in the following charges for a 100,000 satoshi payment:
[latexmath]
====
P = 100,000 \text{ satoshi}
F_{base} = 1,000 \text{ millisatoshi} = 1 \text{ satoshi}
F_{rate} = 1,000 / 1,000,000 = 0.1\%
F_{total} = F_{base} + F_{rate} \times P
\Rightarrow F_{total} = 1 \text{ satoshi} + 100 \text{ satoshi} = 101 \text{ satoshi}
====
Broadly speaking, you can take one of two approaches to routing fees. You can route lots of payments with low fees, making up for low fees by lots of volume. Or you can choose to charge higher fees, expecting you will route fewer payments. If you choose to set higher fees your node will be selected only when other cheaper routes don't exist. Therefore, you will route less often but earn more per successful routing.
For most nodes, it is usually best to leave the default routing fee values, so that your node is competing on a mostly level playing field with other nodes who use the default values.
You can also use the routing fee settings to re-balance channels. If most of your channels have the default fees but you want to rebalance a channel, just decrease the fees on that channel to zero or very low rates. Then sit back and wait for someone to route a payment over your "cheap" route and re-balance your channels for you as a side-effect.
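In LND, for example, the fees of a single channel can be adjusted at runtime with the +updatechanpolicy+ command; a sketch, with the channel point as a placeholder and example fee values:

----
# the funding txid and output index identify the channel; values are examples
$ lncli updatechanpolicy --base_fee_msat 0 --fee_rate 0.000001 \
    --time_lock_delta 40 --chan_point <funding_txid>:<output_index>
----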
=== Node management
Managing your Lightning node on the command-line is obviously not easy. It gives you the full flexibility of the node's API and the ability to write your own custom scripts to achieve whatever goals you want. But if you don't want to deal with the complexity of the command line and only need some basic node management capabilities, you should consider installing a web-based user interface that makes node management much easier.
There are a number of competing projects that offer web-based Lightning node management. Some of the most popular ones are described below.
==== Ride The Lightning (RTL)
RTL is a web graphical user interface that helps users manage Lightning node operations for the three main Lightning implementations (LND, c-lightning and Eclair). RTL is an open source project developed by Suheb, Shahana Farooqi, and many other contributors. You can find the RTL software here:
https://github.com/Ride-The-Lightning/RTL
Here's an example of RTL's web interface, as provided on the project repository:
.Example RTL web interface
image::images/RTL-LND-Dashboard.png[]
==== LNDMon
Lightning Labs, the makers of LND, provide a web-based graphical user interface called +lndmon+ to monitor the various metrics of an LND Lightning node. The lndmon interface works only with LND nodes and is a read-only interface that does not allow you to manage the node directly (e.g. open channels or make payments). Find lndmon here:
https://github.com/lightninglabs/lndmon
=== Conclusion
As you grow and maintain your node you will learn a lot about the Lightning Network. It is a challenging but rewarding task. Mastering these skills will allow you to contribute to the growth and development of this technology and the Lightning Network itself. You will also gain the ability to send and receive Lightning payments with the greatest degree of control and ease, as your node will be a central part of the network's infrastructure and not just a participant on the edges.
View File
@ -19,3 +19,48 @@ Relevant questions to answer:
* Can fails or settles be safely pipelined on the network?
* How does a node send an error back to the sender without knowing who they are?
* What dangers exist w.r.t time-locks and timely on-chain confirmation?
=== Why is it important to keep the packet fixed sized at all times?
As we have already discussed earlier in the chapter, the "onion" packet transmitted via the routing nodes has multiple layers.
The packet starts at the sender of the payment and reaches the first routing node, who peels off the first layer.
This layer gives it information about the payment being transmitted, such as who is the next routing node to pass it to.
It then passes this packet to the second routing node, who peels off the second layer, and so on until the final routing node (i.e. the recipient of the payment) is reached.
We know from the current design of the Lightning Network that there can be a maximum of 20 hops per Lightning payment.
We can think of the data relating to each of these possible 20 hops as one of 20 "layers" of the packet.
If there are 6 hops in the payment, then the first 6 layers of the packet contain information about the first six routing nodes, and the remaining 14 layers contain junk.
If there are 20 hops in the payment, then all 20 layers of the packet contain information about the twenty routing nodes.
Let us now consider the adverse case, where the packet size is NOT fixed i.e. every time a layer is peeled off of the packet, the size of the packet reduces.
If a malicious routing node receives a packet, it can use the size of the packet to estimate how many "layers" are left in this packet.
If it receives a packet and estimates that there are 20 layers left, i.e. a full packet, then it knows that the node who sent it the packet is the originator of the payment.
If it receives a packet and estimates that there is 1 layer left, then it knows that the node that it is sending the packet to is the final recipient of the payment.
Even if it receives a packet and estimates that there are 2 layers left, then it knows that it is either transmitting the packet to the last routing node (i.e. the payment recipient) or it is transmitting the packet to the second last routing node before the recipient.
It can graph out all the channels of the second last routing node and it knows that the final recipient of the payment is either the second last routing node itself, or one of their channel partners (we ignore the case of private channels).
In all cases, some privacy of the payer and the recipient is lost.
[[malicious-routing-diagram]]
.If the Malicious node knows there are 2 layers left, then it knows that the payment recipient is either Node 19 (and there were only 19 hops) or one of Node 19's channel partners
image:images/malicious-routing-diagram.PNG["If the Malicious node knows there are 2 layers left, then it knows that the payment recipient is either Node 19 (and there were only 19 hops) or one of Node 19's channel partners"]
We can extend this example.
Imagine a malicious entity sets up multiple Lightning nodes connected to other well-connected nodes, and it also connects itself across popular payment routes.
These malicious Lightning nodes would then be popular routing choices for those wanting to send payments, especially if they set their routing fees low.
The malicious nodes can then capture the data of all packets that pass through their routing nodes.
With additional information, such as the names of the other routing nodes, it could infer who is making these payments, who is receiving them, and for what amounts.
footnote:[Note that not all Lightning nodes are anonymous.
It is known, for example, that the nodes "aantonop" and "1.ln.aantonop.com" are owned by the author of this book, Andreas Antonopoulos.
Furthermore, companies and businesses in this space can claim ownership of a node by publicizing their node's alias and pubkey on their website or social media.
If we see, for example, a payment with destination "Bitrefill" with a node pubkey that matches Bitrefill's publicized pubkey, we could infer that someone is making a purchase from Bitrefill.
If we know the prices of their services, we could even infer what they purchased. ]
If it has multiple routing nodes connected to each other, it might even be responsible for transmitting several of the hops in a single payment and could form a more complete picture of the route.
We see this example as spiritually similar to the chain analysis already performed on the Bitcoin network; even an incomplete picture of payments can be used to infer things about the parties involved and potentially de-anonymize them.
Fixing the packet size solves the problem of routing nodes knowing how far a packet is along the route.
Whenever a routing node peels off a layer, it adds another layer of junk data to the end, restoring the packet to its "full" size.
In this way, even if a packet has already been transmitted 18 hops, the 19th routing node would still see that the packet contains enough data for up to 20 hops.
The packet size would not provide useful information to the 19th routing node for it to determine if it was the first routing node or the second last routing node.
If it is not the recipient itself, then it knows only that the recipient is someone it is connected to by up to 20 hops.
Assuming a sufficiently complex network with a large number of nodes, the size of the packet then gives them no useful information about the source or the destination of the payment.
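As an illustration only, the following Python sketch shows the peel-and-repad idea in its simplest form. It ignores encryption and the real Sphinx packet format, and the constants and function are made up for this example; in the real protocol the padding is generated deterministically so that integrity checks still pass, whereas here we simply append random bytes:

[source, python]
----
import os

HOP_PAYLOAD_SIZE = 65                      # hypothetical per-hop payload size
PACKET_SIZE = 20 * HOP_PAYLOAD_SIZE        # room for a maximum of 20 hops

def peel_layer(packet):
    """Remove the first hop's payload and pad the remainder so that the
    packet that gets forwarded has exactly the same size as before."""
    assert len(packet) == PACKET_SIZE
    my_payload = packet[:HOP_PAYLOAD_SIZE]
    # real implementations derive this filler deterministically from shared
    # secrets; random bytes are enough to illustrate the constant size
    forwarded = packet[HOP_PAYLOAD_SIZE:] + os.urandom(HOP_PAYLOAD_SIZE)
    return my_payload, forwarded
----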
View File
@ -1,44 +1,48 @@
Chapter overview:
* high level description of p2p interaction for the LN
Relevant questions to answer:
* Encrypted P2P Transport:
* What is the noise protocol? How does it differ from TLS? Who created it
- What is the noise protocol? How does it differ from TLS? Who created it
* and what are some of primary design principles?
- and what are some of primary design principles?
* What is an authenticated key exchange?
- What is an authenticated key exchange?
* Why does Noise offer various handshakes? What are some of unique properties certain handshakes offer?
- Why does Noise offer various handshakes? What are some of unique properties certain handshakes offer?
* What is key rotation in the context of a complete handshake? Why is it important?
- What is key rotation in the context of a complete handshake? Why is it important?
* What is "brontide"? How is it used in LN today? How can it be upgraded in the future?
- What is "brontide"? How is it used in LN today? How can it be upgraded in the future?
* LN Message Format:
* What kind of framing is used in the LN message format? What's the max message size and why is it in place?
- What kind of framing is used in the LN message format? What's the max message size and why is it in place?
* What is a varint? Why is it used in the protocol?
- What is a varint? Why is it used in the protocol?
* What are the message types of some of the popular messages in the protocol?
- What are the message types of some of the popular messages in the protocol?
* How can new messages be added in the future?
- How can new messages be added in the future?
* Feature bits:
* What are feature bits in the network, and how+where are they advertised?
- What are feature bits in the network, and how+where are they advertised?
* How can feature bits be used to phase in new features to the protocol?
- How can feature bits be used to phase in new features to the protocol?
* Today, what are some of the major feature bits used in the system?
- Today, what are some of the major feature bits used in the system?
* What's the difference between and end-to-end network upgrade and an internal network upgrade? How's the analogous to the evolution of routers and protocols in the existing internet?
- What's the difference between and end-to-end network upgrade and an internal network upgrade? How's the analogous to the evolution of routers and protocols in the existing internet?
* TLV Message Extensions:
* What does TLV stand for?
- What does TLV stand for?
* How is this related to the existing protobuf message format?
- How is this related to the existing protobuf message format?
* Where are TLV fields used in the protocol today?
- Where are TLV fields used in the protocol today?
* How can TLV fields be used to extend the protocol, existing messages, and the onion itself?
- How can TLV fields be used to extend the protocol, existing messages, and the onion itself?
- Sidenote that TLV can be used by upcoming Instant Messaging chat apps like `Whatsat`, `Sphinx Chat` or `Juggernaut`
View File
@ -1,4 +1,4 @@
Chapter overview:
.Chapter overview:
* How path finding works in the network
Relevant questions to answer:
@ -8,7 +8,499 @@ Relevant questions to answer:
* Why must path finding happen backwards (receiver to sender)?
* How is the information contained in a channel update used in path finding?
* How can errors sent during payment routing help the sender to narrow their search space?
* What is payment splitting? How does it work?
* What is payment splitting? How does it work? What alternatives exist?
* What information can be sent to intermediate and the final node aside from the critical routing data?
* What are multi-hop locks? What addition privacy and security guarantees to they offer?
* How can the flexible onion space be used to enabled packet switching in the network?
==== Finding a path
Payments on the Lightning Network are forwarded along a path of channels from one participant to another.
Thus, a path of payment channels has to be selected.
If we knew the exact channel balances of every channel we could easily compute a payment path using any of the standard path finding algorithms taught in any computer science program.
Actually, when we consider multipath payments, it is a flow problem rather than a path finding problem.
Since flows consist of several paths, we conveniently talk about path finding.
With exact information about channel balances available, we could solve these problems in a way that optimizes the fees the payer has to pay to the nodes that kindly forward the payment.
However, as discussed, the balance information of all channels is not and cannot be available to all participants of the network.
Thus, we need one or more innovative path finding strategies.
These strategies must relate closely to the routing algorithm that is used.
As we will see in the next section, the Lightning Network uses a source based onion routing protocol for routing payments.
This means in particular that the sender of the payment has to find a path through the network.
With only partial information about the network topology available this is a real challenge and active research is still being conducted into optimizing this part of the Lightning Network implementations.
The fact that the path finding problem is not fully solved for the case of the Lightning Network is a major point of criticism towards the technology.
The path finding strategy currently implemented in Lightning nodes is to probe paths until one is found that has enough liquidity to forward the payment.
While this is not optimal and certainly can be improved, it should be noted that even this simplistic strategy works well.
This probing is done by the Lightning node or wallet and is not directly seen by the user of the software.
The user might only realize that probing is taking place if the payment is not going through instantly.
The algorithm currently also does not necessarily result in the path with the lowest fees.
=== What is "Source-Based" routing and why does the Lightning Network use it?
Source-based routing is a method of path-finding where the sender (i.e. the source) plans the path from itself, through the intermediary nodes, and finally to the destination.
Once a path has been selected, the sender sends the payment to the first intermediary node, who sends it to the second intermediary node and so on, until it reaches the destination.
While a payment is in-flight along a path, the path typically does not get changed by any of the intermediary nodes, even if a shorter path or a cheaper path (in terms of routing fees) exists.
The Lightning Network uses source-based routing at the time of writing in order to protect user privacy.
As discussed in the chapter on Onion Routing, the intermediary nodes transmitting the payment are not aware of the full path of the payment; they only know the node they received it from and the node they are sending it to.
We also cannot expect the destination to find a path.
Even if it specifies a path in the invoice, that path may no longer be viable by the time the invoice is paid, which could be several minutes or several days later.
The recipient can, however, specify "routing hints" in the invoice that may assist the sender in finding a possible path.
However, source-based routing comes with some inherent drawbacks.
The sender chooses the path based on their current understanding of the topological map of the Lightning network.
As discussed in previous chapters, this map is necessarily incomplete; the sender may not be aware of all the channels, and even if they are they will almost certainly not know the latest balances in each of the channels.
And even if the sender did have a complete topological map at one point in time, the balances of channels change with every payment, and so in a short space of time the map would become obsolete.
The standard path finding mechanism with source based onion routing that is implemented in all Lightning Network implementations is the following:
. Select an arbitrary path of payment channels which connects sender and receiver of the payment and for which all channels have a capacity of at least the payment amount and accept HTLCs of this amount.
. Construct the onion from destination to sender according to the meta data (basefee, feerate, CLTV delta) of the channels.
. Send out the onion and see if the payment settles by nodes sending back preimages or if the payment fails.
. If the payment fails use this knowledge to select a different path by starting at step 1.
This means that with every attempted payment nodes actually probe the network and will also learn some information about how balances are distributed.
Implementations will usually prioritise cheaper paths or exclude channels which recently have failed.
In that sense the selection is not completely arbitrary.
Still even with such heuristics in place it could still be considered as a random process or random walk through the channel graph.
There can be several reasons why the payment may fail along the way.
For example one of the nodes becomes unreachable, doesn't have the channel balance, can't accept new htlcs, has updated its fees to a higher amount, or the channel is closed in the interim.
Furthermore, there is no guarantee that the route chosen was the cheapest in terms of fees, or if a shorter path existed.
As at the time of writing, this is a design trade-off made to protect user privacy.
=== Paths are constructed from Destination to Source
Let us assume our standard example in which Alice wants to send a payment of 100k satoshi on a path via Bob and Wei to Gloria.
The path obviously looks like (Alice)-->(Bob)-->(Wei)-->(Gloria).
However Bob and Wei will charge routing fees to forward the onion.
As you already know nodes can charge two types of fees.
First, there is the base fee, which is charged for every successful forwarding and settlement of an HTLC.
This fee is constant and does not depend on the amount that the node is supposed to forward.
Additionally, nodes might charge a fee rate.
The name rate already indicates that this fee depends on the amount that a node is supposed to forward.
For simplicity, let us assume that the fee rates of Bob and Wei are very expensive: 1% for Bob and 2% for Wei.
However, Bob and Wei will not charge a base fee, to keep things simple in our example.
When Alice constructs the onion she has to include the routing fees as the difference between the incoming HTLC and the outgoing HTLC of each hop.
Let us assume she falsely computes the fees as follows while constructing the onions.
Alice knows that 1% of 100k satoshi is 1k satoshi, which she believes she should include in Bob's onion.
Similarly she knows that 2% of 100k satoshi is 2k satoshi, which she believes she should include in Wei's onion.
If she believed that she had to pay a total of 3k satoshi in fees and constructed an onion that looks like this, she would find out that Bob would not forward it:
----
{
   "route": [
      {
         "id": "Bob",
         "channel": "357",
         "direction": 1,
         "satoshi": 103000,
         "delay": 187,
      },
      {
         "id": "Wei",
         "channel": "74",
         "direction": 1,
         "satoshi": 102000,
         "delay": 183,
      },
      {
         "id": "Gloria",
         "channel": "452",
         "direction": 0,
         "satoshi": 100000,
         "delay": 153,
      }
   ]
}
----
The reason for Bob not to forward the onion is that he expects the incoming amount to be 1% larger than the amount he is supposed to forward.
Thus he would like to see an incoming amount of `103020` satoshi, which is 20 satoshi more than Alice has offered him.
According to his fee schedule, Bob has to reject the onion.
If Alice had constructed the onion starting from the destination, she would have computed Bob's fee correctly as 1% of the 102k satoshi he is supposed to forward, which is 1,020 satoshi.
Adding 1,020 satoshi to the 102,000 satoshi that Wei needs to receive on his incoming channel results in the correct value of 103,020 satoshi that Bob requires.
As the routing fees increase the amount that is being forwarded, possibly even beyond the capacity of small channels, it makes sense to start the construction of the onion and the path finding at the destination and work backwards to the sender.
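The following Python sketch, our own simplification that ignores base fees and CLTV deltas, reproduces this calculation and shows why working backwards from the destination yields the correct amounts:

[source, python]
----
def amounts_from_destination(amount_sat, fee_rates):
    """fee_rates are the proportional fees of the routing nodes, ordered
    from the hop closest to the destination back towards the sender."""
    amounts = [amount_sat]          # what the destination receives
    for rate in fee_rates:
        # each hop wants to receive what it forwards plus its own fee
        amounts.append(round(amounts[-1] * (1 + rate)))
    return amounts

# Gloria receives 100,000 sat; Wei charges 2%, Bob charges 1%
print(amounts_from_destination(100_000, [0.02, 0.01]))
# [100000, 102000, 103020] -> Alice has to offer Bob 103,020 satoshi
----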
[NOTE]
====
While onions are also constructed from inside to outside and thus start with the destination this is not the reason why pathfinding has to start with the destination node.
====
=== Fundamentals about path finding
Finding a path through a graph is a problem modern computers can solve rather efficiently.
Developers mostly choose breadth-first search (if the edges are all of equal weight) or Dijkstra's algorithm in cases where the edges are not of equal weight.
In our case the weights of the edges could be the routing fees, and only edges with a capacity larger than the amount to be sent would be included.
In this basic form, path finding on the Lightning Network is very simple and straightforward.
However as we have already discussed in the introduction channel balances cannot be shared with every participant every time a payment takes place as the system would not scale.
Thus our easy computer science problem for which we know a solution turns into a rather complex problem.
We now have to solve a path finding problem with only partial knowledge.
For example, we know which edges might be able to forward a payment because their capacity seems big enough, but we cannot know it for sure unless we try it out or ask the channel owners directly.
Even if we were able to ask the channel owners directly, their balance might change by the time we have asked others, computed a path, constructed an onion and sent it along.
Thus we not only have partial information, but the information we have is highly dynamic and might change before we can use it.
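For contrast, here is a minimal sketch of the "easy" version of the problem, assuming we had a complete and static view of every channel balance. It simply runs a Dijkstra-style search over the channels that can carry the amount, using the proportional fee as the edge weight; the data structures are invented for this illustration:

[source, python]
----
import heapq

def cheapest_path(channels, source, destination, amount):
    """channels: dict node -> list of (peer, balance_towards_peer, fee_rate).
    Returns the cheapest path able to carry `amount`, or None."""
    best_fee = {source: 0.0}
    queue = [(0.0, source, [source])]
    while queue:
        fee, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        for peer, balance, fee_rate in channels.get(node, []):
            if balance < amount or peer in path:
                continue                     # channel too small, or a loop
            total = fee + amount * fee_rate  # proportional fee for this hop
            if total < best_fee.get(peer, float("inf")):
                best_fee[peer] = total
                heapq.heappush(queue, (total, peer, path + [peer]))
    return None
----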
One general observation that everyone can easily make is that if every node along a path is able to forward a certain amount of satoshis these nodes will also be able to forward a lower amount of satoshis.
This is why many people intuitively believe that multipath payments might be a good strategy.
Instead of finding one path where every node owns a large amount of liquidity the task is split into smaller ones.
Another reason is of course that the sender of a payment might just not have the amount they wish to send in one single channel but split over several channels.
We leave it to later sections of this chapter to discuss the strengths and weaknesses of multipath payments.
However, we note that multipath payments are equivalent to finding a flow between the source and the destination.
While finding flows in a static graph with full knowledge is computationally somewhat heavier than computing a shortest path, it still seems to be a feasible problem.
Given the reality of the Lightning Network, and the fact that we do not need to compute a maximum flow, we currently do not know whether the problem is more or less difficult than finding a path.
It seems to be about equally difficult, and the two problems are somewhat connected, as we will see in the following sections.
=== Probing based path finding algorithm on the Lightning Network
As discussed, in order to reliably find a path, nodes would need to know the balances of remote payment channels, and those balances would have to be static.
As neither is the case, nodes currently use a probing-based algorithm.
In its most basic form the algorithm works as follows (a minimal code sketch follows the list):
. Select a random path to the destination node
. Construct and send the onion
. Wait for the response of the onion
. If response == preimage -> success
. If response == failure -> start over from step one
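The helper functions in the following sketch stand in for whatever candidate selection, onion construction and transport a real implementation uses; they are purely illustrative:

[source, python]
----
def pay(destination, amount, max_attempts=10):
    """Naive probing loop: keep trying candidate paths until one settles."""
    excluded = set()                      # channels we have seen fail
    for _ in range(max_attempts):
        path = select_candidate_path(destination, amount, excluded)  # placeholder
        if path is None:
            return False                  # no more plausible paths
        reply = send_onion(build_onion(path, amount))                # placeholders
        if reply.preimage is not None:
            return True                   # payment settled
        excluded.add(reply.failing_channel)  # learn from the error and retry
    return False
----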
Nodes will use various sources of information to improve the selection of a random path.
The main source of information is the gossip protocol.
From the gossip protocol a node learns which other nodes exist and which channels have been opened.
This will basically provide a network view that can be used to run graph algorithms that generate plausible paths.
For example, a breadth-first search traversal.
The graph algorithm will usually be constrained to channels that have at least the capacity of the amount to be sent.
In practice, due to the channel reserve and the assumption that the capacity of a channel will not sit entirely on one side, it is safer to prefer larger channels.
The second source of information is the blockchain itself.
If channels are closed this is not announced via the gossip protocol.
However, as the short channel ID of a channel encodes the position of the funding transaction, and as that output will be spent when the channel is closed, nodes can use this information to update their knowledge about the network of channels.
Another source of information are the past payments themselves.
Onions can return with errors.
Knowing for example that the third hop along a path returns an error means that the first two channels had enough balance and that the third channel - depending on the error - did not have enough balance.
Such edges can be removed from the set of edges similarly to the edges that do not have enough capacity.
Similarly nodes could use such information from previous payment attempts.
It is important that nodes are careful with this data.
While the capacity information of channels from the gossip protocol and the blockchain data is verifiably correct, the data from our third source of information can be incorrect.
Nodes might just send an error back because they do not want to reveal balance information.
Also, the data might simply change over time, as the balances in the Lightning Network are not static and change with every payment attempt that is being made.
Thus nodes should only use such data if it is not too far in the past, or use it only with a certain confidence.
The fourth source of information that the node will use are the routing hints in the BOLT 11 invoice.
Remember that a regular payment process starts with the person who wants to receive money coming up with a random secret and hashing it as the payment hash.
This hash is usually transported to the sender via an invoice.
Invoices usually contain some meta data and in particular routing hints.
This is necessary if the person who wants to be paid does not have announced channels. In that case the invoice will specify some unannounced channels.
Otherwise the payer would not even be able to find a path to the "hidden" node.
Routing hints might also be used by the receiving node to indicate which public channels have enough inbound capacity for the payment and thus the ability to receive funds.
In general the further away from the originating peer the payment goes the more likely it becomes to select a channel with insufficient balance.
Thus indicating on which channels a node wishes to receive funds would actually be quite nice for the sender.
=== Improvements on Source based onion routing
The probing based approach that is used in the Lightning Network has several flaws.
Sending out an onion usually takes a certain amount of time.
The time depends on how many hops the onion has to be forwarded over and, of course, on the speed of the nodes processing the onion and on the topology of the network.
In the following diagram you can see how the time for onions to return generally increases with the number of hops that the onion has encoded.
[[pathfinding-probing]]
.Research showing the times that onions take to return depending on the distance (CC-BY-SA Tikhomirov, Sergei & Pickhardt, Rene & Biryukov, Alex & Nowostawski, Mariusz. (2020). Probing Channel Balances in the Lightning Network.)
image:images/probingtimes.ppm[]
Of course this diagram is just a snapshot from an experiment in early 2020 and things might change.
We can learn from the diagram that payments can take several seconds while the node tries to probe several paths.
This is due to the fact that a single onion can easily take a few seconds to return, and a sender might have to send several onions in a row while probing for a successful path.
In general this will still be much faster than waiting for confirmations of a Bitcoin block, but it is not sufficient in an environment where payments need to settle fast.
If people stand in line at the cash register for their groceries, this would be such a setting.
Thus Lightning developers have been looking for improvements to this probing-based approach, some of which we describe below.
==== Probing based improvements
The last source of information that nodes could use is to probe the network themselves.
Instead of making the actual payment nodes could send out many fake payments which are onions to a random payment hash.
Given the properties of the hash function it is safe to assume that no one would know the preimage.
In that sense the payment will only fail at the destination and nodes can learn a lot about the balances.
Of course this produces spam and heavy load on the network and it is not recommended that nodes do this.
However, participants cannot really be stopped from doing this, unless channel partners see a lot of traffic on a channel which always fails and never settles.
In that case the channel partners could decide to close the channel.
[NOTE]
====
We want you to understand that the Lightning Network by design does not have perfect privacy.
While a lot of information is not easily accessible, every time a path is probed the node learns something about the state of the network at that point in time.
====
We note that one should not send two onions at the same time with the same payment hash for which the recipient knows the preimage.
As long as the onion is being processed and routed, the payment is out of the control of the sender.
If two such onions were sent at the same time, the recipient could very well release the preimage twice and get paid twice.
That is the reason why probing should be conducted with a fake payment hash.
In that case the sender can probe concurrently, as long as the sender has enough funds to pay for all the HTLCs.
However there is a problem.
Assume an onion returns indicating that the payment hash was unknown to the recipient but otherwise the path would have been possible.
The sender would now use this exact path to send the payment with the correct payment hash.
Meanwhile the balances of some channels along the path might have changed and the path does not exist anymore.
In this case the sender would have to start from the beginning all over again.
Admittedly the risk for this to happen is rather small but there is a chance.
A better way and potential improvement for the future of the Lightning network are stuckless payments.
There is a proposal for a system called stuckless payments that has received high appreciation from developers.
This proposal will probably not be implemented before the Lightning Network switches from hashed timelocked contracts to point timelocked contracts, which will not come before Schnorr signatures are activated on the Bitcoin network.
What stuckless payments do is give control back to the sender of an onion.
Without explaining the details here, we just say that the sender can now cancel an onion.
This is great for redundant and concurrent pathfinding.
The sender could send out several real onions.
The first one that arrives at the recipient will be settled.
All others will be canceled.
This increases the usability of the Lightning Network on several levels.
One advantage is that the sender can try several paths at the same time.
The second advantage is that the path is locked after it is found and until it is settled.
This means that the sender can either cancel the onion or help to release the preimage (as senders have to do with the stuckless payment construction).
In particular, the probed path cannot change or be used by other routing requests between probing and setting up the HTLCs that are used to fulfill the request.
The time for a successful payment will be reduced drastically.
The disadvantage is that the sender has to lock more bitcoin during the path finding process.
Due to timeouts these bitcoin can be locked for a couple of days before they can be used again.
This should not happen too often.
Also it utilizes more resources of other nodes.
==== Multipath payments
Everyone can easily make the following observation:
----
Let's say your node has discovered a path along which a certain amount of satoshi, for example 100k, could be routed.
Then any onion along that path at the same time with a lower amount of satoshi would also have been successful.
One can easily conclude that lower amounts have a higher likelihood of being routed successfully to the destination than larger amounts.
----
Researchers and developers have already tested and confirmed this empirically over and over again.
With this assumption in mind it seems natural to split a payment amount and send several smaller payments along various paths.
If a small payment fails, it will be retried and probed just as one would do with a single larger payment.
While the main idea is very easy to understand we want to discuss the details, advantages and disadvantages of this mechanism in the following.
Usually a receiving node will see an incoming HTLC for a certain payment hash.
If the onion signals that the node is the final recipient and that the amount of the HTLC is less than the amount specified in the invoice, the node would not accept the HTLC and would send back an error.
However, with the TLV format of onions a sender can specify the total_amount of the payment, which can be bigger than the single HTLC.
The recipient can then safely accept the HTLC and wait for more HTLCs to arrive.
In this way all parts of the payment use the same payment hash.
The recipient will only release the preimage once the sum of all incoming HTLCs is at least the specified payment amount.
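Conceptually, the recipient's bookkeeping can be sketched as follows; this is a simplification that ignores the timeout after which a real node would fail the held parts:

[source, python]
----
from collections import defaultdict

class MultipartReceiver:
    """Hold partial HTLCs per payment hash and only release the preimage
    once the parts add up to the invoiced total amount."""
    def __init__(self, invoices):
        # invoices: payment_hash -> (total_amount_msat, preimage)
        self.invoices = invoices
        self.received = defaultdict(int)

    def on_htlc(self, payment_hash, htlc_amount_msat):
        total, preimage = self.invoices[payment_hash]
        self.received[payment_hash] += htlc_amount_msat
        if self.received[payment_hash] >= total:
            return preimage      # settle all held parts
        return None              # keep holding and wait for more parts
----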
[NOTE]
====
**Multipath or multipart payments?** You might have noticed that we named this section multipath payments but mentioned in the last paragraph that such a payment consists of several parts.
The protocol specification uses the abbreviation MPP for multipart payments.
This is in fact always correct, as all parts could technically - though this would not make much sense - be delivered over the same path.
As we are introducing MPP in the path finding section of the book, and as they are also used for path finding, we take the liberty of also abbreviating multipath payments as MPP.
====
It is important to recognize that a node that forwards HTLCs via onions does not have to care whether the payment is a single payment or one of several parts of a multipart payment.
The only node that needs to be ready to accept multipart payments is the receiving node.
In the BOLT 11 invoice there is space for feature bits.
If a node wishes to accept multipart payments, it has to signal this by setting the corresponding feature bits (16/17).
If a node wishes to send a multipart payment, it can do so if the receiving node has signaled its willingness to accept such payments.
Currently there is no way for routing nodes to split the payment amount and onion into several parts or merge several incoming HTLCs into a single path.
Besides the potentially better chances of finding routes for the smaller amounts, the sender might want to use a multipart payment because it does not have enough balance in any single payment channel.
If one channel had enough capacity, this could be resolved with a circular rebalancing - which we will discuss in the next section.
However, if the payment amount is bigger than the largest capacity of the sender's channels, the sender can only pay the invoice if the recipient allows and supports multipart payments.
Similarly, a recipient might not be able to receive a single payment of the requested amount and would have an interest in signaling multipart payments.
Luckily nodes will do this automatically and practically always signal support for multipart payments if the implementation supports this feature.
The standard Lightning Network implementations which follow BOLT 1.1 all support this feature.
Multipart payments will almost always be more expensive than a single payment.
You will remember that the fees that routing nodes charge consist of a fee rate and of a base fee.
The total proportional fee of a multipart payment stays roughly the same as for a single payment.
However, the base fee is charged for each part independently of its amount, making multipart payments in most cases more expensive.
As the sender pays the fees, the sender will not necessarily want to split the payment into too many parts.
Thus implementations usually integrate multipart payments into the probing-based approach.
For example, if a single payment does not go through, the node might split the amount in two and try a multipart payment with the smaller amounts.
Those parts could again be split further if they are not successful along a route.
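A rough sketch of such a divide-and-conquer strategy is shown below; `try_single_payment` stands in for the probing loop described earlier, and the minimum part size is an arbitrary illustration:

[source, python]
----
def pay_with_splitting(destination, amount, min_part=10_000):
    """Try the full amount first; if that fails, split it in half and try
    to deliver both halves as parts of one multipart payment."""
    if try_single_payment(destination, amount):         # placeholder helper
        return True
    half = amount // 2
    if half < min_part:
        return False                                     # give up on tiny parts
    return (pay_with_splitting(destination, half, min_part) and
            pay_with_splitting(destination, amount - half, min_part))
----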
The advantages of multi part payments are quite obvious:
. bigger payment sizes
. higher success rates
On the other side we have a couple of downsides:
. Higher fees
. More HTLCs locked / more load on the network
. Potentially longer times. If only a single part gets stuck, all the other HTLCs in flight have to wait, locking the liquidity of many nodes for a potentially longer time.
. Leaks more information as the network is practically probed more heavily.
==== Rebalancing
In this chapter you have already learnt that the path finding problem on the Lightning Network is actually a problem of finding a flow - which consists of several paths.
Very early research about path finding in payment channel networks suggestsfootnote:[TODO: find and add the reference.] that rebalancing channels does not change the flow properties between nodes.
With rebalancing we mean shifting liquidity from one channel to another channel for example via a circular payment.
There is also the notion of offchain / onchain swaps with swapping services.
This form of rebalancing certainly changes also the topological properties like the flow of the network.
As rebalancing via circular self-payments would not change the overall amount that an arbitrary node can send to any other node, people thought that rebalancing is not very useful.
However in practice a node hardly wants to find the perfect flow or multipath to be able to send the absolute maximum amount to another node.
Nodes are rather interested in quickly finding a sufficient large flow so that they can make a reasonable payment.
Research conducted by Rene Pickhardt (one of the authors of this book) indicated that circular rebalancing operations improve the overall success rate of arbitrary payments in the network.
It turns out that there are various ways in which rebalancing can be used, and in some form it even resembles the functionality of a multipart payment.
Thus we decided to devote a section to the basics of rebalancing and how it can be used to improve the path finding abilities of the network.
In our experience, most people call a payment channel balanced if they own the same amount of bitcoin in that channel as their channel partner.
While this seems intuitive, we want to show that it is not the best intuition for our goals.
In order to see this, let us assume the Lightning Network at some point in time looks exactly like this:
All channels split the capacity 50 - 50 dividing it into half between the channel partners.
[[rebalancing-1]]
.A part of the Lightning Network where all the channel balances are distributed 50/50.
image:images/rebalancing-1.png[]
It is quite clear that after just a single payment such a 50/50 state would be destroyed.
You can see this in the following graph.
[[rebalancing-2]]
.The Bob - Wei channel becomes now imbalanced
image:images/rebalancing-2.png[]
You can see that after Bob made a payment of 1 million satoshi to Wei the channel balance has shifted.
Bob now has 1.5 million satoshi on the channel and Wei has 3.5 million satoshi.
The balance ratio went from 50/50 to 30/70.
The other two channels, however, stayed at 50/50.
Wei decides that he wants to have a 50/50 channel with Bob.
There are three ways in which he can achieve this:
. He can send back 1 million satoshi to Bob
. He can use an on-chain swapping service
. He can send a circular onion
Sending back the money would be quite expensive and does not seem to be a realistic option.
Using an on-chain swapping service after every payment to rebalance channels also seems problematic.
The entire idea of creating the Lightning Network was to have fewer on-chain transactions and to be able to send money between people without the need for on-chain transactions.
Thus only the last option remains, which means that Wei could move the money from the Bob-Wei channel via the Bob-Erica channel to his Erica-Wei channel.
[[rebalancing-4]]
.Wei tries to rebalance the Bob-Wei channel in the unbalanced network via a circular onion of 1 million satoshi.
image:images/rebalancing-4.png[]
The problem in the new network can easily be seen on the next picture.
While the Bob-Wei channel now becomes 50/50 again, all the other channels have turned into a 30/70 split ratio.
[[rebalancing-5]]
.Rebalancing one channel produces imbalanced other channels
image:images/rebalancing-5.png[]
An interesting observation about this rebalancing can be made though!
After the payment and the rebalancing, it looks as if Bob had initially sent the money not via the Bob-Wei channel but via the path through Erica.
[[rebalancing-6]]
.Rebalancing is equivalent to having selected a different payment path to begin with.
image:images/rebalancing-6.png[]
This observation is actually quite interesting.
While the math theory tells us that rebalancing channels does not change the maximum flow between two nodes, we see that it has changed the selected path of a payment.
Due to the onion routing and the privacy goals implemented in it, we have source-based routing and thus assume the sender always has to select and thus find the path.
However, this is not true!
When rebalancing comes into play, we can use the local knowledge that nodes have about the distribution of balances to help with the selection of paths and finding a total payment path, multipath or flow.
Remember that in our example, after Bob paid Wei, Bob had a total of 4 million satoshi, Wei had a total of 6 million satoshi and Erica still had 5 million satoshi, as before.
Of course it would be possible to have payment channels between these three people with that distribution of funds so that everyone has 50% of the capacity on their side of the payment channel.
[[rebalancing-7]]
.50/50 balances with updated capacities.
image:images/rebalancing-7.png[]
While the above picture shows that it is possible to have 50/50 channels after the payment, this could only be achieved if the capacities had been changed.
Changing the capacity of channels is only possible by closing and reopening a channel or with the help of a technique called splicing.
The latter is not widely deployed yet and would also depend on on-chain transactions.
We hope that you have seen from this example a few things:
. Off-chain rebalancing does not change how much money can flow from a sender to a receiver.
. Making payments changes how much money sender and receiver can send or receive. This is similar to the physical world where you also can only spend the cash that you have received first.
. The goal to have channels in a 50/50 state is not possible for all the nodes all the time and thus probably not a good one.
. Rebalancing in combination with payments changes the way money flowed from the sender to the recipient. In particular, it can shift the responsibility of finding a path from the sender to several nodes on the network - even though those nodes don't know which path they are helping to find.
. Thus rebalancing can be a nice tool to support path finding.
With these conclusions, let us look more closely at what good rebalancing strategies for nodes would be.
The main problem with Lightning Network channels from a routing and path finding perspective is that the liquidity is not known.
From that perspective the 50/50 approach makes sense, even though it is not achievable.
If nodes could assume that other nodes always have a certain fraction of the capacity on their side, they could use that fraction of the capacity to make path finding decisions.
Initially, all of the balance of a newly opened channel is on one side.
Thus if a new node has opened some channels and has had some channels opened towards it, all of its channels are unbalanced and routing is initially only possible in one direction.
Nodes and node operators could look at the channel balance coefficient, which is defined as the ratio of the balance they hold on a channel to the capacity of that channel.
As the balance can never be below zero and never exceed the capacity this channel balance coefficient will always be between 0 and 1.
A node can easily compute the channel balance coefficient for all its channels.
By the way in the case of the 50/50 rebalancing the coefficients would all have the value of 0.5.
Researchers demonstrated that the overall likelihood to find a path increases if nodes aim to rebalance their channels in a way that their local channel balance coefficients all take the same value.
This target value can easily be computed as the total amount of funds that a node owns on the network divided by the sum of the capacities of all channels that the node maintains.
We call this target value the node balance coefficient ν.
Nodes can check which channels have a channel balance coefficient bigger than ν and which have one smaller than ν.
After identifying such channels, it makes sense to make circular self-payments from the channels with too much liquidity to the channels with too little liquidity.
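A small sketch of this bookkeeping, with invented data structures, might look like this:

[source, python]
----
def balance_coefficients(channels):
    """channels: list of (local_balance, capacity) pairs, one per channel.
    Returns the per-channel coefficients and the node balance coefficient."""
    coefficients = [balance / capacity for balance, capacity in channels]
    nu = sum(b for b, _ in channels) / sum(c for _, c in channels)
    return coefficients, nu

channels = [(4_000_000, 5_000_000), (1_000_000, 5_000_000)]
coefficients, nu = balance_coefficients(channels)              # nu == 0.5 here
send_from = [i for i, c in enumerate(coefficients) if c > nu]  # too much liquidity
send_to   = [i for i, c in enumerate(coefficients) if c < nu]  # too little liquidity
----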
This approach has an economic drawback.
Doing a circular self payment is not for free.
The nodes along the circular path will charge routing fees which always have to be paid by the initiator of the payment.
This would be your node if you wanted to rebalance your channels.
It might be justified for you to pay those fees upfront because you might earn them back with the routing fees that you charge if you can successfully forward payments.
However you do not really know in which direction you will have to route payments later.
In the worst case you moved liquidity out of a channel which you could have used to fulfill routing requests along that edge in that direction.
Not only would you have paid routing fees for the rebalancing operation, you would also have depleted that channel more quickly and might face the need to rebalance again.
We hope that you are not discouraged at this moment.
Rebalancing is still a viable thing.
While proactive rebalancing increases the reliability of the network, it is currently not economically viable.
However you could rebalance reactively or Just in Time at the moment when necessary.
Imagine you have an incoming HTLC and the onion says you are supposed to forward the payment along a channel where you lack sufficient balance.
The standard behavior of the protocol would be to return the onion with an error and remove the incoming HTLC.
However, no one stops your node from briefly interrupting the routing process and conducting a rebalancing operation to provide itself with sufficient liquidity on the channel in question.
This method is called JIT-Routing as it helps nodes to reactively provide themselves with enough liquidity just in time.
The just-in-time routing scheme has two major advantages over plain source-based routing.
. It increases the privacy of channels. If nodes that do not have sufficient liquidity return the onions, an attacker can use that behavior to probe for channel balances. However, if nodes rebalance their channels, they will always be able to forward the payment and protect themselves from such probing attacks.
. More importantly, it resembles multipart payments in which the splitting of the payment is not decided by the sender - who does not know how the remote balances are distributed - but by the routing node, which knows its local topology.
Let us elaborate on the second point and take the example in which Bob was supposed to forward the onion from Alice to Wei but did not have enough liquidity on the channel with Wei.
If Bob now does a rebalancing operation through Erica and is afterwards able to forward the payment to Wei, he has effectively split the payment at his node to flow along two paths.
One part flows directly to Wei and the other part takes the path over Erica to Wei.
It is obvious that splitting a payment at the node that can't forward the entire payment is much more reliable and effective than letting the sender decide how to split a payment and into which amounts.
We thus can see that with the help of JIT-Routing rebalancing and multipart payments are actually not so different concepts and ideas.
There is another way in which multipart payments and rebalancing can be combined.
Let us recall that nodes should always aim to have similar channel balance coefficients.
So if a node wants to make a multipart payment, it could split the payment in such a way that it rebalances its channels.
That is, it would only pay from channels on which it currently has too much liquidity.
It would also use larger parts for the channels that have far too much liquidity and smaller parts for the channels that have only a little too much liquidity.
The optimal amounts can easily be computed as in the following code sketch, which assumes for simplicity that every channel has a capacity of 1 BTC; a tool and the original code can be found at https://github.com/lightningd/plugins/pull/83

[source, python]
----
# b: list of the node's current channel balances (one entry per channel)
# a: the amount the node wants to pay
new_funds = sum(b) - a             # total funds the node owns after the payment
cap = len(b)                       # total capacity, assuming 1 BTC per channel
nu = float(new_funds) / cap        # target node balance coefficient after paying

# how far each channel is above the target; only channels with a surplus are used
ris = [float(x) - nu for x in b]
real_ris = [x for x in ris if x > 0]
s = sum(real_ris)

# split the payment proportionally to each channel's surplus
payments = [a * x / s for x in real_ris]
----
In fact this multipath rebalancing could also be utilized in the process of JIT routing.
Instead of shifting all the funds from one channel to the destination channel a node could use a circular multipart payment.
* (proactive / reactive) Rebalancing
* Imbalance measures
* goals for rebalancing (low Gini coefficient and not 50 / 50)
* optimization problem / game theory
* JIT Routing
==== Optimizations for Multipath payments
The rebalancing goal with local channel balance coefficients could actually be integrated into multipath payments.
Thus if a node decides to send a payment along several paths, it could very well use this opportunity to split the payment in a way that improves the balance of its own channels.
So instead of splitting payments in half in a divide and conquer strategy, the node could use a formula such as the one shown in the code sketch above.
View File
@ -181,9 +181,11 @@ Following is an alphabetically sorted list of all the GitHub contributors, inclu
* Doru Muntean (@chriton)
* Eduardo Lima III (@elima-iii)
* Emilio Norrmann (@enorrmann)
* Francisco Calderón (@grunch)
* Haoyu Lin (@HAOYUatHZ)
* Hatim Boufnichel (@boufni95)
* Imran Lorgat (@ImranLorgat)
* Julien Wendling (@trigger67)
* Kory Newton (@korynewton)
* Luigi (@gin)
* Patrick Lemke (@PatrickLemke)
View File
@ -40,7 +40,7 @@ _I (Alice) will give you (Bob) 10 golden coins if you pass them on to Wei_
While this contract is nice, in the real world Alice faces the issue that Bob might just breach the contract and hope not to get caught by law enforcement.
Even if Bob got caught by law enforcement Alice faces the risk that he might be bankrupt and her 10 golden coins would be gone anyway.
Assuming these issues are magically solved, it would still be unclear from a contract point of view that Wei also has to have a contract with Gloria to deliver the coins.
Thus we improove our contract:
Thus we improve our contract:
_I (Alice) will reimburse you (Bob) with 10 golden coins if you can prove to me (for example via a receipt) that you have already delivered 10 golden coins to Wei_
@ -64,23 +64,23 @@ We call this hash the payment hash.
In reality Gloria would come up with a large random number as a secret.
This is to be really secure and prevent others from guessing it.
But let us assume that in our case Gloria's secret reads `*Glorias secret*`.
She would commit to the secret by computing the sha256 hash which reads `*70c87220dd901a004804b49e9ec2fd73283fad127cf112fefa67e6b79b8739b7*`.
You can verify this by typing `echo "Glorias secret" | sha256sum` to your linux command line.
She would commit to the secret by computing the sha256 hash which reads `*f23c83babfb0e5f001c5030cf2a06626f8a940af939c1c35bd4526e90f9759f5*`.
You can verify this by typing `echo -n "Glorias secret" | sha256sum` to your linux command line.
As Alice wants to send 10 golden coins to Gloria she is told by Gloria to use this payment hash to receive a proof of payment.
Alice now sets up a contract that reads:
_I (Alice) will reimburse you (Bob) with 12 golden coins if you can show me a valid message - we call it preimage - that hashes to `*70c87220dd901a004804b49e9ec2fd73283fad127cf112fefa67e6b79b8739b7*`. You can acquire this message by setting up a similar Contract with Wei who has to set up a similar contract with Gloria. In order to assure you that you will get reimbursed I will provide the 12 Golden coins to an trusted escrow before you set up your next contract._
_I (Alice) will reimburse you (Bob) with 12 golden coins if you can show me a valid message - we call it preimage - that hashes to `*f23c83babfb0e5f001c5030cf2a06626f8a940af939c1c35bd4526e90f9759f5*`. You can acquire this message by setting up a similar Contract with Wei who has to set up a similar contract with Gloria. In order to assure you that you will get reimbursed I will provide the 12 Golden coins to an trusted escrow before you set up your next contract._
After Bob and Alice agree to the contract and Bob receives the message from the escrow that Alice has deposited the 12 golden coins Bob negotiates a very similar contract with Wei.
Note that due to the service fees he will only forward 11 golden coins to Wei and demand from Wei who also wants to earn a fee of 1 golden coin to show proof that 10 golden coins have been delivered to Gloria.
_I (Bob) will reimburse you (Wei) with 11 golden coins if you can show me a valid message - we call it preimage - that hashes to `*70c87220dd901a004804b49e9ec2fd73283fad127cf112fefa67e6b79b8739b7*`. You can acquire this message by setting up a similar contract with Gloria. In order to assure you that you will get reimbursed I will provide the 11 Golden coins to an trusted escrow before you set up your next contract._
_I (Bob) will reimburse you (Wei) with 11 golden coins if you can show me a valid message - we call it preimage - that hashes to `*f23c83babfb0e5f001c5030cf2a06626f8a940af939c1c35bd4526e90f9759f5*`. You can acquire this message by setting up a similar contract with Gloria. In order to assure you that you will get reimbursed I will provide the 11 Golden coins to an trusted escrow before you set up your next contract._
As Wei gets the message from the escrow that Bob has deposited the 11 golden coins, Wei sets up a similar contract with Gloria:
_I (Wei) will reimburse you (Gloria) with 10 golden coins if you can show me a valid message - we call it preimage - that hashes to `*70c87220dd901a004804b49e9ec2fd73283fad127cf112fefa67e6b79b8739b7*`. In order to assure you that you will get reimbursed after revealing the secret I will provide the 10 Golden coins to an trusted escrow._
_I (Wei) will reimburse you (Gloria) with 10 golden coins if you can show me a valid message - we call it preimage - that hashes to `*f23c83babfb0e5f001c5030cf2a06626f8a940af939c1c35bd4526e90f9759f5*`. In order to assure you that you will get reimbursed after revealing the secret I will provide the 10 Golden coins to an trusted escrow._
As Gloria learns from the escrow that the coins were deposited, she reveals the secret preimage to Wei.
Since she initially came up with the secret and committed to it in the form of the payment hash, she obviously is able to provide the secret to Wei and their escrow service.
@ -167,8 +167,8 @@ cltv stands for OP_CHECKTIMELOCKVERIFY and is the OP_CODE that will be used in t
Finally in the last data field there are 1336 Bytes of data included which is an `onion routing packet`.
The format of this packet will be discussed in the last section of this chapter.
For now it is important to note that it includes encrypted routing hints and information about the payment path that can only be partially decrypted by the recipient of the onion routing packet, to extract information about whom to forward the payment to or to learn that one is the final recipient.
In any case the onion roting packet is always of the same size preventing the possability to guess the position of an intermediary node within a path.
In our particular case Bob will be able to decrypt the first couple bytes of the onion routing packet and learn that the payment is not to be forwored but intendet to be for him.
In any case the onion roting packet is always of the same size preventing the possibility to guess the position of an intermediary node within a path.
In our particular case Bob will be able to decrypt the first couple bytes of the onion routing packet and learn that the payment is not to be forwarded but intended to be for him.
The received information is enough for Bob to create a new commitment transaction.
This commitment transaction now has not only 2 outputs encoding the balance between Alice and Bob but a third output which encodes the hashed time locked contract.
@ -178,10 +178,10 @@ This commitment transaction now has not only 2 outputs encoding the balance betw
image:images/routing-setup-htlc-1.png[]
We can see that Bob Assumes that Alice will agree to lock 15 mBTC of her previous balance and assign it to the HTLC output.
Creating this HTLC output can be compared to giving Alices golden coins to the escrow service.
Creating this HTLC output can be compared to giving Alice's golden coins to the escrow service.
In our situation the Bitcoin network can enforce the HTLC that Bob and Alice have agreed upon.
Bob's balance has not changed yet.
In Bitcoin, outputs are mainly described by scripts.
The received HTLC in Bob's commitment transaction will use the following bitcoin script to define the output:
@ -205,11 +205,11 @@ The received HTLC in Bob's commitment transaction will use the following bitcoin
We can see that there are basically three conditions to claim the output.
1. Directly if a revocation key is known. This would happen if at a later state Bob fraudulently publishes this particular commitment transaction. As a newer state could only be agreed upon if Alice has learnt Bob's half of the revocation secret, she could directly claim the funds and keep them even if Bob was later able to provide a proof of payment. This is mainly described in the line `OP_DUP OP_HASH160 <RIPEMD160(SHA256(revocationpubkey))> OP_EQUAL` and can be done by using `<revocation_sig> <revocationpubkey>` as a witness script.
2. If Bob has successfully delivered the payment and learnt the preimage he can spend the HTLC output with the help of the preimage and his `local_HTLC_secret`. This is to make sure that only Bob can spend this output if the commitment transaction hits the chain, and not any other third party who might know the preimage because they had been included in the routing process. Claiming this output requires an HTLC-success transaction which we describe later.
3. Finally Alice can use her `remote_HTLC_secret` to spend the HTLC output after the timeout of `cltv_expiry` has passed, by using the following witness script `<remoteHTLCsig> 0`.
As the commitment transaction spends the 2-of-2 multisig funding transaction, Bob needs two signatures after he has constructed this commitment transaction.
He can obviously compute his own signature but he also needs the signature from Alice.
As Alice initiated the payment and wanted the HTLC to be set up, she will not be reluctant to provide her signature.
@ -234,10 +234,10 @@ At this time he would be able to publish either the old one or the new one witho
However this is safe for Alice, as Bob has less money in this old state and is not economically incentivized to publish the old commitment transaction.
Alice on the other side has no problem if Bob publishes the new commitment transaction as she wanted to send him money.
If Bob can provide the preimage he is by their agreement and expectation entitled to claim the HTLC output.
Should Bob decide to sabotage the future steps of the protocol, Alice can simply publish her commitment transaction without Bob being able to punish her.
He will just not have received the funds from Alice.
This is important!
Despite the fact that Bob has a new commitment transaction with two valid signatures and an HTLC output inside, he cannot consider his HTLC as being successfully set up.
He first needs to have Alice invalidate her old state.
That is why - in the case that he is not the final recipient of the funds - he should not forward the HTLC yet by setting up a new HTLC on the next channel with Wei.
Alice will not invalidate her commitment transaction yet as she has to first get her new commitment transaction and she wants Bob to invalidate his old commitment transaction which he can safely do at this time.
@ -252,13 +252,13 @@ The `revoke_and_ack` Lightning message contains three data fields.
* [`point`:`next_per_commitment_point`]
While this message is really simple and straightforward, it is crucial.
Bob shares the `per_commitment_secret` of the old commitment transaction, which serves as the revocation key and would allow Alice in the future to penalize Bob if he publishes the old commitment transaction without the HTLC output.
As Alice and Bob might want to negotiate additional commitment transactions in the future, he already shares back the `next_per_commitment_point` that he will use in his next commitment transaction.
Alice checks that the `per_commitment_secret` produces the last `per_commitment_point` and constructs her new commitment transaction with the HTLC output.
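
A hedged sketch of that check, using the Python `cryptography` package (the function name and byte handling are our own illustration; real Lightning implementations use their own crypto libraries): the revealed secret is interpreted as a private key, the corresponding public point is derived, and it must match the `per_commitment_point` received earlier.

[source,python]
----
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def secret_matches_point(per_commitment_secret: bytes, per_commitment_point: bytes) -> bool:
    # Re-derive the public point from the revealed 32-byte secret ...
    private_key = ec.derive_private_key(
        int.from_bytes(per_commitment_secret, "big"), ec.SECP256K1())
    derived_point = private_key.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)
    # ... and compare it to the compressed point received in an earlier message.
    return derived_point == per_commitment_point
----

If the check fails, Alice must not treat the old state as revoked and should not proceed with the protocol.
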
Alice's version of the HTLC output is slightly different from the one that Bob had.
The reason is the asymmetry of the penalty based payment channel construction protocol.
Alice is offering in her commitment transaction an HTLC to the `remote` partner of the channel, while Bob is accepting an offered HTLC for himself, the `local` partner of the channel.
Thus the Bitcoin script is adapted slightly.
It is a very good exercise to go through both scripts and see where they differ.
You could also try to use Bob's HTLC output script to come up with Alice's and vice versa and check your result with the following script.
@ -279,7 +279,7 @@ You could also try to use Bob's HTLC output script to come up with Alice's and v
OP_ENDIF
OP_ENDIF
Bob can redeem the HTLC with `<remoteHTLCsig> <payment_preimage>` as the witness script, and in case the commitment transaction is revoked but published by Alice, Bob can trigger the penalty by spending this output immediately with the following witness script: `<revocation_sig> <revocationpubkey>`.
[[routing-setup-htlc-4]]
.Bob knows what Alice's commitment transaction will look like and sends over the necessary signatures.
@ -287,8 +287,8 @@ image:images/routing-setup-htlc-4.png[]
This process is completely symmetrical to the one where Alice sent her signatures for Bob's new commitment transaction.
Now Alice is the one having two valid commitment transactions.
Technically she can still abort the payment by publishing her old commitment transaction to the Bitcoin network.
No one would lose anything, as Bob knows that the contract is still being set up and is not fully set up yet.
This is a little bit different from how the situation would look in a real world scenario.
Recall Alice and Bob both have set up a new commitment transaction and have exchanged signatures.
In the real world one would argue that this contract is now valid.
@ -298,7 +298,7 @@ In the real world one would argue that this contract is now valid.
image:images/routing-setup-htlc-5.png[]
Now Bob and Alice both have a new commitment transaction with an additional HTLC output and we have achieved a major step towards updating a payment channel.
The new balance of Alice and Bob does not yet reflect that Alice has successfully sent 15 mBTC to Bob.
However the hashed time locked contracts are now set up in a way that makes secure settlement in exchange for the proof of payment possible.
This yields another round of communication with Lightning messages and setting up additional commitment transactions which in case of good cooperation remove the outstanding HTLCs.
Interestingly enough the `commitment_signed` and `revoke_and_ack` mechanism that we described to add an HTLC can be reused to update the commitment transaction.
@ -313,13 +313,13 @@ This message has the type 130 and only 3 data fields:
As with other messages, Bob uses the `channel_id` field to indicate for which channel he returns the preimage.
The HTLC that is being removed is identified by the same `id` that was used to set up the HTLC in the commitment transaction initially.
You might argue that Alice would not need to know the id of the HTLC for which Bob releases the preimage as the preimage and payment hash could be unique.
However with this design the protocol supports a payment channel having several HTLCs with the same preimage while settling only one of them.
One could argue that this does not make too much sense and it is good to be critical, but this is how the protocol is designed and what it supports.
Finally in the last field Bob provides the `payment_preimage` which Alice can check hashes to the payment hash.
[WARNING]
====
When designing, implementing or studying a protocol one should ask: is it safe to do this or that at this moment of the protocol, and can it be abused? We discussed for example the messages that were necessary for an HTLC to become valid. We pointed out that Bob should not see the received HTLC as valid even though he already has a new commitment transaction with signatures and has invalidated his old commitment transaction before Alice also revoked her old commitment transaction. We also saw that no one is able to mess with the protocol of setting up a commitment transaction, as in the worst case the protocol could be aborted and any dispute could be resolved by the Bitcoin network. In the same way we should ask ourselves: is it safe for Bob to just send out and release the preimage even though neither he nor Alice have created the new pair of commitment transactions in which the HTLCs are removed? It is important to take a short break and ask yourself whether Bob will in any case be able to claim the funds from the HTLC if the preimage is correct.
====
It is safe for Bob to tell Alice the preimage.
@ -337,7 +337,7 @@ Isn't it remarkable that even though the process of exchanging funds for an prei
=== Source based Onion Routing
So far you have learnt that payment channels can be connected to a network which can be utilized to send payments from one participant to another through a path of payment channels.
You have seen that with the use of HTLCs the intermediary nodes along the path are not able to steal any funds that they are supposed to forward, and you have also learnt how a node can set up and settle an HTLC.
While this is all great it leaves a couple of questions open:
- Who chooses the path?
@ -347,58 +347,58 @@ While this is all great it leaves a couple of questions open:
The short answer to the first question is that only the sender decides which path to choose.
Despite the fact that the Lightning Network is currently running, the second question is still not answered in an optimal way and has become a serious research topic.
For now we will only say that in the standard case the sender more or less randomly selects and tries paths of channels until it is possible to send the amount along a selected path.
With multipath payments the sender can split the amount and use the same strategy with multiple paths.
More details will be discussed in the advanced section about path finding.
There we explore and explain the current approach, which seems to work well enough most of the time.
You will also learn about potential improvements that are currently being researched in that chapter.
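
To make the trial-and-error strategy above concrete, here is a minimal Python sketch; `candidate_paths` and `send_along_path` are hypothetical stand-ins for the node's real path finding and HTLC machinery, not actual APIs of any implementation.

[source,python]
----
import random

def pay_with_trial_and_error(candidate_paths, amount_msat, send_along_path):
    # Naive strategy: shuffle the candidate paths and attempt them one by one.
    random.shuffle(candidate_paths)
    for path in candidate_paths:
        if send_along_path(path, amount_msat):  # placeholder for the real HTLC setup
            return path
    return None  # no single path worked; a multipath payment could split amount_msat instead
----
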
The short answer to the third question is that no other node in the network learns about this path.
Nodes along the path only learn on which channel they received a payment and on which channel they are supposed to forward it.
Neither do they know whether the peer on the receiving channel initiated the payment nor do they know whether the peer on the outgoing channel is the final recipient of the payment.
We expect you to be surprised that it is actually possible to create such an algorithm with modern cryptography.
This is why we will now devote quite some space to writing and discussing source based onion routing.
This technique is fundamentally different from the best effort routing approach that is implemented in the Internet Protocol.
Best effort routing is known to have poor privacy protection of the transferred data and needs end to end encryption on the upper layers to be secure.
As many upper layer protocols did not include end to end encryption, we learnt from the Snowden revelations that spying agencies have been massively collecting data that was transferred over the internet, together with metadata like the IP addresses of senders and recipients.
To get rid of these problems the Lightning Network utilizes source based onion routing built on the SPHINX mix format.
The SPHINX mix format was originally designed to allow email remailers to offer the possibility of sending an answer without creating the security threat of the remailer service being able to know who was communicating with whom.
In that sense and very roughly speaking the SPHINX Mix format can be compared with the onion routing that is well known from the TOR network.
[NOTE]
====
While the Lightning Network also uses an onion routing scheme it is actually very different from the onion routing scheme that is used in the TOR network.
The biggest difference is that TOR is being used for arbitrary data to be exchanged between two participants, whereas on the Lightning Network the main use case is to pay people and transfer data that encodes monetary value.
On the Lightning Network there is no analogy to the exit nodes of the TOR network, which on the TOR network are a security risk. Lightning users should still not get the impression that their data and information is perfectly secure. Knowing the announced fee rates and CLTV deltas, a node might be able to guess the destination of an onion.
In TOR the security can be compromised if all randomly chosen TOR hops are acting together. In Lightning the payment hash identifies a payment and thus not every node along the path needs to be compromised in order to attack the security.
On the TOR network nodes are basically connected via a full graph as every node could create an encrypted connection with every other node on top of the Internet Protocol almost instantaneously and at no cost. On the Lightning Network payments can only flow along existing payment channels. Removing and adding of those channels is a slow and expensive process as it requires onchain bitcoin transactions.
On the Lightning Network nodes might not be able to forward a payment package because they do not own enough funds on their side of the payment channel. On the other hand there are hardly any plausible reasons, other than the wish to act maliciously, why a TOR node might not be able to forward an onion.
Last but not least the Lightning Network can actually run on top of TOR.
This means that all connections of a node with its peers and the resulting communication will be obfuscated once more through the TOR network.
====
Let's stick to our example in which Alice still wants to tip Gloria and has decided to use the path via Bob and Wei.
We note that there might have been alternative paths from Alice to Gloria, but for now we will just assume it is this path that Alice has decided to use.
You have already learnt that Alice needs to set up an HTLC with Bob via an `update_add_htlc` message.
As discussed the `update_add_htlc` message contains a data field of 1366 Bytes in length that is the onion package.
This onion contains all the information about the path that Alice intends to use to send the payment to Gloria.
However Bob who receives the onion cannot read all the information about the path as most of the onion is hidden from him through a sequence of encryptions.
The name onion comes from the analogy to an onion that consists of several layers. In our case every layer corresponds to one round of encryption.
Each round of encryption uses different encryption keys.
They are chosen by Alice in a way that only the rightful recipient of an onion can peel off (decrypt) the top layer of the onion.
For example after Bob received the onion from Alice he will be able to decrypt the first layer and he will only see the information that he is supposed to forward the onion to Wei by setting up an HTLC with Wei.
The HTLC with Wei should use the same Payment Hash as the receiving HTLC from Alice.
The amount of the forwarded HTLC was specified in Bob's decrypted layer of the onion.
It will be slightly smaller than the amount of his incoming HTLC from Alice.
The difference between these two amounts has to be at least big enough to cover the routing fees that Bob's node announced earlier via the gossip protocol.
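
A hedged sketch of this check (the concrete numbers are made up; the two fee parameters are the ones a node announces in its `channel_update` gossip message):

[source,python]
----
def expected_fee_msat(amt_to_forward_msat: int, fee_base_msat: int,
                      fee_proportional_millionths: int) -> int:
    # Fee formula based on the values announced via the gossip protocol.
    return fee_base_msat + amt_to_forward_msat * fee_proportional_millionths // 1_000_000

# Hypothetical numbers: Bob receives 3,001,500 msat and is asked to forward 3,000,000 msat.
incoming_htlc_msat = 3_001_500
amt_to_forward_msat = 3_000_000
fee = expected_fee_msat(amt_to_forward_msat, fee_base_msat=1_000,
                        fee_proportional_millionths=100)
assert incoming_htlc_msat - amt_to_forward_msat >= fee  # Bob is willing to forward
----
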
In order to set up the HTLC Bob will modify the onion a little bit.
He removes the information that he could read from it and passes it along to Wei.
Wei in turn is only able to see that he is supposed to forward the package to Gloria.
Wei knows he received the onion from Bob but has no clue that it was actually Alice who initiated the onion in the first place.
In this way every participant is only able to peel off one layer of the onion by decrypting it.
Each participant will only learn the information it has to learn to fulfill the routing request.
For example Bob will only know that Alice offered him an HTLC and sent him an onion and that he is supposed to offer an HTLC to Wei and forward a slightly modified onion.
Bob does not know if Alice is the originator of this payment as she could also just have forwarded the payment to him.
Due to the layered encryption he cannot see the inside of Wei's and Gloria's layers.
@ -412,17 +412,17 @@ Let us now look at the construction of the Onion that Alice has to follow and at
The onion is a data structure that at every hop consists of four parts:
1. The version byte
2. The header consisting of a public key that can be used by the recipient to produce the shared secret for decrypting the outer layer and to derive the public key that has to be put in the header of the modified onion for the next recipient.
3. The payload
4. an authentication via an HMAC.
For now we will ignore how the public keys are derived and exchanged and focus on the payload of the onion.
Only the payload is actually encrypted and will be peeled off layer by layer.
The payload consists of a sequence of per hop data.
This data can come in two formats: the legacy one and the Type Length Value (TLV) format.
While the TLV format offers more flexibility, in both cases the routing information that is encoded into the onion is the same for every hop but the last.
On the last hop the TLV information departs from the legacy information as it allows including a preimage.
This is nice as it allows a payer to initiate a payment without the necessity of asking the payee for an invoice and payment hash first.
We will discuss this feature, called keysend, in a different chapter.
A node needs three pieces of information to forward the package:
@ -444,12 +444,12 @@ On the incoming HTLCs David should have seen that exact amount.
Usually this amount is intended to say how many satoshis should be forwarded.
Since the short channel id was set to zero in this particular case it is interpreted as the payment amount.
Finally the CLTV delta which David should use to forward the payment is also set to zero as David is the final hop.
These data fields consist of 20 Bytes.
The Lightning Network protocol actually allows storing 65 bytes of data in the onion for every hop:
- 1 Byte Realm which signals nodes how to decode the following 32 Bytes.
- 32 Byte for routing hints (20 of which we have already used).
- 32 Byte of a Hashed Message Authentication code.
Since the additional 12 Byte of data for the routing hints were not needed at this time they are set to zero.
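
Here is a hedged sketch of how those 65 bytes can be laid out for the final hop in the legacy format (the field names follow the text above; the concrete values are made up): one realm byte, an 8-byte short channel id (zero for the final hop), the 8-byte amount, a 4-byte CLTV value, 12 bytes of zero padding and the 32-byte HMAC.

[source,python]
----
import struct

def legacy_final_hop_payload(amt_to_forward_msat: int, hmac: bytes) -> bytes:
    realm = 0                      # 1 byte: tells the node how to parse the next 32 bytes
    short_channel_id = 0           # 8 bytes: zero signals "you are the final hop"
    outgoing_cltv_value = 0        # 4 bytes: zero for the final hop in this example
    padding = bytes(12)            # 12 bytes: unused routing hints, set to zero
    assert len(hmac) == 32         # 32 bytes: hashed message authentication code
    return (struct.pack(">BQQI", realm, short_channel_id,
                        amt_to_forward_msat, outgoing_cltv_value)
            + padding + hmac)

payload = legacy_final_hop_payload(3_000_000, bytes(32))
assert len(payload) == 65
----
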
In the next diagram we can see what the per hop payload for David looks like.
@ -460,7 +460,7 @@ image:images/routing-onion-2.png[]
One important feature to protect privacy is to make sure that onions are always of equal length, independent of their position along the payment path.
Thus onions are always expected to contain 20 entries of 65 Bytes with per hop data.
As David is the final recipient there is only reasonable data for 65 Bytes of the per hop data.
This is not a problem as the other 19 fields are filled with junk data.
You could also see this in the previous diagram.
@ -481,13 +481,13 @@ However others cannot derive the same shared secrete as they neither know Alice'
[NOTE]
====
Let `(d,D)` be the secret and public key of David and let `G` be the generator point of the elliptic curve, so that `D = d*G`.
Similarly let `(ek_d, EPK_D)` be the ephemeral key pair that Alice has generated for David, such that the public ephemeral key `EPK_D = ek_d*G`.
Alice computed the shared secret as `ss_d = ek_d*D`.
Using the definition of public keys this is the same as `ek_d*(d*G)=(ek_d*d)*G`.
Since multiplication with the generator point is a group homomorphism we can apply the law of associativity.
And because the secrets are just numbers modulo some prime we can change the order of the multiplication, resulting in `ss_d = (d*ek_d)*G`.
With the same argument as before we apply the law of associativity and apply the definition of public keys resulting in `(d*ek_d)*G = d*(ek_d*G) = d*EPK_D`.
We just saw why `ek_d*D = d*EPK_D = ss_d` and why Alice and David will be able to derive the same shared secret if Alice puts the ephemeral public key inside the onion.
====
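
The following toy Python sketch mirrors this argument with a deliberately minimal (not constant-time, not for production) implementation of secp256k1 point arithmetic; the key values are arbitrary examples chosen only to show that both sides end up with the same point.

[source,python]
----
# Toy secp256k1 arithmetic -- for illustration only, never use in production.
P  = 2**256 - 2**32 - 977                      # field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                            # point at infinity
    if a == b:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P)
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    result, addend = None, point               # double-and-add
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

d     = 0xD0D0D0D0                             # David's (toy) private key:  D = d*G
D     = scalar_mult(d, G)
ek_d  = 0xA11CEA11CE                           # Alice's ephemeral secret:   EPK_D = ek_d*G
EPK_D = scalar_mult(ek_d, G)

# Alice computes ek_d*D, David computes d*EPK_D -- both arrive at the same point ss_d.
assert scalar_mult(ek_d, D) == scalar_mult(d, EPK_D)
----
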
After the encrypted Onion for David is created Alice will create the next outer layer by creating the onion for Wei.
@ -511,7 +511,7 @@ image:images/routing-onion-4.png[]
We emphasize that Wei has no chance to decrypt the inner part of the onion.
However the information for Wei should also be protected from others.
Thus Alice conducts another ECDH.
This time with Wei's public key and an ephemeral keypair that she has generated particularly for Wei.
She uses the shared secret to encrypt the onion payload.
She would be able to construct the entire onion for Wei - which actually Bob does while he forwards the onion.
The Onion that Wei would receive can be seen in the following diagram:
@ -520,29 +520,29 @@ The Onion that Wei would receive can be seen in the following diagram:
.`per_hop` payload of Gloria's onion and the encrypted
image:images/routing-onion-5.png[]
Note that in the entire onion there will be Wei's ephemeral public key.
David's ephemeral public key is not stored anywhere in the onion.
Neither in the header, nor in the payload data.
However we have seen that David needed to have this key in the header of the Onion that he received.
Luckily the ephemeral key that Alice used for the ECDH with David can be derived from the ephemeral key that she used for Wei.
Thus after Wei decrypts his layer he can use the shared secret and his ephemeral public key to derive the ephemeral public key that David is supposed to use, and store it in the header of the onion that he forwards to David.
The exact process to generate the ephemeral keys for every hop will be explained at the very end of the chapter.
Similarly it is important to recognize that Alice removed data from the end of David's onion payload to create space for the per hop data in Wei's onion.
Thus when Wei has received his onion and removed his routing hints and per hop data, the onion would be too short, and he somehow needs to be able to append the 65 bytes of filler (junk) data in a way that the HMACs will still be valid.
This process of filler generation, as well as the process of deriving the ephemeral keys, is described at the end of this chapter.
What is important to know is that every hop can derive the ephemeral public key that is necessary for the next hop, and that the onions save space by always storing only one ephemeral key instead of all the keys for all the hops.
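
On the sender side this derivation boils down to scalar arithmetic. The following sketch (all example values are made up; it follows the idea of a blinding factor hashed from the current ephemeral public key and the shared secret, as described in BOLT 04) shows how Alice can compute the next ephemeral secret from the current one, so that each onion header only ever needs to carry a single ephemeral public key.

[source,python]
----
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def next_ephemeral_secret(current_secret: int, current_pubkey_compressed: bytes,
                          shared_secret: bytes) -> int:
    # The blinding factor binds the next key to the current hop's public data ...
    blinding = int.from_bytes(
        hashlib.sha256(current_pubkey_compressed + shared_secret).digest(), "big")
    # ... and the next ephemeral secret is the current one multiplied by it (mod N).
    return (current_secret * blinding) % N

# Made-up example values for illustration only.
ek_bob  = 0x1234567890ABCDEF
epk_bob = bytes.fromhex("02" + "11" * 32)   # compressed ephemeral public key in Bob's onion header
ss_bob  = hashlib.sha256(b"shared secret Alice<->Bob").digest()
ek_wei  = next_ephemeral_secret(ek_bob, epk_bob, ss_bob)
----

Each hop can perform the matching computation on the public key side, which is why the single key in the header is enough.
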
Finally after Alice has computed the encrypted version for Wei she will use the exact same process to compute the encrypted version for Bob.
For Bob's onion she actually computes the header and provides the ephemeral public key herself.
Note how Wei was still supposed to forward 3000 satoshis but how Bob was supposed to forward a different amount.
The difference is the routing fee for Wei.
Bob on the other hand will only forward the onion if the difference between the amount to forward and the HTLC that Alice sets up while transferring the onion to him is large enough to cover the fees that he would like to earn.
[NOTE]
====
We have not discussed the exact cryptographic algorithms and schemes that are being used to compute the ciphertext from the plain text.
Also we have not discussed how the HMACs are being computed at every step and how everything fits together while the Onions are always being truncated and modified on the outer layer.
If everything until here made perfect sense to you and you want to learn about those details we believe that you have all the necessary tools at hand to read BOLT 04 which is why we decided not to include all those technical details here in the book.
BOLT 04 is the open source specification of the onion routing scheme that is being used on the Lightning Network and a perfect resource for the missing details.
====
@ -551,15 +551,15 @@ TODO: everything from here on will most likely change and could even be redundan
Onions are being constructed from the inside to the outside.
As the inside of the onion is decrypted last it has to correspond to the recipient which in our case is Gloria.
As every layer of the onion is encrypted by Alice in such a way that only the respective recipient can decrypt their layer, Alice needs to come up with a sequence of encryption keys that she will use for each and every hop.
The main concept that is being used is the shared secret computation via an Elliptic Curve Diffie-Hellman key exchange (ECDH) between Alice and each of the hops.
However, for the recipients to be able to compute their shared secret they have to know a public key which they can use.
If Alice used the same private key for the computation of each of the shared secrets, she would have to send the same public key with the onion.
The different payments could then be linked together by an attacker; that is why she uses fresh ephemeral session keys.
Every layer of the onion has 32 Bytes of `per_hop` data.
This data is split into 4 data fields:
- The 8 Byte `short_channel_id` indicates on which channel the onion should be forwarded next
- The 8 Bytes `amt_to_forward` is a 64 Bit unsigned integer that encodes an amount in millisatoshi and indicates the amount that is supposed to be forwarded
@ -576,20 +576,20 @@ This data is split into 4 data filds
image:images/routing-onion-6.png[]
Interestingly enough Alice can construct the onion with different encryption keys for Bob, Wei and Gloria without the necessity to establish a peer connection with them.
She only needs a public key from each participant which is the public `node_id` of the lightning node and known to Alice.
As other nodes do, she has learnt about the existence of public payment channels and the public `node_id` of other participants via the gossip protocol, which we described in its own chapter.
In order to have a different encryption key for every layer, Alice produces a shared secret with each hop, using the public `node_id` of each node to conduct an Elliptic Curve Diffie-Hellman key exchange (ECDH).
She starts by generating a temporary session key.
This key will also be called the ephemeral key.
This private key multiplied with the generator Point of the Elliptic curve that is being used in Bitcoin produces a public key.
This happens in the same way that the node's public key is generated from the node's secret private key.
Alice could use this session key to conduct the Diffie-Hellman key exchange if she sent the public key with the onion.
However she wishes to use a different session key to conduct the Diffie-Hellman key exchange with each of the nodes along the path.
The reason is that a single session key reused at every hop would act as a common identifier for the payment, so colluding hops could trivially link their observations of the same payment together.
Yet she does not want to add a public key (which consumes quite some space) into every layer of the onion.
Luckily there is a nice deterministic way in which she can derive different session keys for every hop, execute the Diffie-Hellman key exchange, and allow the hops to use their shared secret to derive the next session public key.
Let's explore this in detail with the following example:
@ -608,13 +608,13 @@ However with that information nodes would know that Alice was the originator of
In the first part of the routing chapter you have learnt that payments securely flow through the network via a path of HTLCs.
You saw how a single HTLC is negotiated between two peers and added to the commitment transaction of each peer.
In the second part you have seen how the necessary information for setting up HTLCs along a path of hops is transferred via onions from the source to the destination, a mechanism that protects the privacy of payer and payee.
However there are quite a few challenges and things that may not go as expected.
This is why we want to discuss how errors are being handled and what users and developers should take into consideration.
Most importantly, it is absolutely necessary that you understand that once your node has sent out an onion on your behalf (most likely because you wanted to pay someone), everything that happens to the onion is out of your control.
* You cannot force nodes to forward the onion immediately.
* You cannot force nodes to send back an error if they cannot forward the onion because of missing liquidity or other reasons.
* You cannot be sure that the recipient has the preimage to the payment hash or releases it as soon as the HTLCs of the correct amount arrived.
@ -627,7 +627,7 @@ While our user experience is that most payments find a path and settle in far le
[NOTE]
====
There are ideas out there that might solve this issue to some degree by allowing the payer to abort a payment. You can find more about that under the terms `cancelable payments` or `stuckless payments`. However the proposals that exist only reverse the problem, as now the sender can misbehave and the recipient loses control. Another solution is to use many paths in a multipath payment, include some redundancy, and ignore the problem that a path takes longer to complete.
====
Despite these fundamental problems there are plausible situations in which the routing process fails and in which honest nodes can and should react.
@ -641,35 +641,35 @@ Some - but not all - of the reasons for errors could be:
* The recipient might not have issued an invoice and does not know the payment details.
* The amount of the final HTLC is too low and the recipient does not want to release the preimage.
If errors like those occur a node should send back a reply onion.
The reply onion will be encrypted at each hop with the same shared secrets that have been used to construct the onion or decrypt a layer.
These shared keys are all known to the originator of the payment.
The innermost onion contains the error message and an HMAC over the error message.
This process makes sure that the sender of the onion and recipient of the reply can be certain that the error really originated from the node that the error message indicates.
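
A hedged sketch of how such an authentication code can be computed with Python's standard `hmac` module (the shared secret and failure message are illustrative values; the constant label follows the key-derivation idea of the onion specification): a dedicated key is derived from the shared secret and then used to authenticate the failure message, so the original sender can later verify which hop produced the error.

[source,python]
----
import hashlib
import hmac
import os

shared_secret = os.urandom(32)   # the same secret used when the onion was built (illustrative)

# Derive a dedicated key for error authentication from the shared secret.
um_key = hmac.new(b"um", shared_secret, hashlib.sha256).digest()

failure_message = b"temporary_channel_failure"   # illustrative failure payload
mac = hmac.new(um_key, failure_message, hashlib.sha256).digest()

# The sender, knowing the same shared secret, recomputes and verifies the HMAC.
assert hmac.compare_digest(
    mac, hmac.new(um_key, failure_message, hashlib.sha256).digest())
----
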
Another important step in the process of handling errors is to abort the routing process.
We discussed that the sender of a payment cannot just remove the HTLC on the channel along which the sender sent the payment.
Recall for example the situation in which Alice sent an onion to Bob who set up an HTLC with Wei.
If Alice wanted to remove the HTLC with Bob this would put a financial risk on Bob.
He fears that his HTLC with Wei still might be fulfilled meaning that he could not claim the reimbursement from Alice.
Thus Bob would never agree to remove the HTLC with Alice unless he already has removed his HTLC with Wei.
If however the HTLC between Alice and Bob is set up and the HTLC between Bob and Wei is set up, but Wei encounters problems with forwarding the onion, Wei has more options than Alice.
While sending back the error onion to Bob, Wei can ask him to remove the HTLC.
Bob has no risk in removing the HTLC with Wei and Wei also has no risk as there is no downstream HTLC.
Removing an HTLC happens very similarly to adding one.
Due to the just presented argument only peers who have accepted an offered HTLC can initiate the removal of HTLCs.
In the case of errors peers signals that they wish to remove the HTLC by sending an `update_fail_htlc` or `update_fail_malformed_htlc` message.
These messages contain the id of an HTLC that should be removed in the next version of the commit transaction.
In the same handshake like process that was used to exchange `commitment_signed` and `revoke_and_ack` messages the new state and thus pair of commitment signatures has to be negotiated and agreed upon.
This also means that while the balance of a channel that was involved in a failed routing process will not have changed, at the end it will have negotiated two new commitment transactions.
Despite having the same balance it must not go back to the previous commitment transaction which did not include the HTLC, as this commitment transaction was revoked.
If it was used to force close the channel the channel partner would have the ability to create a penalty transaction and get all the funds.
==== Settling HTLCs
In the last section you understood the error cases that can happen with onion routing via the chain of HTLCs.
You have learnt how HTLCs are removed if there is an error.
Of course HTLCs also need to be removed and the balance needs to be updated if the chain of HTLCs was successfully set up to the destination and the preimage is being released.
Not surprisingly this process is initiated with another Lightning message called `update_fulfill_htlc`.
You will remember that HTLCs are set up and supposed to be removed with a new balance for the recipient in exchange for a secret `preimage`.
Recalling the complex protocol with `commitment_signed` and `revoke_and_ack` messages you might wonder how to make this exchange `preimage` for new state atomic.
The cool thing is it doesn't have to be.
Once a channel partner with an accepted incoming HTLC knows the preimage, they can safely just pass it along to the channel partner.
@ -677,28 +677,28 @@ That is why the `update_fulfill_htlc` message contains only the `channel_id` the
You might wonder whether the channel partner could now refuse to sign a new channel state with `commitment_signed` and `revoke_and_ack` messages.
This is not a problem though.
In that case the recipient of the offered HTLC can just go on chain by force closing the channel.
Once that has happened the preimage can be used to claim the HTLC output.
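
A minimal sketch of the check a node should perform when it receives such an `update_fulfill_htlc` message (the bookkeeping dictionary and function name are our own illustration, not a real API): hash the received preimage and compare it to the payment hash stored for that HTLC id before removing the HTLC.

[source,python]
----
import hashlib

def accept_update_fulfill_htlc(outstanding_htlcs: dict, channel_id: bytes,
                               htlc_id: int, payment_preimage: bytes) -> None:
    # outstanding_htlcs maps (channel_id, htlc_id) -> expected payment hash.
    expected_hash = outstanding_htlcs[(channel_id, htlc_id)]
    if hashlib.sha256(payment_preimage).digest() != expected_hash:
        raise ValueError("preimage does not hash to the HTLC's payment hash")
    # Safe to remove the HTLC and credit the new balance in the next commitment transaction.
    del outstanding_htlcs[(channel_id, htlc_id)]
----
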
==== Some Considerations for routing nodes
Accepting an HTLC removes funds from a peer's disposable balance, which the peer cannot utilize until the HTLC is removed due to success or failure.
Similarly, forwarding an HTLC binds some funds from your node's payment channel until the HTLC is removed again.
As we explained at the very beginning of the chapter, engaging in the forwarding process of HTLCs neither carries a direct risk of losing funds nor a chance of gaining funds.
However the funds in jeopardy could be locked for some time.
In the worst case the routing process needs to be resolved on chain as the payment channel was forced close due to some other circumstances.
In that case outstanding HTLCs produce an additional onchain footprint and costs.
Thus there are two small economic risks involved with the participation in the routing process.
. Higher onchain fees in case of forced channel closes due to the higher footprint of HTLCs
. Opportunity costs of locked funds. While the HTLC is active the funds cannot be used otherwise.
In economics and financial mathematics the idea of paying another person who takes on a risk is widespread and seems reasonable.
Owners of routing nodes might want to monitor their routing behavior and opportunities and compare them to the onchain costs and the opportunity costs, in order to compute the routing fees that they wish to charge for accepting and forwarding HTLCs.
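
As a back-of-the-envelope sketch of that reasoning (the pricing model and all numbers are assumptions, not a recommendation), a node operator could ask what fee at least compensates the opportunity cost of the capital locked in an HTLC for its expected lifetime:

[source,python]
----
def minimum_fee_msat(amount_msat: int, expected_lock_hours: float,
                     annual_opportunity_cost_rate: float) -> int:
    # Charge at least what the locked capital would have earned elsewhere.
    hours_per_year = 24 * 365
    return int(amount_msat * annual_opportunity_cost_rate
               * expected_lock_hours / hours_per_year)

# Example: 3,000,000 msat locked for an expected 1 hour at a 5% annual rate.
print(minimum_fee_msat(3_000_000, expected_lock_hours=1,
                       annual_opportunity_cost_rate=0.05))
----
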
Also one should notice that HTLCs are outputs in the commitment transaction.
Lightning network protocol allows users to pay a single satoshi.
However it is impossible to set up HTLCs for this amount.
The reason is that the corresponding outputs in the commitment transaction would be below the dust limit.
Such cases are solved in practice with the following trick:
Instead of setting up an HTLC the amount is taken from the output of the sender but not added to the output of the recipient.
Thus the HTLCs which are below the dust limit can be understood as additional fees in the commitment transaction.
Most Lightning Nodes support the configuration of minimum accepted HTLC values.