mirror of
https://github.com/lnbook/lnbook
synced 2024-11-15 00:15:05 +00:00
[[set_up_a_lightning_node]]
== Lightning Node Software

As we have seen in previous chapters, a Lightning node is a computer system that participates in the Lightning Network. The Lightning Network is not a product or a company; it is a set of open standards that define a baseline for interoperability. As such, Lightning node software has been built by a variety of companies and community groups. The vast majority of Lightning software is _open source_, meaning that the source code is open and licensed in such a way as to enable collaboration, sharing, and community participation in the development process. Similarly, the Lightning node implementations we will present in this chapter are all open source and are collaboratively developed.

Unlike Bitcoin, where the standard is defined by a _reference implementation_ in software (Bitcoin Core), in Lightning the standard is defined by a series of standards documents called _Basis of Lightning Technology (BOLT)_, found in the _lightning-rfc_ repository at:

https://github.com/lightningnetwork/lightning-rfc

There is no reference implementation of the Lightning Network, but there are several competing, BOLT-compliant, and interoperable implementations developed by different teams and organizations. The teams that develop software for the Lightning Network also contribute to the development and evolution of the BOLT standards.

Another major difference between Lightning node software and Bitcoin node software is that Lightning nodes do not need to operate in "lockstep" with consensus rules and can have extended functionality beyond the baseline of the BOLTs. Therefore, different teams may pursue various experimental features that, if successful and broadly deployed, may become part of the BOLTs later.

In this chapter you will learn how to set up each of the software packages for the most popular Lightning node implementations. We've presented them in alphabetical order to emphasize that we generally do not prefer or endorse one over the others. Each has its strengths and weaknesses, and choosing one will depend on a variety of factors. Since they are developed in different programming languages (e.g., Go, C), your choice may also depend on your level of familiarity and expertise with a specific language and development toolset.

=== Lightning Development Environment

((("development environment", "setup")))If you're a developer, you will want to set up a development environment with all the tools, libraries, and support software for writing and running Lightning software. In this highly technical chapter, we'll walk through that process step by step. If the material becomes too dense, or you're not actually setting up a development environment, feel free to skip to the next chapter, which is less technical.

==== Using the command line

The examples in this chapter, and more broadly in most of this book, use a command-line terminal. That means you type commands into a terminal and receive text responses. Furthermore, the examples are demonstrated on an operating system based on the Linux kernel and GNU software system, specifically the latest long-term stable release of Ubuntu (Ubuntu 20.04 LTS). The majority of the examples can be replicated on other operating systems, such as Windows or Mac OS, with small modifications to the commands. The biggest difference between operating systems is the _package manager_, which installs the various software libraries and their prerequisites. In the given examples, we will use +apt+, the package manager for Ubuntu. On Mac OS, a common package manager used for open source development is Homebrew (command +brew+), found at https://brew.sh.

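You can check which package manager is available from the shell itself. The snippet below is a small sketch of our own (the +PKG+ variable name is illustrative, not a standard); it probes for +apt+ and +brew+ and falls back to "unknown":

```shell
# Probe for a known package manager; command -v prints a path only if found.
if command -v apt >/dev/null 2>&1; then
    PKG=apt
elif command -v brew >/dev/null 2>&1; then
    PKG=brew
else
    PKG=unknown
fi
echo "Package manager: $PKG"
```

On Ubuntu this prints +Package manager: apt+; on a Mac with Homebrew installed, +Package manager: brew+.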
In most of the examples here, we will be building the software directly from the source code. While this can be quite challenging, it gives us the most power and control. You may choose to use Docker containers, precompiled packages, or other installation mechanisms instead if you get stuck!

[TIP]
====
((("$ symbol")))((("shell commands")))((("terminal applications")))In many of the examples in this chapter we will be using the operating system's command-line interface (also known as a "shell"), accessed via a "terminal" application. The shell first displays a prompt as an indicator that it is ready for your command. You then type a command and press "Enter," to which the shell responds with some text and a new prompt for your next command. The prompt may look different on your system, but in the following examples it is denoted by a +$+ symbol. When you see text after a +$+ symbol, don't type the +$+ symbol; type the command immediately following it, then press Enter to execute it. In the examples, the lines below each command are the operating system's responses to that command. When you see the next +$+ prefix, you'll know it is a new command and you should repeat the process.
====

To keep things consistent, we use the +bash+ shell in all command-line examples. While other shells behave in a similar way, and you will be able to run most of the examples without it, some of the shell scripts are written specifically for the +bash+ shell and may require changes or customizations to run in another shell. For consistency, you can install the +bash+ shell on Windows and Mac OS; it comes installed by default on most Linux systems.

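A quick way to tell whether your current shell is +bash+ is to check the +BASH_VERSION+ variable, which only +bash+ sets. This is a sketch, not part of any example in the book:

```shell
# $BASH_VERSION is set only when the current interpreter is bash
if [ -n "${BASH_VERSION:-}" ]; then
    echo "bash detected: $BASH_VERSION"
else
    echo "not bash: some examples may need adjusting"
fi
```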
==== Downloading the book repository

All the code examples are available in the book's online repository. Because the repository will be kept up to date as much as possible, you should always look for the latest version in the online repository, instead of copying it from the printed book or the ebook.

You can download the repository as a ZIP bundle by visiting +github.com/lnbook/lnbook+ and selecting the green "Clone or Download" button on the right.

Alternatively, you can use the +git+ command to create a version-controlled clone of the repository on your local computer. Git is a distributed version control system that is used by most developers to collaborate on software development and track changes to software repositories. Download and install +git+ by following the instructions on https://git-scm.com/.

To make a local copy of the repository on your computer, run the +git+ command as follows:

[git-clone-lnbook]
----
$ git clone https://github.com/lnbook/lnbook.git
----

You now have a complete copy of the book repository in a folder called +lnbook+. All subsequent examples will assume that you are running commands from that folder.

=== Docker Containers

Many developers use a _container_, a type of virtual machine, to install a preconfigured operating system and applications with all the necessary dependencies. Much of the Lightning software can also be installed using a container system such as _Docker_ (command +docker+), found at https://docker.com. Container installations are a lot easier, especially for those who are not used to a command-line environment.

The book's repository contains a collection of Docker containers that can be used to set up a consistent development environment to practice and replicate the examples on any system. Because a container is a complete operating system that runs with a consistent configuration, you can be sure that the examples will work on your computer and need not worry about dependencies, library versions, or differences in configuration.

Docker containers are often optimized to be small, i.e., to occupy the minimum disk space. However, in this book we are using containers to _standardize_ the environment and make it consistent for all readers. Furthermore, these containers are not meant to be used to run services in the background. Instead, they are meant to be used to test the examples and learn by interacting with the software. For these reasons, the containers are quite large and come with a lot of development tools and utilities. Commonly, the Alpine distribution is used for Linux containers due to its reduced size. Nonetheless, we provide containers built on Ubuntu because more developers are familiar with Ubuntu, and this familiarity is more important to us than size.

You can find the latest container definitions and build configurations in the book's repository under the +code/docker+ folder. Each container is in a separate folder, as seen below:

[tree]
----
$ tree -F --charset=ascii code
----

[docker-dir-list]
----
code
`-- docker/
    |-- Makefile
    |-- bitcoind/
    |   |-- Dockerfile
    |   |-- bashrc
    |   |-- bitcoind/
    |   |   |-- bitcoin.conf
    |   |   `-- keys/
    |   |       |-- demo_address.txt
    |   |       |-- demo_mnemonic.txt
    |   |       `-- demo_privkey.txt
    |   |-- bitcoind-entrypoint.sh
    |   `-- mine.sh*
    |-- c-lightning/
    |   |-- Dockerfile
    |   |-- bashrc
    |   |-- c-lightning-entrypoint.sh
    |   |-- fund-c-lightning.sh
    |   |-- lightningd/
    |   |   `-- config
    |   |-- logtail.sh
    |   `-- wait-for-bitcoind.sh
    |-- docker-compose.yml
    |-- eclair/
    |   |-- Dockerfile
    |   |-- bashrc
    |   |-- eclair/
    |   |   `-- eclair.conf
    |   |-- eclair-entrypoint.sh
    |   |-- logtail.sh
    |   `-- wait-for-bitcoind.sh
    |-- lnd/
    |   |-- Dockerfile
    |   |-- bashrc
    |   |-- fund-lnd.sh
    |   |-- lnd/
    |   |   `-- lnd.conf
    |   |-- lnd-entrypoint.sh
    |   |-- logtail.sh
    |   `-- wait-for-bitcoind.sh
    `-- setup-channels.sh
----

==== Installing Docker

Before we begin, you should install the Docker container system on your computer. Docker is an open system that is distributed for free as a _Community Edition_ for many different operating systems, including Windows, Mac OS, and Linux. The Windows and Mac versions are called _Docker Desktop_ and consist of a GUI desktop application and command-line tools. The Linux version is called _Docker Engine_ and comprises a server daemon and command-line tools. We will be using the command-line tools, which are identical across all platforms.

Go ahead and install Docker for your operating system by following the instructions to _"Get Docker"_ from the Docker website found here:

https://docs.docker.com/get-docker/

Select your operating system from the list and follow the installation instructions.

[TIP]
====
If you install on Linux, follow the post-installation instructions to ensure you can run Docker as a regular user instead of as user _root_. Otherwise, you will need to prefix all +docker+ commands with +sudo+, running them as root, like: +sudo docker+.
====

Once you have Docker installed, you can test your installation by running the demo container +hello-world+, like this:

[docker-hello-world]
----
$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

[...]
----

==== Basic docker commands

In this chapter, we use +docker+ quite extensively. We will be using the following +docker+ commands and arguments:

*Building a container*

----
docker build [-t tag] [directory]
----

...where +tag+ is how we identify the container we are building, and +directory+ is where the container's "context" (folders and files) and definition file (+Dockerfile+) are found.

*Running a container*

----
docker run -it [--network netname] [--name cname] tag
----

...where +netname+ is the name of a Docker network, +cname+ is the name we choose for this container instance, and +tag+ is the name tag we gave the container when we built it.

*Executing a command in a container*

----
docker exec cname command
----

...where +cname+ is the name we gave the container in the +run+ command, and +command+ is an executable or script that we want to run inside the container.

*Stopping and starting a container*

In most cases, if we are running a container in _interactive_ as well as _terminal_ mode, i.e., with the +i+ and +t+ flags (combined as +-it+) set, the container can be stopped by simply pressing +CTRL-C+, or by exiting the shell with +exit+ or +CTRL-D+. If a container does not terminate, you can stop it from another terminal, like this:

----
docker stop cname
----

To resume an already existing container, use the +start+ command, like so:

----
docker start cname
----

*Deleting a container by name*

If you name a container instead of letting Docker name it randomly, you cannot reuse that name until the container is deleted. Docker will return an error like this:

[source,bash]
----
docker: Error response from daemon: Conflict. The container name "/bitcoind" is already in use...
----

To fix this, delete the existing instance of the container:

----
docker rm cname
----

...where +cname+ is the name assigned to the container (+bitcoind+ in the example error message).

*Listing running containers*

----
docker ps
----

These basic Docker commands are enough to get you started and will allow you to run all the examples in this chapter. Let's see them in action in the following sections.

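The stop-then-delete sequence is common enough that you may want a small helper in your shell profile. The sketch below is our own convenience wrapper, not part of Docker; +reset_container+ is a name we made up:

```shell
# Stop a container if it is running, then remove it so its name can be reused.
# Errors are ignored: the container may already be stopped, or not exist at all.
reset_container() {
    cname="$1"
    docker stop "$cname" >/dev/null 2>&1 || true
    docker rm   "$cname" >/dev/null 2>&1 || true
    echo "container $cname removed"
}
```

After +reset_container bitcoind+, a subsequent +docker run --name bitcoind ...+ will no longer fail with a name conflict.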
=== Bitcoin Core and Regtest

Most Lightning node implementations need access to a full Bitcoin node in order to work.

Installing a full Bitcoin node and syncing the Bitcoin blockchain is outside the scope of this book and is a relatively complex endeavor in itself. If you want to try it, refer to _Mastering Bitcoin_ (https://github.com/bitcoinbook/bitcoinbook), "Chapter 3: Bitcoin Core: The Reference Implementation," which discusses the installation and operation of a Bitcoin node.

A Bitcoin node can be operated in _regtest_ mode, in which the node creates a local simulated Bitcoin blockchain for testing purposes. In the following examples, we will use +regtest+ mode to demonstrate Lightning without having to synchronize a Bitcoin node or risk any funds.

The container for Bitcoin Core is +bitcoind+. It is configured to run Bitcoin Core in +regtest+ mode and to mine a new block every 10 seconds. Its RPC port is exposed on port 18443 and accessible for RPC calls with the username +regtest+ and the password +regtest+. You can also access it with an interactive shell and run +bitcoin-cli+ commands locally.

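Because the RPC port is exposed, you could also reach the node from the host over JSON-RPC. The sketch below assumes port 18443 is published to +localhost+ and uses the +regtest+/+regtest+ credentials mentioned above; +rpc_call+ is our own helper name, not a standard tool:

```shell
# Call bitcoind's JSON-RPC interface from the host with curl.
rpc_call() {
    method="$1"
    curl -s --user regtest:regtest \
        --data "{\"jsonrpc\":\"1.0\",\"id\":\"lnbook\",\"method\":\"$method\",\"params\":[]}" \
        http://localhost:18443/
}
```

For example, +rpc_call getblockcount+ would return the current block height as a JSON response.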
===== Building the Bitcoin Core Container

Let's start by building and running the +bitcoind+ container. First, we use the +docker build+ command to build it:

[source,bash]
----
$ cd code/docker
$ docker build -t lnbook/bitcoind bitcoind
Sending build context to Docker daemon  12.29kB
Step 1/25 : FROM ubuntu:20.04 AS bitcoind-base
 ---> c3c304cb4f22
Step 2/25 : RUN apt update && apt install -yqq curl gosu jq bash-completion

[...]

Step 25/25 : CMD ["/usr/local/bin/mine.sh"]
 ---> Using cache
 ---> 758051998e72
Successfully built 758051998e72
Successfully tagged lnbook/bitcoind:latest
----

===== Running the Bitcoin Core Container

Next, let's run the +bitcoind+ container and have it mine some blocks. We use the +docker run+ command, with the flags for _interactive (i)_ and _terminal (t)_ mode, and the +name+ argument to give the running container a custom name:

[source,bash]
----
$ docker run -it --name bitcoind lnbook/bitcoind
Starting bitcoind...
Bitcoin Core starting
bitcoind started
================================================
Importing demo private key
Bitcoin address: 2NBKgwSWY5qEmfN2Br4WtMDGuamjpuUc5q1
Private key: cSaejkcWwU25jMweWEewRSsrVQq2FGTij1xjXv4x1XvxVRF1ZCr3
================================================

Mining 101 blocks to unlock some bitcoin
[
  "579311009cc4dcf9d4cc0bf720bf210bfb0b4950cdbda0670ff56f8856529b39",
...
  "33e0a6e811d6c49219ee848604cedceb0ab161485e1195b1f3576049e4d5deb7"
]
Mining 1 block every 10 seconds
[
  "5974aa6da1636013daeaf730b5772ae575104644b8d6fa034203d2bf9dc7a98b"
]
Balance: 100.00000000
----

As you can see, bitcoind starts up and mines 101 simulated blocks to get the chain started. This is because, under the Bitcoin consensus rules, newly mined bitcoin is not spendable until 100 additional blocks have elapsed. By mining 101 blocks, we make the first block's coinbase spendable. After that initial mining activity, a new block is mined every 10 seconds to keep the chain moving forward.

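The arithmetic behind "mine 101 blocks" can be expressed directly in the shell (the variable names here are illustrative, not part of any Bitcoin tool):

```shell
# A coinbase output can be spent only after 100 further blocks have been mined.
COINBASE_MATURITY=100
coinbase_height=1
spendable_height=$((coinbase_height + COINBASE_MATURITY))
echo "Block $coinbase_height coinbase is spendable at height $spendable_height"
# Prints: Block 1 coinbase is spendable at height 101
```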
For now, there are no transactions. But we have some test bitcoin that has been mined into the wallet and is available to spend. When we connect some Lightning nodes to this chain, we will send some bitcoin to their wallets so that we can open Lightning channels between the Lightning nodes.

===== Interacting with the Bitcoin Core Container

In the meantime, we can also interact with the +bitcoind+ container by sending it shell commands. The container streams a log to the terminal, displaying the output of the bitcoind mining process. To interact with its shell, we can issue commands in another terminal using the +docker exec+ command. Since we previously named the running container with the +name+ argument, we can refer to it by that name when we run the +docker exec+ command. First, let's run an interactive +bash+ shell:

----
$ docker exec -it bitcoind /bin/bash
root@e027fd56e31a:/bitcoind# ps x
  PID TTY      STAT   TIME COMMAND
    1 pts/0    Ss+    0:00 /bin/bash /usr/local/bin/mine.sh
    7 ?        Ssl    0:03 bitcoind -datadir=/bitcoind -daemon
   97 pts/1    Ss     0:00 /bin/bash
  124 pts/0    S+     0:00 sleep 10
  125 pts/1    R+     0:00 ps x
root@e027fd56e31a:/bitcoind#
----

Running the interactive shell puts us "inside" the container. It logs in as user +root+, as we can see from the prefix +root@+ in the new shell prompt +root@e027fd56e31a:/bitcoind#+. If we issue the +ps x+ command to see what processes are running, we see that both +bitcoind+ and the script +mine.sh+ are running in the background. To exit this shell, type +CTRL-D+ or +exit+ and you will be returned to your operating system prompt.

Instead of running an interactive shell, we can also issue a single command that is executed inside the container. In the following example, we run the +bitcoin-cli+ command to obtain information about the current blockchain state:

[source,bash]
----
$ docker exec bitcoind bitcoin-cli -datadir=/bitcoind getblockchaininfo
{
  "chain": "regtest",
  "blocks": 149,
  "headers": 149,
  "bestblockhash": "35e97bf507607be010be1daa10152e99535f7b0f9882d0e588c0037d8d9b0ba1",
  "difficulty": 4.656542373906925e-10,
[...]
  "warnings": ""
}
$
----

As you can see, we need to tell +bitcoin-cli+ where the bitcoind data directory is by using the +datadir+ argument. We can then issue RPC commands to the Bitcoin Core node and get JSON-encoded results.

All our Docker containers come with a command-line JSON encoder/decoder named +jq+ preinstalled. +jq+ helps us process JSON-formatted data on the command line or from inside scripts. You can send the JSON output of any command to +jq+ using the +|+ character. This character, as well as the operation itself, is called a "pipe." Let's apply a pipe and +jq+ to the previous command as follows:

[source,bash]
----
$ docker exec bitcoind bash -c "bitcoin-cli -datadir=/bitcoind getblockchaininfo | jq .blocks"
189
----

+jq .blocks+ instructs the +jq+ JSON decoder to extract the field +blocks+ from the +getblockchaininfo+ result. In our case, it extracts and prints the value 189, which we could use in a subsequent command.

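If +jq+ is not installed on your host, you can approximate the same field extraction with plain shell tools. This is only a sketch: the JSON string below is a stand-in for the real +getblockchaininfo+ response, and this kind of +sed+ parsing only works for simple one-line JSON (for anything real, use +jq+):

```shell
# Stand-in for the output of: bitcoin-cli getblockchaininfo
json='{"chain": "regtest", "blocks": 189, "headers": 189}'

# Extract the numeric "blocks" field; with jq this would be: jq .blocks
blocks=$(printf '%s\n' "$json" | sed -n 's/.*"blocks": \([0-9]*\).*/\1/p')
echo "$blocks"
# Prints: 189
```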
As you will see in the following sections, we can run several containers at the same time and then interact with them individually. We can issue commands to extract information, such as the Lightning node public key, or to take actions, such as opening a Lightning channel to another node. The +docker run+ and +docker exec+ commands, together with +jq+ for JSON decoding, are all we need to build a working Lightning Network that mixes many different node implementations. This enables us to try out diverse experiments on our own computer.

=== The c-lightning Lightning node project

C-lightning is a lightweight, highly customizable, and standard-compliant implementation of the Lightning Network protocol, developed by Blockstream as part of the Elements project. The project is open source and developed collaboratively on GitHub:

https://github.com/ElementsProject/lightning

In the following sections, we will build a Docker container that runs a c-lightning node connecting to the bitcoind container we built previously. We will also show you how to configure and build the c-lightning software directly from the source code.

==== Building c-lightning as a Docker container

The c-lightning software distribution includes a Docker container, but it is designed for running c-lightning in production systems and alongside a bitcoind node. We will be using a somewhat simpler container configured to run c-lightning for demonstration purposes.

We start by building the c-lightning Docker container from the book's files, which you previously downloaded into a directory named +lnbook+. As before, we will use the +docker build+ command in the +code/docker+ subdirectory. We will tag the container image with the tag +lnbook/c-lightning+, like this:

[source,bash]
----
$ cd code/docker
$ docker build -t lnbook/c-lightning c-lightning
Sending build context to Docker daemon  10.24kB
Step 1/21 : FROM lnbook/bitcoind AS c-lightning-base
 ---> 758051998e72
Step 2/21 : RUN apt update && apt install -yqq software-properties-common

[...]

Step 21/21 : CMD ["/usr/local/bin/logtail.sh"]
 ---> Using cache
 ---> e63f5aaa2b16
Successfully built e63f5aaa2b16
Successfully tagged lnbook/c-lightning:latest
----

Our container is now built and ready to run. However, before we run the c-lightning container, we need to start the bitcoind container in another terminal, as c-lightning depends on bitcoind. We will also need to set up a Docker network that allows the containers to connect to each other as if residing on the same local area network.

[TIP]
====
Docker containers can "talk" to each other over a virtual local area network managed by the Docker system. Each container can have a custom name, and other containers can use that name to resolve its IP address and easily connect to it.
====

==== Setting up a docker network

Once a Docker network is set up, Docker will activate the network on our local computer every time Docker starts, e.g., after rebooting. So we only need to set up a network once, using the +docker network create+ command. The network name itself is not important, but it has to be unique on our computer. By default, Docker has three networks, named +host+, +bridge+, and +none+. We will name our new network +lnbook+ and create it like this:

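If you script your setup, you may want to create the network only when it is missing, so the script can be re-run safely. A sketch (+ensure_network+ is our own helper name; +docker network ls --format+ with a Go template is a real Docker CLI feature):

```shell
# Create a docker network unless one with that name already exists.
ensure_network() {
    net="$1"
    if docker network ls --format '{{.Name}}' | grep -qx "$net"; then
        echo "network $net already exists"
    else
        docker network create "$net" >/dev/null && echo "network $net created"
    fi
}
```

Calling +ensure_network lnbook+ twice creates the network once and reports "already exists" the second time.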
----
$ docker network create lnbook
ad75c0e4f87e5917823187febedfc0d7978235ae3e88eca63abe7e0b5ee81bfb
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7f1fb63877ea        bridge              bridge              local
4e575cba0036        host                host                local
ad75c0e4f87e        lnbook              bridge              local
ee8824567c95        none                null                local
----

As you can see, running +docker network ls+ gives us a listing of the Docker networks. Our +lnbook+ network has been created. We can ignore the network ID, as it is automatically managed.

==== Running the bitcoind and c-lightning containers

The next step is to start the bitcoind and c-lightning containers and connect them to the +lnbook+ network. To run a container in a specific network, we must pass the +network+ argument to +docker run+. To make it easy for containers to find each other, we will also give each one a name with the +name+ argument. We start bitcoind like this:

----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----

You should see bitcoind start up and begin mining blocks every 10 seconds. Leave it running and open a new terminal window to start c-lightning. We use a similar +docker run+ command, with the +network+ and +name+ arguments, to start c-lightning as follows:

[source,bash]
----
$ docker run -it --network lnbook --name c-lightning lnbook/c-lightning
Waiting for bitcoind to start...
Waiting for bitcoind to mine blocks...
Starting c-lightning...
[...]
Startup complete
Funding c-lightning wallet
{"result":"e1a392ce2c6af57f8ef1550ccb9a120c14b454da3a073f556b55dc41592621bb","error":null,"id":"c-lightning-container"}
[...]
2020-06-22T14:26:09.802Z DEBUG lightningd: Opened log file /lightningd/lightningd.log
----

The c-lightning container starts up and connects to the bitcoind container over the Docker network. First, our c-lightning node waits for bitcoind to start, and then it waits until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script sends an RPC command to the bitcoind node, creating a transaction that funds the c-lightning wallet with 10 test BTC. Now our c-lightning node is not only running, but it even has some test bitcoin to play with!

As we demonstrated with the bitcoind container, we can issue commands to our c-lightning container in another terminal in order to extract information, open channels, etc. The command that allows us to issue command-line instructions to the c-lightning node is called +lightning-cli+. To get the node info, use the following +docker exec+ command in another terminal window:

[source,bash]
----
$ docker exec c-lightning lightning-cli getinfo
{
   "id": "025656e4ef0627bc87638927b8ad58a0e07e8d8d6e84a5699a5eb27b736d94989b",
   "alias": "HAPPYWALK",
   "color": "025656",
   "num_peers": 0,
   "num_pending_channels": 0,
   "num_active_channels": 0,
   "num_inactive_channels": 0,
   "address": [],
   "binding": [
      {
         "type": "ipv6",
         "address": "::",
         "port": 9735
      },
      {
         "type": "ipv4",
         "address": "0.0.0.0",
         "port": 9735
      }
   ],
   "version": "0.8.2.1",
   "blockheight": 140,
   "network": "regtest",
   "msatoshi_fees_collected": 0,
   "fees_collected_msat": "0msat",
   "lightning-dir": "/lightningd/regtest"
}
----

We now have our first Lightning node running on a virtual network and communicating with a test bitcoin blockchain. Later in this chapter we will start more nodes and connect them to each other to make some Lightning payments.

In the next section we will also look at how to download, configure, and compile c-lightning directly from the source code. This is an optional and advanced step that will teach you how to use the build tools and allow you to make modifications to the c-lightning source code. With this knowledge, you can write some code, fix some bugs, or create a plugin for c-lightning. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The Docker container we just built is sufficient for most of the examples in the book.

==== Installing c-lightning from source code

The c-lightning developers have provided detailed instructions for building c-lightning from source code. We will be following the instructions here:

https://github.com/ElementsProject/lightning/blob/master/doc/INSTALL.md

==== Installing prerequisite libraries and packages

The common first step is the installation of prerequisite libraries. We use the +apt+ package manager to install these:

[source,bash]
----
$ sudo apt-get update

Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:2 http://eu-north-1b.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://eu-north-1b.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]

[...]

Fetched 18.3 MB in 8s (2,180 kB/s)
Reading package lists... Done

$ sudo apt-get install -y \
    autoconf automake build-essential git libtool libgmp-dev \
    libsqlite3-dev python python3 python3-mako net-tools zlib1g-dev \
    libsodium-dev gettext

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  autotools-dev binutils binutils-common binutils-x86-64-linux-gnu cpp cpp-7 dpkg-dev fakeroot g++ g++-7 gcc gcc-7 gcc-7-base libalgorithm-diff-perl

[...]

Setting up libsigsegv2:amd64 (2.12-2) ...
Setting up libltdl-dev:amd64 (2.4.6-14) ...
Setting up python2 (2.7.17-2ubuntu4) ...
Setting up libsodium-dev:amd64 (1.0.18-1) ...

[...]
$
----

After a few minutes and a lot of on-screen activity, you will have installed all the necessary packages and libraries. Many of these libraries are also used by other Lightning packages and are needed for software development in general.

==== Copying the c-lightning source code

Next, we will copy the latest version of c-lightning from the source code repository. To do this, we will use the +git clone+ command, which clones a version-controlled copy onto your local machine, allowing you to keep it synchronized with subsequent changes without having to download the whole repository again:

[source,bash]
----
$ git clone https://github.com/ElementsProject/lightning.git
Cloning into 'lightning'...
remote: Enumerating objects: 24, done.
remote: Counting objects: 100% (24/24), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 53192 (delta 5), reused 5 (delta 2), pack-reused 53168
Receiving objects: 100% (53192/53192), 29.59 MiB | 19.30 MiB/s, done.
Resolving deltas: 100% (39834/39834), done.

$ cd lightning
----

We now have a copy of c-lightning cloned into the +lightning+ subfolder, and we have used the +cd+ (change directory) command to enter that subfolder.

==== Compiling the c-lightning source code

Next, we use a set of _build scripts_ that are commonly available in many open source projects. These build scripts use the +configure+ and +make+ commands, which allow us to:

* Select the build options and check necessary dependencies (+configure+).
* Build and install the executables and libraries (+make+).

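Put together, the whole build-from-source sequence looks like the sketch below. We wrap the three commands in a function for reference only; you can equally type them one at a time, as the following sections do:

```shell
# Build c-lightning from a cloned source tree (run inside the lightning/ folder).
build_c_lightning() {
    ./configure &&      # check dependencies and select default build options
    make &&             # compile libraries, components, and executables (slow)
    sudo make install   # install lightningd, lightning-cli, and plugins
}
```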
Running +configure+ with the +help+ option shows us all the available options:

[source,bash]
|
|
----
|
|
$ ./configure --help
|
|
Usage: ./configure [--reconfigure] [setting=value] [options]
|
|
|
|
Options include:
|
|
--prefix= (default /usr/local)
|
|
Prefix for make install
|
|
--enable/disable-developer (default disable)
|
|
Developer mode, good for testing
|
|
--enable/disable-experimental-features (default disable)
|
|
Enable experimental features
|
|
--enable/disable-compat (default enable)
|
|
Compatibility mode, good to disable to see if your software breaks
|
|
--enable/disable-valgrind (default (autodetect))
|
|
Run tests with Valgrind
|
|
--enable/disable-static (default disable)
|
|
Static link sqlite3, gmp and zlib libraries
|
|
--enable/disable-address-sanitizer (default disable)
|
|
Compile with address-sanitizer
|
|
----
|
|
|
|
We don't need to change any of the defaults for this example. Hence we run +configure+ again without any options to use the defaults:

----
$ ./configure

Compiling ccan/tools/configurator/configurator...done
checking for python3-mako... found
Making autoconf users comfortable... yes
checking for off_t is 32 bits... no
checking for __alignof__ support... yes

[...]

Setting COMPAT... 1
PYTEST not found
Setting STATIC... 0
Setting ASAN... 0
Setting TEST_NETWORK... regtest
$
----

Next, we use the +make+ command to build the libraries, components, and executables of the c-lightning project. This part will take several minutes to complete and will use your computer's CPU and disk heavily. Expect some noise from the fans! Run +make+:

[source,bash]
----
$ make

cc -DBINTOPKGLIBEXECDIR="\"../libexec/c-lightning\"" -Wall -Wundef -Wmis...

[...]

cc -Og ccan-asort.o ccan-autodata.o ccan-bitmap.o ccan-bitops.o ccan-...
----
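
If your machine has multiple CPU cores, the build can optionally be parallelized. This is a generic GNU make technique, not something specific to c-lightning; +nproc+ is part of GNU coreutils:

```shell
# Run make with one job per available CPU core.
# nproc prints the number of processing units available.
JOBS=$(nproc)
echo "Building with $JOBS parallel jobs"
# make -j "$JOBS"    # same as plain "make", but parallelized
```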

If all goes well, you will not see any +ERROR+ message stopping the execution of the above command. The c-lightning software package has been compiled from source and we are now ready to install the executable components we created in the previous step:

----
$ sudo make install
mkdir -p /usr/local/bin
mkdir -p /usr/local/libexec/c-lightning
mkdir -p /usr/local/libexec/c-lightning/plugins
mkdir -p /usr/local/share/man/man1
mkdir -p /usr/local/share/man/man5
mkdir -p /usr/local/share/man/man7
mkdir -p /usr/local/share/man/man8
mkdir -p /usr/local/share/doc/c-lightning
install cli/lightning-cli lightningd/lightningd /usr/local/bin
[...]
----

In order to verify that the +lightningd+ and +lightning-cli+ commands have been installed correctly we will ask each executable for its version information:

----
$ lightningd --version
v0.8.1rc2
$ lightning-cli --version
v0.8.1rc2
----

You may see a different version from that shown above as the software continues to evolve long after this book is published. However, no matter what version you see, the fact that the commands execute and respond with version information means that you have succeeded in building the c-lightning software.

=== The Lightning Network Daemon (LND) node project

The Lightning Network Daemon (LND) is a complete implementation of a Lightning Network node by Lightning Labs. The LND project provides a number of executable applications, including +lnd+ (the daemon itself) and +lncli+ (the command-line utility). LND has several pluggable back-end chain services, including btcd (a full node), bitcoind (Bitcoin Core), and neutrino (a new experimental light client). LND is written in the Go programming language. The project is open source and developed collaboratively on GitHub:

https://github.com/LightningNetwork/lnd

In the next few sections we will build a docker container to run LND, build LND from source code, and learn how to configure and run LND.

==== Building LND as a docker container

If you have followed the previous examples in this chapter, you should be quite familiar with the basic docker commands by now. In this section we will repeat them to build the LND container. The container is located in +code/docker/lnd+. We issue commands in a terminal to change the working directory to +code/docker+ and perform the +docker build+ command:

----
$ cd code/docker
$ docker build -t lnbook/lnd lnd
Sending build context to Docker daemon  9.728kB
Step 1/29 : FROM golang:1.13 as lnd-base
 ---> e9bdcb0f0af9
Step 2/29 : ENV GOPATH /go

[...]

Step 29/29 : CMD ["/usr/local/bin/logtail.sh"]
 ---> Using cache
 ---> 397ce833ce14
Successfully built 397ce833ce14
Successfully tagged lnbook/lnd:latest
----

Our container is now built and ready to run. As with the c-lightning container we built previously, the LND container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect LND to it via a docker network. We have already set up a docker network called +lnbook+ previously and will be using that again here.

[TIP]
====
Normally, each node operator runs their own Lightning node and their own Bitcoin node on their own server. For us, a single bitcoind container can serve many Lightning nodes. On our simulated network we can run several Lightning nodes, all connecting to a single Bitcoin node in regtest mode.
====

==== Running the bitcoind and LND containers

As before, we start the bitcoind container in one terminal and LND in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start bitcoind in the +lnbook+ network we use +docker run+ like this:

----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----

Next, we start the LND container we just built. As done before, we need to attach it to the +lnbook+ network and give it a name:

[source,bash]
----
$ docker run -it --network lnbook --name lnd lnbook/lnd
Waiting for bitcoind to start...
Waiting for bitcoind to mine blocks...
Starting lnd...
Startup complete
Funding lnd wallet
{"result":"795a8f4fce17bbab35a779e92596ba0a4a1a99aec493fa468a349c73cb055e99","error":null,"id":"lnd-run-container"}

[...]

2020-06-23 13:42:51.841 [INF] LTND: Active chain: Bitcoin (network=regtest)
----

The LND container starts up and connects to the bitcoind container over the docker network. First, our LND node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet. Finally, as part of the container startup, a script will send an RPC command to the bitcoind node thereby creating a transaction that funds the LND wallet with 10 test BTC.

As we demonstrated previously, we can issue commands to our container in another terminal in order to extract information, open channels etc. The command that allows us to issue command-line instructions to the +lnd+ daemon is called +lncli+. Let's get the node info using the +docker exec+ command in another terminal window:

[source,bash]
----
$ docker exec lnd lncli -n regtest getinfo
{
    "version": "0.10.99-beta commit=clock/v1.0.0-85-gacc698a6995b35976950282b29c9685c993a0364",
    "commit_hash": "acc698a6995b35976950282b29c9685c993a0364",
    "identity_pubkey": "03e436739ec70f3c3630a62cfe3f4b6fd60ccf1f0b69a0036f73033c1ac309426e",
    "alias": "03e436739ec70f3c3630",
    "color": "#3399ff",
    "num_pending_channels": 0,
    "num_active_channels": 0,
    "num_inactive_channels": 0,
[...]
}
----

We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. If you are still running the c-lightning container, then there are now two nodes running. They're not yet connected to each other, but we will be connecting them to each other soon.

If desired, you can run any combination of LND and c-lightning nodes on the same Lightning network. For example, to run a second LND node you would issue the +docker run+ command with a different container name like so:

----
$ docker run -it --network lnbook --name lnd2 lnbook/lnd
----

In the command above, we start another LND container, naming it +lnd2+. The names are entirely up to you, as long as they are unique. If you don't provide a name, docker will construct a unique name by randomly combining two English words such as "naughty_einstein". This was the actual name docker chose for us when we wrote this paragraph. How funny!

In the next section we will look at how to download and compile LND directly from the source code. This is an optional and advanced step that will teach you how to use the Go language build tools and allow you to make modifications to LND source code. With this knowledge, you can write some code or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.

==== Installing LND from source code

In this section we will build LND from scratch. LND is written in the Go programming language. If you want to find out more about Go, search for +golang+ instead of +go+ to avoid irrelevant results. Because it is written in Go and not C or C++, it uses a different "build" framework than the GNU autotools/make framework we saw used in c-lightning previously. Don't fret though, it is quite easy to install and use the golang tools and we will show each step here. Go is a fantastic language for collaborative software development as it produces very consistent, precise, and easy-to-read code regardless of the number of authors. Go is focused and "minimalist" in a way that encourages consistency across versions of the language. As a compiled language, it is also quite efficient. Let's dive in.

We will follow the installation instructions found in the LND project documentation:

https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md

First, we will install the +golang+ package and associated libraries. We strictly require Go version 1.13 or later. The official Go language packages are distributed as binaries from https://golang.org/dl. For convenience they are also packaged as Debian packages available through the +apt+ command. You can follow the instructions on https://golang.org/dl or use the +apt+ commands below on a Debian/Ubuntu Linux system as described on https://github.com/golang/go/wiki/Ubuntu:

----
$ sudo apt install golang-go
----

Check that you have the correct version installed and ready to use by running:

----
$ go version
go version go1.13.4 linux/amd64
----
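
Since LND strictly requires Go 1.13 or later, you may want to check the installed version in a script. The sketch below parses the output of +go version+ with plain bash; the +version_ok+ function name is our own, not part of any toolchain:

```shell
# Sketch: verify the installed Go toolchain is at least 1.13.
# Parses output like "go version go1.13.4 linux/amd64".
version_ok() {
  local ver major minor
  ver=$(echo "$1" | awk '{print $3}')    # third field, e.g. "go1.13.4"
  ver=${ver#go}                          # strip the "go" prefix -> "1.13.4"
  major=${ver%%.*}                       # "1"
  minor=${ver#*.}; minor=${minor%%.*}    # "13"
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 13 ]; }
}

version_ok "$(go version)" && echo "Go is new enough for LND"
```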

We have 1.13.4, so we're ready to... Go! Next we need to tell any programs where to find the Go code. This is accomplished by setting the environment variable +GOPATH+. Usually the Go code is located in a directory named +gocode+ directly in the user's home directory. With the following two commands we consistently set the +GOPATH+ and make sure your shell adds it to your executable +PATH+. Note that the user's home directory is referred to as +~+ in the shell.

----
$ export GOPATH=~/gocode
$ export PATH=$PATH:$GOPATH/bin
----

To avoid having to set these environment variables every time you open a shell, you can add those two lines to the end of your bash shell configuration file +.bashrc+ in your home directory, using the editor of your choice.

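The step above can also be automated. The sketch below appends the two lines to +.bashrc+ only if they are not already present, so it is safe to run more than once:

```shell
# Append the Go environment variables to ~/.bashrc, idempotently:
# grep -qxF looks for the exact line and succeeds silently if found.
RC="$HOME/.bashrc"
if ! grep -qxF 'export GOPATH=~/gocode' "$RC" 2>/dev/null; then
  echo 'export GOPATH=~/gocode'        >> "$RC"
  echo 'export PATH=$PATH:$GOPATH/bin' >> "$RC"
fi
```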
==== Copying the LND source code

As with many open source projects nowadays, the source code for LND is on GitHub (www.github.com). The +go get+ command can fetch it directly using the git protocol:

----
$ go get -d github.com/lightningnetwork/lnd
----

Once +go get+ finishes, you will have a sub-directory under +GOPATH+ that contains the LND source code.

==== Compiling the LND source code

LND uses the +make+ build system. To build the project, we change directory to LND's source code and then use +make+ like this:

----
$ cd $GOPATH/src/github.com/lightningnetwork/lnd
$ make && make install
----

After several minutes you will have two new commands, +lnd+ and +lncli+, installed. Try them out and check their version to ensure they are installed:

[source,bash]
----
$ lnd --version
lnd version 0.10.99-beta commit=clock/v1.0.0-106-gc1ef5bb908606343d2636c8cd345169e064bdc91
$ lncli --version
lncli version 0.10.99-beta commit=clock/v1.0.0-106-gc1ef5bb908606343d2636c8cd345169e064bdc91
----

You will likely see a different version from that shown above, as the software continues to evolve long after this book is published. However, no matter what version you see, the fact that the commands execute and show you version information means that you have succeeded in building the LND software.

=== The Eclair Lightning node project

Eclair (French for Lightning) is a Scala implementation of the Lightning Network made by ACINQ. Eclair is also one of the most popular and pioneering mobile Lightning wallets, which we used to demonstrate a Lightning payment in the second chapter. In this section we examine the Eclair server project, which runs a Lightning node. Eclair is an open source project and can be found on GitHub:

https://github.com/ACINQ/eclair

In the next few sections we will build a docker container to run Eclair, as we did previously with c-lightning and LND. We will also build Eclair directly from the source code.

==== Building Eclair as a Docker container

By now, you are almost an expert in the basic operations of docker! In this section we will repeat many of the previously seen commands to build the Eclair container. The container is located in +code/docker/eclair+. We start in a terminal by switching the working directory to +code/docker+ and issuing the +docker build+ command:

[source,bash]
----
$ cd code/docker
$ docker build -t lnbook/eclair eclair
Sending build context to Docker daemon  9.216kB
Step 1/22 : FROM ubuntu:20.04 AS eclair-base
 ---> c3c304cb4f22
Step 2/22 : RUN apt update && apt install -yqq curl gosu jq bash-completion
 ---> Using cache
 ---> 3f020e1a2218
Step 3/22 : RUN apt update && apt install -yqq openjdk-11-jdk unzip
 ---> Using cache
 ---> b068481603f0

[...]

Step 22/22 : CMD ["/usr/local/bin/logtail.sh"]
 ---> Using cache
 ---> 5307f28ff1a0
Successfully built 5307f28ff1a0
Successfully tagged lnbook/eclair:latest
----

Our container is now built and ready to run. The Eclair container also depends on a running instance of Bitcoin Core. As before, we need to start the bitcoind container in another terminal and connect Eclair to it via a docker network. We have already set up a docker network called +lnbook+ and will be reusing it here.

One notable difference between Eclair and LND or c-lightning is that Eclair doesn't contain a separate bitcoin wallet but instead relies directly on the bitcoin wallet in Bitcoin Core. Recall that using LND we "funded" its bitcoin wallet by executing a transaction to transfer bitcoin from Bitcoin Core's wallet to LND's bitcoin wallet. This step is not necessary when using Eclair. When running Eclair, the Bitcoin Core wallet is used directly as the source of funds to open channels. As a result, unlike the LND or c-lightning containers, the Eclair container does not contain a script to transfer bitcoin into its wallet on startup.

==== Running the bitcoind and Eclair containers

As before, we start the bitcoind container in one terminal and the Eclair container in another. If you already have the bitcoind container running, you do not need to restart it. Just leave it running and skip the next step. To start +bitcoind+ in the +lnbook+ network, we use +docker run+ like this:

----
$ docker run -it --network lnbook --name bitcoind lnbook/bitcoind
----

Next, we start the Eclair container we just built. We will need to attach it to the +lnbook+ network and give it a name, just as we did with the other containers:

----
$ docker run -it --network lnbook --name eclair lnbook/eclair
Waiting for bitcoind to start...
Waiting for bitcoind to mine blocks...
Starting eclair...
Eclair node started
/usr/local/bin/logtail.sh
INFO fr.acinq.eclair.Plugin$ - loading 0 plugins
INFO a.e.slf4j.Slf4jLogger - Slf4jLogger started
INFO fr.acinq.eclair.Setup - hello!
INFO fr.acinq.eclair.Setup - version=0.4 commit=69c538e
[...]
----

The Eclair container starts up and connects to the bitcoind container over the docker network. First, our Eclair node will wait for bitcoind to start and then it will wait until bitcoind has mined some bitcoin into its wallet.

As we demonstrated previously, we can issue commands to our container in another terminal in order to extract information, open channels etc. The command that allows us to issue command-line instructions to the +eclair+ daemon is called +eclair-cli+. The +eclair-cli+ command expects a password, which we have set to "eclair" in this container. We pass the password +eclair+ to the +eclair-cli+ command via the +p+ flag. Using the +docker exec+ command in another terminal window, we get the node info from Eclair:

[source,bash]
----
$ docker exec eclair eclair-cli -p eclair getinfo
{
  "version": "0.4-69c538e",
  "nodeId": "03ca28ed39b412626dd5efc514add8916282a1360556f8101ed3f06eea43d6561a",
  "alias": "eclair",
  "color": "#49daaa",
  "features": "0a8a",
  "chainHash": "06226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f",
  "blockHeight": 123,
  "publicAddresses": []
}
----

We now have another Lightning node running on the +lnbook+ network and communicating with bitcoind. You can run any number and any combination of Lightning nodes on the same Lightning network. Any number of Eclair, LND, and c-lightning nodes can coexist. For example, to run a second Eclair node you would issue the +docker run+ command with a different container name as follows:

----
$ docker run -it --network lnbook --name eclair2 lnbook/eclair
----

In the above command we start another Eclair container named +eclair2+.

In the next section we will also look at how to download and compile Eclair directly from the source code. This is an optional and advanced step that will teach you how to use the Scala and Java language build tools and allow you to make modifications to Eclair's source code. With this knowledge, you can write some code or fix some bugs. If you are not planning on diving into the source code or programming of a Lightning node, you can skip the next section entirely. The docker container we just built is sufficient for most of the examples in the book.

==== Installing Eclair from source code

In this section we will build Eclair from scratch. Eclair is written in the Scala programming language, which is compiled using the Java compiler. To run Eclair, we first need to install Java and its build tools. We will be following the instructions found in the BUILD.md document of the Eclair project:

https://github.com/ACINQ/eclair/blob/master/BUILD.md

The required Java compiler is part of OpenJDK 11. We will also need a build framework called Maven, version 3.6.0 or above.

On a Debian/Ubuntu Linux system we can use the +apt+ command to install both OpenJDK 11 and Maven as shown below:

----
$ sudo apt install openjdk-11-jdk maven
----

Verify that you have the correct version installed by running:

[source,bash]
----
$ javac -version
javac 11.0.7
$ mvn -v
Apache Maven 3.6.1
Maven home: /usr/share/maven
Java version: 11.0.7, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
----

We have OpenJDK 11.0.7 and Maven 3.6.1, so we're ready.

==== Copying the Eclair source code

The source code for Eclair is on GitHub. The +git clone+ command can create a local copy for us. Let's change to our home directory and run it there:

----
$ cd ~
$ git clone https://github.com/ACINQ/eclair.git
----

Once +git clone+ finishes, you will have a sub-directory +eclair+ containing the source code for the Eclair server.

==== Compiling the Eclair source code

Eclair uses the Maven build system. To build the project, we change the working directory to Eclair's source code and then use +mvn package+ like this:

----
$ cd eclair
$ mvn package
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] eclair_2.13                                                        [pom]
[INFO] eclair-core_2.13                                                   [jar]
[INFO] eclair-node                                                        [jar]
[INFO] eclair-node-gui                                                    [jar]
[INFO]
[INFO] --------------------< fr.acinq.eclair:eclair_2.13 >---------------------
[INFO] Building eclair_2.13 0.4.3-SNAPSHOT                                [1/4]
[INFO] --------------------------------[ pom ]---------------------------------

[...]

[INFO] eclair_2.13 ........................................ SUCCESS [  3.032 s]
[INFO] eclair-core_2.13 ................................... SUCCESS [  7.935 s]
[INFO] eclair-node ........................................ SUCCESS [ 35.127 s]
[INFO] eclair-node-gui .................................... SUCCESS [ 20.535 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:06 min
[INFO] Finished at: 2020-12-12T09:43:21-04:00
[INFO] ------------------------------------------------------------------------
----

The "2.13" in the module names above is the Scala version, and "0.4.3-SNAPSHOT" shows we are building a development commit around version 0.4.3; this is expected.

After several minutes the build of the Eclair package should complete. Note that the "package" action also runs tests, some of which connect to the internet and could therefore fail. If you want to skip the tests, add +-DskipTests+ to the command.

Now, unzip and run the built package by following the instructions found here:

https://github.com/ACINQ/eclair#installing-eclair

Congratulations! You have built Eclair from source and you are ready to code, test, fix bugs, and contribute to this project!

=== Building a complete network of diverse Lightning Nodes

Our final example, presented in this section, will bring together all the various containers we have built to form a Lightning network made of diverse (LND, c-lightning, Eclair) node implementations. We will compose the network by connecting the nodes together and opening channels from one node to another. As the final step, we will route a payment across these channels.

In this example, we will replicate the Lightning network example from <<routing_on_a_network_of_payment_channels>>. Specifically, we will create four Lightning nodes named Alice, Bob, Chan, and Dina. We will connect Alice to Bob, Bob to Chan, and Chan to Dina. Finally, we will have Dina create an invoice and have Alice pay that invoice. Since Alice and Dina are not directly connected, the payment will be routed as an HTLC across all the payment channels.

==== Using docker-compose to orchestrate docker containers

To make this example work, we will be using a _container orchestration_ tool that is available as a command called +docker-compose+. This command allows us to specify an application composed of several containers and run the application by launching all the cooperating containers together.

First, let's install +docker-compose+. The instructions depend on your operating system and can be found here:

https://docs.docker.com/compose/install/

Once you have completed installation, you can verify your installation by running docker-compose like this:

----
$ docker-compose version
docker-compose version 1.21.0, build unknown
[...]
----

The most common +docker-compose+ commands we will use are +up+ and +down+, e.g. +docker-compose up+.

==== Docker-compose configuration

The configuration file for +docker-compose+ is found in the +code/docker+ directory and is named +docker-compose.yml+. It contains a specification for a network and each of the four containers. The top looks like this:

----
version: "3.3"
networks:
  lnnet:

services:
  bitcoind:
    container_name: bitcoind
    build:
      context: bitcoind
    image: lnbook/bitcoind:latest
    networks:
      - lnnet
    expose:
      - "18443"
      - "12005"
      - "12006"

  Alice:
    container_name: Alice
----

The fragment above defines a network called +lnnet+ and a container called +bitcoind+ which will attach to the +lnnet+ network. The container is the same one we built at the beginning of this chapter. We expose three of the container's ports, allowing us to send commands to it and monitor blocks and transactions. Next, the configuration specifies an LND container called "Alice". Further down you will also see specifications for containers called "Bob" (c-lightning), "Chan" (Eclair), and "Dina" (LND again).
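
For illustration, the entry for "Bob" might look roughly like the following. This is a hypothetical sketch modeled on the +bitcoind+ entry above, not a verbatim copy of the book's +docker-compose.yml+; check the actual file in +code/docker+ for the exact contents:

```yaml
  Bob:
    container_name: Bob
    build:
      context: c-lightning   # assumed directory name for the c-lightning image
    image: lnbook/c-lightning:latest
    networks:
      - lnnet
```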

Since all these diverse implementations follow the Basis of Lightning Technologies (BOLT) specification and have been extensively tested for interoperability, they have no difficulty working together to build a Lightning network.

==== Starting the example Lightning network

Before we get started, we should make sure we're not already running any of the containers. If a new container shares the same name as one that is already running, then it will fail to launch. Use +docker ps+, +docker stop+, and +docker rm+ as necessary to stop and remove any currently running containers!

[TIP]
====
Because we use the same names for these orchestrated docker containers, we might need to "clean up" to avoid any name conflicts.
====

To start the example, we switch to the directory that contains the +docker-compose.yml+ configuration file and we issue the command +docker-compose up+:

----
$ cd code/docker
$ docker-compose up
Creating network "docker_lnnet" with the default driver
Creating Chan ... done
Creating Bob ... done
Creating Dina ... done
Creating Alice ... done
Creating bitcoind ... done
Attaching to Chan, Bob, Dina, Alice, bitcoind
Bob | Waiting for bitcoind to start...
Chan | Waiting for bitcoind to start...
Alice | Waiting for bitcoind to start...
Dina | Waiting for bitcoind to start...
bitcoind | Starting bitcoind...

[...]
----

Following the start up, you will see a whole stream of log files as each of the nodes starts up and reports its progress. It may look quite jumbled on your screen, but each output line is prefixed by the container name, as seen above. If you want to watch the logs from only one container, you can do so in another terminal window by using the +docker-compose logs+ command with the +f+ (_follow_) flag and the specific container name:

----
$ docker-compose logs -f Alice
----

==== Opening channels and routing a payment

Our Lightning network should now be running. As we saw in the previous sections of this chapter, we can issue commands to a running docker container with the +docker exec+ command. Regardless of whether we started the container with +docker run+ or started a bunch of them with +docker-compose up+, we can still access containers individually using the docker commands.

To make things easier, we have a little helper script that sets up the network, issues the invoice, and makes the payment. The script is called +setup-channels.sh+ and is a Bash shell script. Keep in mind that this script is not very sophisticated! It "blindly" throws commands at the various nodes and doesn't do any error checking. If the network is running correctly and the nodes are funded, then it all works nicely. However, you have to wait a bit for everything to boot up and for the network to mine a few blocks and settle down. This usually takes 1 to 3 minutes. Once you see the block height at 102 or above on each of the nodes, then you are ready. If the script fails, you can stop everything (+docker-compose down+) and try again from the beginning. Or you can manually issue the commands found in the Bash script one by one and look at the results.

[TIP]
====
Before running the +setup-channels.sh+ script, note the following: Wait a minute or two after starting the network with +docker-compose+ to assure that all the services are running and all the wallets are funded. To keep things simple, the script doesn't check whether the containers are "ready". Be patient!
====
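
The "wait until block height 102" check described above can also be scripted as a small polling loop. In the sketch below, +wait_for_height+ and the probe command are our own hypothetical names; on the real network, the probe would be something like +docker-compose exec -T Alice lncli -n regtest getinfo | jq .block_height+:

```shell
# Poll a height-probe command until the chain reaches a target height.
# $1 = target height, $2 = a command that prints the current height.
wait_for_height() {
  local target=$1 probe=$2 height
  while height=$($probe); [ "$height" -lt "$target" ]; do
    echo "height $height < $target, waiting..."
    sleep 10
  done
  echo "reached height $height"
}
```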

Let's run the script to see its effect, and then we will look at how it works internally. We use +bash+ to run it as a command:

[source,bash]
----
$ cd code/docker
$ bash setup-channels.sh
Getting node IDs
Alice: 02c93da7a0a341d28e6d7742721a7d182f878e0c524e3666d80a58e1406d6d9391
Bob: 0343b8f8d27a02d6fe688e3596b2d0834c576672e8750106540617b6d5755c812b
Chan: 03e17cbc7b46d553bade8687310ee0726e40dfa2c629b8b85ca5d888257757edc1
Dina: 038c9dd0f0153cba3089616836936b2dad9ea7f97ef839f5fbca3a808d232db77b

Setting up channels...
Alice to Bob

Bob to Chan

Chan to Dina

Get 10k sats invoice from Dina

Dina invoice lnbcrt100u1p00w5sypp5fw2gk98v6s4s2ldfwxa6jay0yl3f90j82kv6xx97jfwpa3s964vqdqqcqzpgsp5jpasdchlcx85hzfp9v0zc7zqs9sa3vyasj3nm0t4rsufrl7xge6s9qy9qsqpdd5d640agrhqe907ueq8n8f5h2p42vpheuzen58g5fwz94jvvnrwsgzd89v70utn4d7k6uh2kvp866zjgl6g85cxj6wtvdn89hllvgpflrnex

Wait for channel establishment - 60 seconds for 6 blocks
----

As you can see from the output, the script first gets the node IDs (public keys) for each of the four nodes. Then it connects the nodes and sets up a 1,000,000 satoshi channel from each node to the next in the network.

Looking inside the script, we see the part that gets all the node IDs and stores them in temporary variables so that they can be used in subsequent commands. It looks like this:

[source,bash]
----
alice_address=$(docker-compose exec -T Alice bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
bob_address=$(docker-compose exec -T Bob bash -c "lightning-cli getinfo | jq .id")
chan_address=$(docker-compose exec -T Chan bash -c "eclair-cli -s -j -p eclair getinfo | jq .nodeId")
dina_address=$(docker-compose exec -T Dina bash -c "lncli -n regtest getinfo | jq .identity_pubkey")
----

If you have followed the first part of the chapter, you will recognize these commands and be able to "decipher" their meaning. It looks quite complex, but we will walk through it step-by-step and you'll quickly get the hang of it.

The first command sets up a variable called +alice_address+ that is the output of a +docker-compose exec+ command. The +T+ flag tells docker-compose not to open an interactive terminal. An interactive terminal may mess up the output with things like color-coding of results. The +exec+ command is directed to the +Alice+ container and runs the +lncli+ utility, since +Alice+ is an LND node. The +lncli+ command must be told that it is operating on the +regtest+ network and will then issue the +getinfo+ command to LND. The output from +getinfo+ is a JSON-encoded object, which we can parse by piping the output to the +jq+ command. The +jq+ command selects the +identity_pubkey+ field from the JSON object. The contents of the +identity_pubkey+ field are then output and stored in +alice_address+.

The following three lines do the same for each of the other nodes. Because they are different node implementations (c-lightning, Eclair), their command-line interface is slightly different, but the general principle is the same: Use the command utility to ask the node for its public key (node ID) information and parse it with +jq+, storing it in a variable for further use later.

Next, we tell each node to establish a network connection to the next node and open a channel:

[source,bash]
----
docker-compose exec -T Alice lncli -n regtest connect ${bob_address//\"}@Bob
docker-compose exec -T Alice lncli -n regtest openchannel ${bob_address//\"} 1000000
----

Both of the commands are directed to the +Alice+ container, since the channel will be opened _from_ +Alice+ _to_ +Bob+ and +Alice+ will initiate the connection.

As you can see, in the first command we tell +Alice+ to connect to the node +Bob+. Bob's node ID is stored in +${bob_address}+ and his IP address can be resolved from the container name +Bob+, hence +@Bob+ is used as the network identifier/address. We do not need to add the port number (9735) because we are using the default Lightning port.

Now that +Alice+ is connected, we open a 1,000,000 satoshi channel to +Bob+ with the +openchannel+ command. Again, we refer to +Bob+'s node by its node ID, i.e. its public key.
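
The +${bob_address//\"}+ syntax is bash pattern substitution: it replaces every occurrence of the double-quote character with nothing, undoing the quoting that +jq+ left in place. A minimal stand-alone illustration (the node ID here is a placeholder):

[source,bash]
----
# The node ID captured via jq still carries literal double quotes;
# ${var//\"} deletes all of them before building the address string.
bob_address='"03ddeeff"'
echo "${bob_address//\"}@Bob"
----

This prints +03ddeeff@Bob+, the +nodeid@host+ form that the +connect+ command expects.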

We do the same with the other nodes, setting up connections and channels. Each node type has a slightly different syntax for these commands, but the overall principle is the same:

To Bob's node (c-lightning) we send these commands:

[source,bash]
----
docker-compose exec -T Bob lightning-cli connect ${chan_address//\"}@Chan
docker-compose exec -T Bob lightning-cli fundchannel ${chan_address//\"} 1000000
----

To Chan's node (Eclair) we send:

[source,bash]
----
docker-compose exec -T Chan eclair-cli -p eclair connect --uri=${dina_address//\"}@Dina
docker-compose exec -T Chan eclair-cli -p eclair open --nodeId=${dina_address//\"} --fundingSatoshis=1000000
----

At this point we create a new invoice for 10,000 satoshis on Dina's node:

[source,bash]
----
dina_invoice=$(docker-compose exec -T Dina lncli -n regtest addinvoice 10000 | jq .payment_request)
----

The +addinvoice+ command creates an invoice for the specified amount in satoshis and produces a JSON object with the invoice details. From that JSON object we only need the actual bech32-encoded payment request, which we extract with +jq+.

Next, we have to wait. We just created a number of channels, so our nodes have broadcast several funding transactions. The channels can't be used until the funding transactions are mined and have received six confirmations. Since our Bitcoin +regtest+ blockchain is set to mine a block every ten seconds, we have to wait 60 seconds for all the channels to be ready to use.
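
The 60-second figure is just arithmetic: six confirmations at one block every ten seconds. In a script, the wait could be expressed like this (a sketch; the variable names are illustrative):

[source,bash]
----
# Six confirmations at one regtest block every ten seconds.
confirmations=6
block_interval=10
wait_seconds=$((confirmations * block_interval))
echo "Waiting ${wait_seconds} seconds for the channels to confirm..."
# In a real script this would be followed by: sleep "$wait_seconds"
----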

The final command is the actual invoice payment. We connect to Alice's node and present Dina's invoice for payment:

[source,bash]
----
docker-compose exec -T Alice lncli -n regtest payinvoice --json --inflight_updates -f ${dina_invoice//\"}
----

We ask Alice's node to pay the invoice, but also ask for +inflight_updates+ in +json+ format. That will give us detailed output about the invoice, the route, the HTLCs, and the final payment result. We can study this additional output and learn from it!

Since Alice's node doesn't have a direct channel to Dina, her node has to find a route. There is only one viable route here (Alice->Bob->Chan->Dina), which Alice will be able to discover now that all the channels are active and have been advertised to all the nodes by the Lightning gossip protocol. Alice's node will construct the route and create an onion packet to establish HTLCs across the channels. All of this happens in a fraction of a second and Alice's node will report the result of the payment attempt. If all goes well, you will see the last line of the JSON output showing:

----
"failure_reason": "FAILURE_REASON_NONE"
----
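
If you want to check the result programmatically rather than by eye, you can extract the +failure_reason+ field with +jq+. A sketch, assuming the +payinvoice+ JSON output was captured in a variable (the sample object below is illustrative; a real payment object has many more fields):

[source,bash]
----
# Illustrative sample of the final payment object.
payment_json='{"status":"SUCCEEDED","failure_reason":"FAILURE_REASON_NONE"}'

reason=$(echo "$payment_json" | jq -r .failure_reason)
if [ "$reason" = "FAILURE_REASON_NONE" ]; then
    echo "payment succeeded"
else
    echo "payment failed: $reason"
fi
----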

This is arguably a strange message, but the fact that there was no failure reason, in a roundabout way, implies that the payment was a success!

Scrolling up from that unusual message you will see all the details of the payment. There is a lot to review, but as you gain understanding of the underlying technology, more and more of that information will become clear. You are invited to revisit this example later.

Of course, you can do a lot more with this test network than a three-channel, four-node payment. Here are some ideas for your experiments:

* Create a more complex network by launching many more nodes of different types. Edit the +docker-compose.yml+ file and copy sections, renaming the containers as needed.

* Connect the nodes in more complex topologies: circular routes, hub-and-spoke, or full mesh.

* Run lots of payments to exhaust channel capacity. Then run payments in the opposite direction to rebalance the channels. See how the routing algorithm adapts.

* Change the channel fees to see how the routing algorithm negotiates multiple routes and what optimizations it applies. Is a cheap, long route better than an expensive, short route?

* Run a circular payment from a node back to itself in order to rebalance its own channels. See how that affects all the other channels and nodes.

* Generate hundreds or thousands of small invoices in a loop and then pay them as fast as possible in another loop. Measure how many transactions per second you can squeeze out of this test network.
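
For the fee experiment, a back-of-the-envelope calculation shows how route cost adds up. The numbers below are assumptions for illustration only: "cheap" channels charging a 1,000 msat base fee plus 1 ppm per hop, versus a single "pricey" hop charging 5,000 msat plus 100 ppm:

[source,bash]
----
# Back-of-the-envelope routing fee comparison (all values in millisatoshis).
amount_msat=10000000                                 # a 10,000 satoshi payment

fee_per_cheap_hop=$((1000 + amount_msat / 1000000))  # base 1000 msat + 1 ppm
long_cheap_route=$((3 * fee_per_cheap_hop))          # three cheap hops

short_pricey_route=$((5000 + amount_msat * 100 / 1000000))  # one hop, 100 ppm

echo "long cheap route:   ${long_cheap_route} msat"
echo "short pricey route: ${short_pricey_route} msat"
----

With these particular settings the long route is cheaper; experimenting with different base fees and fee rates on your test channels shows where that stops being true.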

=== Conclusion

In this chapter we looked at various projects that implement the BOLT specifications. We built containers to run a sample Lightning network and learned how to build each project from source code. You are now ready to explore further and dig deeper.