[[wire_protocol]]
== Wire Protocol: Framing & Extensibility

In this chapter, we dive into the wire protocol of the Lightning Network and
cover the various extensibility levers that have been built into the protocol.
By the end of this chapter, an aspiring reader should be able to write their
very own wire protocol parser for the Lightning Network. In addition to being
able to write a custom wire protocol parser, a reader of this chapter will
gain a deep understanding of the various upgrade mechanisms that have been
built into the protocol.

=== Messaging Layer in the Lightning protocol suite

The messaging layer, which is detailed in this chapter, consists of _Message
Framing and Format_, _Type Length Value (TLV)_ encoding, and _Feature Bits_.
These components are highlighted by a double outline in the protocol suite,
shown in <<LN_protocol_wire_message_highlight>>:

[[LN_protocol_wire_message_highlight]]
.The Lightning Network Protocol Suite
image::images/LN_protocol_wire_message_highlight.png["The Lightning Network Protocol Suite"]

=== Wire Framing

We begin by describing the high-level structure of the wire _framing_ within
the protocol. By framing, we mean the way the bytes are packed on the wire to
_encode_ a particular protocol message. Without knowledge of the framing
system used in the protocol, a string of bytes on the wire would resemble a
series of random bytes, as no structure has been imposed. By applying the
proper framing to decode these bytes on the wire, we'll be able to extract
structure and finally parse this structure into protocol messages within our
higher-level language.

It's important to note that because the Lightning Network is an _end-to-end
encrypted_ protocol, the wire framing is itself encapsulated within an
_encrypted_ message transport layer. As we see in
<<encrypted_message_transport>>, the Lightning Network uses a custom variant
of the Noise protocol to handle transport encryption. Within this chapter,
whenever we give an example of wire framing, we assume the encryption layer
has already been stripped away (when decoding), or that we haven't yet
encrypted the set of bytes before we send them on the wire (when encoding).

==== High-Level Wire Framing

With that said, we're ready to describe the high-level schema used to encode
messages on the wire:

* Messages on the wire begin with a _2-byte_ type field, followed by a
  message payload.
* The message payload itself can be up to 65 KB in size.
* All integers are encoded in big-endian (network order).
* Any bytes that follow after a defined message can be safely ignored.

Yep, that's it. As the protocol relies on an _encapsulating_ transport
protocol encryption layer, we don't need an explicit length for each message
type. This is due to the fact that transport encryption works at the
_message_ level, so by the time we're ready to decode the next message, we
already know the total number of bytes of the message itself. Using 2 bytes
for the message type (encoded in big-endian) means that the protocol can
define up to `2^16 - 1`, or `65535`, distinct messages. Because we know all
messages _MUST_ be below 65 KB, parsing is simplified: we can use a _fixed_
size buffer and maintain strong bounds on the total amount of memory required
to parse an incoming wire message.

The final bullet point allows for a degree of _backwards_ compatibility, as
new nodes are able to provide information in the wire messages that older
nodes (which may not understand them) can safely ignore. As we see below,
this feature, combined with a very flexible wire message extensibility
format, also allows the protocol to achieve _forwards_ compatibility as well.

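To make the framing concrete, here is a minimal sketch in Python of how a
decrypted message frame could be encoded and decoded under the rules above.
The function names are our own illustration, not part of any particular
implementation:

```python
import struct

def encode_message(msg_type: int, payload: bytes) -> bytes:
    """Frame a wire message: 2-byte big-endian type, then the payload."""
    if not 0 <= msg_type <= 0xFFFF:
        raise ValueError("message type must fit in 2 bytes")
    if 2 + len(payload) > 65535:
        raise ValueError("message exceeds the 65 KB limit")
    return struct.pack(">H", msg_type) + payload

def decode_message(frame: bytes) -> tuple[int, bytes]:
    """Split a decrypted frame into (type, payload).

    The transport layer already told us the total frame length, so no
    explicit length field is needed here. Any trailing bytes a parser
    doesn't recognize are simply part of the payload and may be ignored.
    """
    if len(frame) < 2:
        raise ValueError("frame too short to contain a type field")
    (msg_type,) = struct.unpack(">H", frame[:2])
    return msg_type, frame[2:]
```

Because the type field is big-endian, a message of type 16 begins with the
bytes `0x00 0x10` on the wire.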
==== Type Encoding

With this high-level background provided, we now start at the most primitive
layer: parsing primitive types. In addition to encoding integers, the
Lightning Protocol also allows for the encoding of a vast array of types
including: variable-length byte slices, elliptic curve public keys, Bitcoin
addresses, and signatures. When we describe the _structure_ of wire messages
later in this chapter, we refer to the high-level type (the abstract type)
rather than the lower-level representation of said type. In this section, we
peel back this abstraction layer to ensure our future wire parser is able to
properly encode/decode any of the higher-level types.

In <<message_types>>, we map the name of a given message type to the
high-level routine used to encode/decode the type.

[[message_types]]
.High-level message types
[options="header"]
|===
| High-Level Type | Framing | Comment
| `node_alias` | A 32-byte fixed-length byte slice. | When decoding, reject if the contents are not a valid UTF-8 string.
| `channel_id` | A 32-byte fixed-length byte slice that maps an outpoint to a 32-byte value. | Given an outpoint, one can convert it to a `channel_id` by taking the txid of the outpoint and XOR'ing it with the output index (interpreted as the lower 2 bytes).
| `short_chan_id` | An unsigned 64-bit integer (`uint64`). | Composed of the block height (24 bits), transaction index (24 bits), and output index (16 bits) packed into 8 bytes.
| `milli_satoshi` | An unsigned 64-bit integer (`uint64`). | Represents 1/1000th of a satoshi.
| `satoshi` | An unsigned 64-bit integer (`uint64`). | The base unit of bitcoin.
| `pubkey` | A secp256k1 public key encoded in _compressed_ format. | Occupies a fixed 33-byte length on the wire.
| `sig` | An ECDSA signature on the secp256k1 elliptic curve. | Encoded as a _fixed_ 64-byte byte slice, packed as `R \|\| S`.
| `uint8` | An 8-bit integer. |
| `uint16` | A 16-bit integer. |
| `uint64` | A 64-bit integer. |
| `[]byte` | A variable-length byte slice. | Prefixed with a 16-bit integer denoting the length of the bytes.
| `color_rgb` | RGB color encoding. | Encoded as a series of 8-bit integers.
| `net_addr` | The encoding of a network address. | Encoded with a 1-byte prefix that denotes the type of address, followed by the address body.
|===

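As an illustration of two framings from the table, here is a sketch in Python
of how a `short_chan_id` is packed into a `uint64`, and how an outpoint can
be mapped to a `channel_id`. The helper names are hypothetical, and the exact
byte ordering of the txid is an assumption for illustration:

```python
def pack_short_chan_id(block_height: int, tx_index: int, output_index: int) -> int:
    """Pack block height (24 bits), tx index (24 bits), and output
    index (16 bits) into a single uint64."""
    assert block_height < (1 << 24)
    assert tx_index < (1 << 24)
    assert output_index < (1 << 16)
    return (block_height << 40) | (tx_index << 16) | output_index

def unpack_short_chan_id(scid: int) -> tuple[int, int, int]:
    """Recover (block_height, tx_index, output_index) from a uint64."""
    return (scid >> 40) & 0xFFFFFF, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF

def outpoint_to_channel_id(txid: bytes, output_index: int) -> bytes:
    """Derive a channel_id by XOR'ing the output index into the lower
    (final) 2 bytes of the 32-byte funding txid."""
    assert len(txid) == 32
    cid = bytearray(txid)
    cid[-2] ^= (output_index >> 8) & 0xFF
    cid[-1] ^= output_index & 0xFF
    return bytes(cid)
```
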
In the next section, we describe the structure of each of the wire messages,
including the prefix type of the message along with the contents of its
message body.

[[tlv_message_extensions]]
=== Type Length Value (TLV) Message Extensions

Earlier in this chapter we mentioned that messages can be up to 65 KB in
size, and that if, while parsing a message, extra bytes are left over, those
bytes are to be ignored. At first glance, this requirement may appear
somewhat arbitrary; however, it allows for decoupled, desynchronized
evolution of the Lightning Protocol itself. We discuss this more toward the
end of the chapter. But first, we turn our attention to exactly what those
"extra bytes" at the end of a message can be used for.

==== The Protocol Buffer Message Format

The Protocol Buffer (Protobuf) message serialization format started out as
an internal format used at Google, and has blossomed into one of the most
popular message serialization formats used by developers globally. The
Protobuf format describes how a message (usually some sort of data structure
related to an API) is encoded on the wire and decoded on the other end.
Protobuf compilers exist in dozens of languages, acting as a bridge that
allows any language to encode a Protobuf that can be decoded by a compliant
decoder in another language. Such cross-language data structure
compatibility allows for a wide range of innovation, because it's possible
to transmit structured and even typed data across language and abstraction
boundaries.

Protobufs are also known for their flexibility with respect to how they
handle changes in the underlying message structure. As long as the field
numbering schema is adhered to, it's possible for a newer writer of
Protobufs to include information within a Protobuf that may be unknown to
any older readers. When an old reader encounters the new serialized format,
if there are types/fields it doesn't understand, it simply _ignores_ them.
This allows old clients and new clients to co-exist, as all clients can
parse some portion of the newer message format.

==== Forwards & Backwards Compatibility

Protobufs are extremely popular amongst developers as they have built-in
support for both forwards and backwards compatibility. Most developers are
likely familiar with the concept of backwards compatibility. In simple
terms, the principle states that any changes to a message format or API
should be done in a manner that doesn't break support for older clients.
Within our Protobuf extensibility examples above, backwards compatibility is
achieved by ensuring that new additions to the proto format don't break the
known portions of older readers. Forwards compatibility, on the other hand,
is just as important for desynchronized updates, though it's less commonly
known. For a change to be forwards compatible, clients need simply ignore
any information they don't understand. The soft-fork mechanism of upgrading
the Bitcoin consensus system can be said to be both forwards and backwards
compatible: any clients that don't update can still use Bitcoin, and if they
encounter any transactions they don't understand, they simply ignore them,
as their funds aren't using those new features.

[[tlv]]
=== Type-Length-Value (TLV) Format

In order to be able to upgrade messages in both a forwards and backwards
compatible manner, in addition to feature bits (more on that later), the LN
utilizes a custom message serialization format plainly called
Type-Length-Value, or TLV for short. The format was inspired by the widely
used Protobuf format and borrows many of its concepts while significantly
simplifying the implementation, as well as the software that interacts with
message parsing. A curious reader might ask "why not just use Protobufs?" In
response, the Lightning developers would say that we're able to have the
best of the extensibility of Protobufs while also having the benefit of a
smaller implementation, and thus a smaller attack surface. As of version
v3.15.6, the Protobuf compiler weighs in at over 656,671 lines of code. In
comparison, LND's implementation of the TLV message format weighs in at only
2.3k lines of code (including tests).

With the necessary background presented, we're now ready to describe the TLV
format in detail. A TLV message extension is said to be a _stream_ of
individual TLV records. A single TLV record has three components: the type
of the record, the length of the record, and finally the opaque value of the
record:

* `type`: An integer representing the name of the record being encoded.
* `length`: The length of the record's value.
* `value`: The opaque value of the record.

Both the `type` and `length` are encoded using a variable-sized integer
inspired by the variable-sized integer (varint) used in Bitcoin's P2P
protocol, called `BigSize` for short.

==== BigSize Integer Encoding

In its fullest form, a `BigSize` integer can represent values up to 64 bits.
In contrast to Bitcoin's varint format, the `BigSize` format instead encodes
integers using big-endian byte ordering.

The `BigSize` varint has two components: the discriminant and the body. In
the context of the `BigSize` integer, the discriminant communicates to the
decoder the size of the variable-sized integer that follows. Remember that
the unique thing about variable-sized integers is that they allow a parser
to use fewer bytes to encode smaller integers than larger ones, saving
space. Encoding of a `BigSize` integer follows one of the four options
below:

1. If the value is less than `0xfd` (`253`): the discriminant isn't used,
and the encoding is simply the integer itself. This allows us to encode
very small integers with no additional overhead.

2. If the value is less than or equal to `0xffff` (`65535`): the
discriminant is encoded as `0xfd`, which indicates that the value that
follows is larger than or equal to `0xfd`, but smaller than or equal to
`0xffff`. The number is then encoded as a 16-bit big-endian integer.
Including the discriminant, we can encode a value that is greater than or
equal to 253, but less than or equal to 65,535, using 3 bytes.

3. If the value is less than or equal to `0xffffffff` (`4294967295`): the
discriminant is encoded as `0xfe`. The body is encoded as a 32-bit
big-endian integer. Including the discriminant, we can encode a value
that's less than or equal to `4,294,967,295` using 5 bytes.

4. Otherwise, we just encode the value as a full-size 64-bit big-endian
integer.

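The four options above can be sketched directly in Python. This is a minimal
illustration (the function names are our own), with a decoder that also
rejects non-minimal encodings, anticipating the canonical-encoding rules
discussed later in the chapter:

```python
import struct

def bigsize_encode(n: int) -> bytes:
    """Encode an integer using the smallest BigSize representation."""
    if not 0 <= n <= 0xFFFFFFFFFFFFFFFF:
        raise ValueError("BigSize values must fit in 64 bits")
    if n < 0xFD:
        return bytes([n])            # 1 byte, no discriminant
    if n <= 0xFFFF:
        return b"\xfd" + struct.pack(">H", n)   # 3 bytes total
    if n <= 0xFFFFFFFF:
        return b"\xfe" + struct.pack(">I", n)   # 5 bytes total
    return b"\xff" + struct.pack(">Q", n)       # 9 bytes total

def bigsize_decode(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed); reject non-minimal encodings."""
    d = buf[0]
    if d < 0xFD:
        return d, 1
    if d == 0xFD:
        (v,) = struct.unpack(">H", buf[1:3])
        if v < 0xFD:
            raise ValueError("non-canonical BigSize")
        return v, 3
    if d == 0xFE:
        (v,) = struct.unpack(">I", buf[1:5])
        if v <= 0xFFFF:
            raise ValueError("non-canonical BigSize")
        return v, 5
    (v,) = struct.unpack(">Q", buf[1:9])
    if v <= 0xFFFFFFFF:
        raise ValueError("non-canonical BigSize")
    return v, 9
```

Note how the boundary values (252 vs. 253, 65,535 vs. 65,536) fall on either
side of a discriminant change, which is exactly where an encoder bug would
produce a non-canonical stream.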
==== TLV encoding constraints

Within the context of a TLV message, record types below `2^16` are said to
be _reserved_ for future use. Types beyond this range are to be used for
"custom" message extensions used by higher-level application protocols.

The `value` of a record depends on the `type`. In other words, it can take
any form, as parsers will attempt to interpret it depending on the context
of the type itself.

==== TLV canonical encoding

One issue with the Protobuf format is that encodings of the same message may
output an entirely different set of bytes when encoded by two different
versions of the compiler. Such instances of a non-canonical encoding are not
acceptable within the context of Lightning, as many messages contain a
signature of the message digest. If it were possible for a message to be
encoded in two different ways, then it would be possible to inadvertently
break the authentication of a signature by re-encoding the message using a
slightly different set of bytes on the wire.

In order to ensure that all encoded messages are canonical, the following
constraints are defined when encoding:

* All records within a TLV stream MUST be encoded in order of strictly
  increasing type.
* All records MUST minimally encode the `type` and `length` fields. In
  other words, the smallest `BigSize` representation for an integer MUST be
  used at all times.
* Each `type` may only appear once within a given TLV stream.

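These constraints can be made concrete with a short sketch of a TLV stream
encoder/decoder in Python. The function names are illustrative, and the
compact `BigSize` helpers here assume canonical input (see the `BigSize`
section for the full boundary checks):

```python
import struct

def bigsize_encode(n: int) -> bytes:
    # Minimal BigSize encoding, as described in the BigSize section.
    if n < 0xFD:
        return bytes([n])
    if n <= 0xFFFF:
        return b"\xfd" + struct.pack(">H", n)
    if n <= 0xFFFFFFFF:
        return b"\xfe" + struct.pack(">I", n)
    return b"\xff" + struct.pack(">Q", n)

def bigsize_decode(buf: bytes, i: int) -> tuple[int, int]:
    # Return (value, new offset); assumes a canonical encoding.
    d = buf[i]
    if d < 0xFD:
        return d, i + 1
    size, fmt = {0xFD: (2, ">H"), 0xFE: (4, ">I"), 0xFF: (8, ">Q")}[d]
    (v,) = struct.unpack(fmt, buf[i + 1:i + 1 + size])
    return v, i + 1 + size

def encode_tlv_stream(records: dict[int, bytes]) -> bytes:
    """Encode records in strictly increasing type order (dict keys are
    unique by construction), with minimal BigSize type/length fields."""
    return b"".join(
        bigsize_encode(t) + bigsize_encode(len(records[t])) + records[t]
        for t in sorted(records)
    )

def decode_tlv_stream(buf: bytes) -> dict[int, bytes]:
    """Decode a TLV stream, enforcing strictly increasing record types."""
    records, last_type, i = {}, -1, 0
    while i < len(buf):
        t, i = bigsize_decode(buf, i)
        if t <= last_type:
            raise ValueError("types must be unique and strictly increasing")
        last_type = t
        length, i = bigsize_decode(buf, i)
        value = buf[i:i + length]
        if len(value) != length:
            raise ValueError("truncated TLV record")
        records[t] = value
        i += length
    return records
```

Sorting by type before encoding is what makes the output canonical: any two
encoders given the same set of records will emit byte-identical streams.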
In addition to these encoding constraints, a series of higher-level
interpretation requirements are also defined based on the _arity_ of a given
`type` integer. We dive further into these details toward the end of the
chapter, where we describe how the Lightning Protocol is upgraded in
practice and in theory.

[[feature_bits]]
=== Feature Bits & Protocol Extensibility

As the Lightning Network is a decentralized system, no single entity can
enforce a protocol change or modification upon all the users of the system.
This characteristic is also seen in other decentralized networks such as
Bitcoin. However, unlike in Bitcoin, overwhelming consensus *is not*
required to change a subset of the Lightning Network. Lightning is able to
evolve at will without a strong requirement of coordination because, unlike
Bitcoin, there is no global consensus required in the Lightning Network. Due
to this fact and the several upgrade mechanisms embedded in the Lightning
Network, only the participants that wish to use a new Lightning Network
feature need to upgrade, and then they are able to interact with each other.

In this section, we explore the various ways that developers and users are
able to design and deploy new features to the Lightning Network. The
designers of the original Lightning Network knew that there were many
possible future directions for the network and the underlying protocol. As a
result, they made sure to implement several extensibility mechanisms within
the system, which can be used to upgrade it partially or fully in a
decoupled, desynchronized, and decentralized manner.

==== Feature Bits as an Upgrade Discoverability Mechanism

An astute reader may have noticed the various locations that "feature bits"
are included within the Lightning Protocol. A feature bit is a bitfield that
can be used to advertise understanding of, or adherence to, a possible
network protocol update. Feature bits are commonly assigned in pairs,
meaning that each potential new feature/upgrade always defines two bits
within the bitfield. One bit signals that the advertised feature is
_optional_, meaning that the node knows about the feature and can use it,
but doesn't consider it required for normal operation. The other bit signals
that the feature is instead _required_, meaning that the node will not
continue operation if a prospective peer doesn't understand that feature.

Using these two bits (optional and required), we can construct a simple
compatibility matrix that nodes/users can consult in order to determine if a
peer is compatible with a desired feature:

.Feature Bit Compatibility Matrix
[options="header"]
|===
|Bit Type|Remote Optional|Remote Required|Remote Unknown
|Local Optional|✅|✅|✅
|Local Required|✅|✅|❌
|Local Unknown|✅|❌|❌
|===

From this simplified compatibility matrix, we can see that as long as the
other party knows about our feature bit, we can interact with them using the
protocol. If the other party doesn't know what bit we're referring to *and*
they require the feature, then we are incompatible with them. Within the
network, optional features are signaled using an _odd bit number_, while
required features are signaled using an _even bit number_. As an example, if
a peer signals that they know of a feature that uses bit +15+, then we know
that this is an optional feature, and we can interact with them or respond
to their messages even if we don't know about the feature. If they instead
signaled the feature using bit +16+, then we know this is a required
feature, and we can't interact with them unless our node also understands
that feature.

The Lightning developers have come up with an easy-to-remember phrase that
encodes this matrix: "it's OK to be odd". This simple rule allows for a rich
set of interactions within the protocol, as a simple bitmask operation
between two feature bit vectors allows peers to determine if certain
interactions are compatible with each other or not. In other words, feature
bits are used as an upgrade discoverability mechanism: they allow peers to
easily determine if they are compatible, based on the concepts of optional,
required, and unknown feature bits.

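The "it's OK to be odd" rule can be sketched as a simple compatibility
check. In this illustration (the names and the modeling are our own), a
peer's feature vector is a set of bit positions, and the pair for feature
_n_ occupies bits `2n` (required, even) and `2n+1` (optional, odd):

```python
def can_interact(our_known_features: set[int], their_bits: set[int]) -> bool:
    """Return True if we can interact with a peer advertising
    `their_bits`, given the set of feature pairs we understand."""
    for bit in their_bits:
        if bit % 2 == 1:
            continue  # odd bit: optional, always safe to ignore
        if bit // 2 not in our_known_features:
            return False  # even bit: peer requires a feature we don't know
    return True
```

For example, a peer advertising only an odd (optional) bit is compatible
with any node, while a peer advertising an even (required) bit is only
compatible with nodes that know the corresponding feature pair.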
Feature bits are found in the `node_announcement`, `channel_announcement`,
and `init` messages within the protocol. As a result, these three messages
can be used to signal the knowledge and/or understanding of in-flight
protocol updates within the network. The feature bits found in the
`node_announcement` message allow a peer to determine if their _connections_
are compatible or not. The feature bits within the `channel_announcement`
messages allow a peer to determine if a given payment type or HTLC can
transit through a given peer or not. The feature bits within the `init`
message allow peers to understand if they can maintain a connection, and
also which features are negotiated for the lifetime of a given connection.

==== TLV for forwards & backwards compatibility

As we learned earlier in the chapter, Type-Length-Value (TLV) records can be
used to extend messages in a forwards and backwards compatible manner. Over
time, these records have been used to extend existing messages without
breaking the protocol, by utilizing the "undefined" area within a message
beyond the set of known bytes.

As an example, the original Lightning Protocol didn't have a concept of the
largest HTLC that could traverse a channel, as dictated by a routing policy.
Later on, the `max_htlc` field was added to the `channel_update` message to
phase in this concept over time. Peers that received a `channel_update` that
set such a field, but didn't know the upgrade existed, were unaffected by
the change, though their HTLCs would be rejected if they were beyond the
limit. Newer peers, on the other hand, are able to parse, verify, and
utilize the new field.

Those familiar with the concept of soft-forks in Bitcoin may now see some
similarities between the two mechanisms. Unlike Bitcoin's consensus-level
soft-forks, upgrades to the Lightning Network don't require overwhelming
consensus in order to be adopted. Instead, at minimum, only two peers within
the network need to understand a new upgrade in order to start using it.
Commonly, these two peers may be the recipient and sender of a payment, or
the channel partners of a new payment channel.

==== A taxonomy of upgrade mechanisms

Rather than there being a single widely utilized upgrade mechanism within
the network (such as soft-forks for Bitcoin), there exist several possible
upgrade mechanisms within the Lightning Network. In this section, we
enumerate these upgrade mechanisms and provide a real-world example of each
one's past use.

===== Internal Network Upgrades

We start with the upgrade type that requires the most protocol-level
coordination: internal network upgrades. An internal network upgrade is one
that requires *every single node* within a prospective payment path to
understand the new feature. Such an upgrade is similar to an upgrade on the
internet that requires hardware-level changes within the core-relay portion
of the network. In the context of the LN, however, we deal with pure
software, so such upgrades are easier to deploy, yet they still require much
more coordination than any other upgrade mechanism in the network.

One example of such an upgrade within the network was the introduction of a
TLV encoding for the routing information encoded within the onion packets.
The prior format used a hard-coded, fixed-length message format to
communicate information such as the next hop. As this format was fixed, new
protocol-level upgrades weren't possible. The move to the more flexible TLV
format meant that, after this upgrade, any sort of feature that modified the
type of information communicated at each hop could be rolled out at will.

It's worth mentioning that the TLV onion upgrade was a sort of "soft"
internal network upgrade, in that if a payment wasn't using any new feature
beyond the new routing information encoding, then the payment could be
transmitted using a mixed set of nodes.

===== End-to-End Upgrades

In contrast to the internal network upgrade, in this section we describe the
_end-to-end_ network upgrade. This upgrade mechanism differs from the
internal network upgrade in that it only requires the "ends" of the payment,
the sender and recipient, to upgrade.

This type of upgrade allows for a wide array of unrestricted innovation
within the network. Because of the onion-encrypted nature of payments within
the network, those forwarding HTLCs within the center of the network may not
even know that new features are being utilized.

One example of an end-to-end upgrade within the network was the roll-out of
Multi-Part Payments (MPP). MPP is a protocol-level feature that enables a
single payment to be split into multiple parts or paths, to be assembled at
the recipient for settlement. The roll-out of MPP was coupled with a new
`node_announcement`-level feature bit that indicates that the recipient
knows how to handle partial payments. Assuming a sender and recipient know
about each other (possibly via a BOLT 11 invoice), they're able to use the
new feature without any further negotiation.

Another example of an end-to-end upgrade is the various types of
"spontaneous" payments deployed within the network. One early type of
spontaneous payment, called _keysend_, worked by simply placing the
pre-image of a payment within the encrypted onion. Upon receipt, the
destination would decrypt the pre-image, then use it to settle the payment.
As the entire packet is end-to-end encrypted, this payment type was safe,
since none of the intermediate nodes are able to fully unwrap the onion to
uncover the payment pre-image.

===== Channel Construction Level Updates

The final broad category of updates are those that happen at the channel
construction level, but which don't modify the structure of the HTLC used
widely within the network. When we say channel construction, we mean how the
channel is funded or created. As an example, the eltoo channel type can be
rolled out within the network using a new `node_announcement`-level feature
bit as well as a `channel_announcement`-level feature bit. Only the two
peers on either side of the channel need to understand and advertise these
new features. The channel can then be used to forward any payment type,
provided the channel supports it.

Another example is the "anchor outputs" channel format, which allows the
commitment fee to be bumped via Bitcoin's Child-Pays-For-Parent (CPFP) fee
management mechanism.

=== Conclusion

Lightning's wire protocol is incredibly flexible and allows for rapid
innovation and interoperability without strict consensus. It is one of the
reasons that the Lightning Network is experiencing much faster development
than Bitcoin, and is attractive to many developers who might otherwise find
Bitcoin's development style too conservative and slow.

[[conclusion_chapter]]
== Conclusion

In just a few years, the Lightning Network has gone from a whitepaper to a
rapidly growing global network. As Bitcoin's second layer, it has delivered
on the promise of fast, inexpensive, and private payments. Additionally, it
has started a tsunami of innovation, as it unleashes developers from the
constraints of lock-step consensus that exist in Bitcoin development.

Innovation in the Lightning Network is happening at several different
levels:

* At Bitcoin's core protocol level, providing use and demand for new
  Bitcoin Script opcodes, signing algorithms, and optimizations
* At the Lightning protocol level, with new features deployed rapidly
  network-wide
* At the payment channel level, with new channel constructs and enhancements
* As distinct opt-in features deployed end-to-end by independent
  implementations, which senders and recipients can use if they want
* With new and exciting Lightning Applications (LApps) built on top of the
  clients and protocols

Let's look at how these innovations are changing Lightning now and in the
near future.

=== Decentralized and asynchronous innovation

Lightning isn't bound by lock-step consensus, as is the case with Bitcoin.
That means that different Lightning clients can implement different features
and negotiate their interactions (see <<feature_bits>>). As a result,
innovation in the Lightning Network is occurring at a much faster rate than
in Bitcoin.

Not only is Lightning advancing rapidly, but it is also creating demand for
new features in the Bitcoin system. Many recent and planned innovations in
Bitcoin are both motivated and justified by their use in the Lightning
Network. In fact, the Lightning Network is often mentioned as an example
use-case for many of the new features.

==== Bitcoin protocol and Bitcoin Script innovation

The Bitcoin system is, by necessity, a conservative system that has to
preserve compatibility with consensus rules to avoid unplanned forks of the
blockchain or partitions of the P2P network. As a result, new features
require a lot of coordination and testing before they are implemented on
"mainnet", the live production system.

Here are some of the current or proposed innovations spurred by Lightning:

Neutrino:: A lightweight client protocol with improved privacy features
over the legacy SPV protocol. Neutrino is mostly used by Lightning clients
to access the Bitcoin blockchain.

Schnorr signatures:: Introduced as part of the _Taproot_ soft fork, Schnorr
signatures will enable flexible Point Time-Locked Contracts (PTLCs) for
channel construction in Lightning.

Taproot:: Also part of the November 2021 soft fork that introduces Schnorr
signatures, Taproot allows complex scripts to appear as single-payer,
single-payee payments, indistinguishable from the most common type of
payment on Bitcoin. This will allow Lightning channel cooperative (mutual)
closure transactions to appear indistinguishable from simple payments,
increasing privacy for LN users.

Input rebinding:: Also known by the names SIGHASH_NOINPUT or
SIGHASH_ANYPREVOUT, this planned upgrade to the Bitcoin Script language is
primarily motivated by advanced smart contracts such as the eltoo channel
protocol.

Covenants:: Currently in the early stages of research, covenants allow
transactions to create outputs that constrain future transactions which
spend them. This mechanism could increase security for Lightning channels
by making it possible to enforce address whitelisting in commitment
transactions.

==== Lightning protocol innovation

The Lightning P2P protocol is highly extensible and has undergone a lot of
change since its inception. The "it's OK to be odd" rule used for feature
bits (see <<feature_bits>>) ensures that nodes can negotiate the features
they support, enabling multiple independent upgrades to the protocol.

===== TLV extensibility

The Type-Length-Value mechanism (see <<tlv>>) for extending the messaging
protocol is extremely powerful and has already enabled the introduction of
several new capabilities in Lightning while maintaining both forward and
backward compatibility.

==== Payment channel construction
|
||||
|
||||
Payment channels are an abstraction that is operated by two channel partners. As long as those two are willing to run new code, they can implement a variety of channel mechanisms simultaneously. In fact, recent research suggests that channels could even be upgraded to a new mechanism dynamically, without closing the old channel and opening a new channel type.
|
||||
|
||||
Eltoo:: A proposed channel mechanism that uses input-rebinding to significantly simplify the operation of payment channels and remove the need for the penalty mechanism. It needs a new Bitcoin signature type before it can be implemented
|
||||
|
||||
==== Opt-in end-to-end features
|
||||
|
||||
Point Time-Locked Contracts (PTLCs):: A different approach to HTLCs, PTLCs can increase privacy, reduce information leaked to intermediary nodes and operate more efficiently than HTLC-based channels.
|
||||
|
||||
Large channels:: Large or "Wumbo" channels were introduced in a dynamic way to the network without requiring coordination. Channels that support large payments are advertized as part of the channel announcement messages and can be used in an opt-in manner.
|
||||
|
||||
Multi-Part Payments (MPP):: MPP was also introduced in an opt-in manner, but even better only requires the sender and recipient of a payment to be able to do MPP. The rest of the network simply routes HTLCs as if they are single-part payments.
|
||||
|
||||
Keysend:: An upgrade introduced independently by Lightning client implementations, it allows the sender to send money in an "unsolicited" and asynchronous way without requiring an invoice first.
|
||||
|
||||
HODL invoices:: Payments where the final HTLC is not collected, committing the sender to the payment, but allowing the recipient to delay collection until some other condition is satisfied, or cancel the invoice without collection. This was also implemented independently by different Lightning clients and can be used opt-in.
|
||||
|
||||
Onion routed message services:: The onion routing mechanism, and the underlying public key databse of nodes can be used to send data that is unrelated to payments, such as text messages or forum posts. The use of Lightning to enable paid messaging as a solution to spam posts and sybil attacks (spam) is another innovation that was implemented independently of the core protocol.
|
||||
|
||||
[[lapps]]
|
||||
=== Lightning Applications (LApps)
|
||||
|
||||
While still in their infancy, we are already seeing the emergence of interesting Lightning Applications. Broadly defined as an application that uses the Lightning Protocol or a Lightning client as a component, LApps are the application layer of Lightning. LApps are being built for simple games, messaging applications, micro-services, payable-APIs, paid dispensers (eg. fuel pumps), derivative trading systems, and much more.
|
||||
|
||||
=== Ready, set, go!
|
||||
|
||||
The future is looking bright. The Lightning Network is taking Bitcoin to new unexplored markets and applications. Equipped with the knowledge in this book, you can explore this new frontier, or maybe even join as a pioneer and forge a new path.
|
@ -0,0 +1,113 @@
|
||||
[appendix]
[[appendix_docker]]
== Docker Basic Installation and Use

This book contains a number of examples that run inside Docker containers, for standardization across different operating systems.

This section will help you install Docker and familiarize yourself with some of the most commonly used Docker commands, so that you can run the book's example containers.

=== Installing Docker

Before we begin, you should install the Docker container system on your computer. Docker is an open system that is distributed for free as a _Community Edition_ for many different operating systems, including Windows, macOS, and Linux. The Windows and Mac versions are called _Docker Desktop_ and consist of a GUI desktop application and command-line tools. The Linux version is called _Docker Engine_ and consists of a server daemon and command-line tools. We will be using the command-line tools, which are identical across all platforms.

Go ahead and install Docker for your operating system by following the _"Get Docker"_ instructions from the Docker website found here:

https://docs.docker.com/get-docker/

Select your operating system from the list and follow the installation instructions.

[TIP]
====
If you install on Linux, follow the post-installation instructions to ensure you can run Docker as a regular user instead of user _root_. Otherwise, you will need to prefix all +docker+ commands with +sudo+, running them as root like: +sudo docker+.
====

Once you have Docker installed, you can test your installation by running the demo container +hello-world+ like this:

[docker-hello-world]
----
$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

[...]
----

=== Basic Docker commands

Throughout this book, we use Docker quite extensively. We will be using the following Docker commands and arguments:

*Building a container*

----
docker build [-t tag] [directory]
----

...where +tag+ is how we identify the container we are building, and +directory+ is where the container's "context" (folders and files) and definition file (+Dockerfile+) are found.

*Running a container*

----
docker run -it [--network netname] [--name cname] tag
----

...where +netname+ is the name of a Docker network, +cname+ is the name we choose for this container instance, and +tag+ is the name tag we gave the container when we built it.

*Executing a command in a container*

----
docker exec cname command
----

...where +cname+ is the name we gave the container in the +run+ command, and +command+ is an executable or script that we want to run inside the container.

*Stopping and starting a container*

In most cases, if we are running a container in _interactive_ as well as _terminal_ mode, i.e. with the +i+ and +t+ flags (combined as +-it+) set, the container can be stopped by simply pressing +CTRL-C+, or by exiting the shell with +exit+ or +CTRL-D+. If a container does not terminate, you can stop it from another terminal like this:

----
docker stop cname
----

To resume an already existing container, use the `start` command, like so:

----
docker start cname
----

*Deleting a container by name*

If you name a container instead of letting Docker name it randomly, you cannot reuse that name until the container is deleted. Docker will return an error like this:

[source,bash]
----
docker: Error response from daemon: Conflict. The container name "/bitcoind" is already in use...
----

To fix this, delete the existing instance of the container:

----
docker rm cname
----

...where +cname+ is the name assigned to the container (+bitcoind+ in the example error message).

*List running containers*

----
docker ps
----

...shows the currently running containers and their names.

*List Docker images*

----
docker image ls
----

...shows the Docker images that have been built or downloaded on your computer.

=== Conclusion

These basic Docker commands will be enough to get you started and will allow you to run all the examples in this book.

[appendix]
[[wire_protocol_enumeration]]
[[protocol_messages]]
[[messages]]
== Wire Protocol Messages

This appendix lists all the currently defined message types used in the Lightning P2P protocol. Additionally, we show the structure of each message, grouping the messages into logical groupings based on the protocol flows.

[NOTE]
====
Lightning Protocol messages are extensible and their structure may change during network-wide upgrades. For the authoritative information, consult the latest version of the BOLTs found in the https://github.com/lightningnetwork/lightning-rfc[Github - Lightning-RFC] repository.
====

=== Message Types

Currently defined message types are:

[[apdx_message_types]]
.Message Types
[options="header"]
|===
| Type Integer | Message Name | Category
| 16 | `init` | Connection Establishment
| 17 | `error` | Error Communication
| 18 | `ping` | Connection Liveness
| 19 | `pong` | Connection Liveness
| 32 | `open_channel` | Channel Funding
| 33 | `accept_channel` | Channel Funding
| 34 | `funding_created` | Channel Funding
| 35 | `funding_signed` | Channel Funding
| 36 | `funding_locked` | Channel Funding + Channel Operation
| 38 | `shutdown` | Channel Closing
| 39 | `closing_signed` | Channel Closing
| 128 | `update_add_htlc` | Channel Operation
| 130 | `update_fulfill_htlc` | Channel Operation
| 131 | `update_fail_htlc` | Channel Operation
| 132 | `commitment_signed` | Channel Operation
| 133 | `revoke_and_ack` | Channel Operation
| 134 | `update_fee` | Channel Operation
| 135 | `update_fail_malformed_htlc` | Channel Operation
| 136 | `channel_reestablish` | Channel Operation
| 256 | `channel_announcement` | Channel Announcement
| 257 | `node_announcement` | Channel Announcement
| 258 | `channel_update` | Channel Announcement
| 259 | `announcement_signatures` | Channel Announcement
| 261 | `query_short_channel_ids` | Channel Graph Syncing
| 262 | `reply_short_channel_ids_end` | Channel Graph Syncing
| 263 | `query_channel_range` | Channel Graph Syncing
| 264 | `reply_channel_range` | Channel Graph Syncing
| 265 | `gossip_timestamp_filter` | Channel Graph Syncing
|===

In <<apdx_message_types>>, the `Category` field allows us to quickly categorize a message based on its functionality within the protocol itself. At a high level, we place a message into one of 8 buckets:

* *Connection Establishment*: Sent when a peer-to-peer connection is first established. Also used in order to negotiate the set of features supported by a new connection.

* *Error Communication*: Used by peers to communicate the occurrence of protocol-level errors to each other.

* *Connection Liveness*: Used by peers to check that a given transport connection is still live.

* *Channel Funding*: Used by peers to create a new payment channel. This process is also known as the channel funding process.

* *Channel Closing*: Used by peers to cooperatively close an existing channel.

* *Channel Operation*: The act of updating a given channel off-chain. This includes sending and receiving payments, as well as forwarding payments within the network.

* *Channel Announcement*: The process of announcing a new public channel to the wider network so it can be used for routing purposes.

* *Channel Graph Syncing*: The process of downloading and verifying the channel graph.

Notice how messages that belong to the same category typically share an adjacent _message type_ as well. This is done on purpose in order to group semantically similar messages together within the specification itself.
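Because every message carries its type as a prefix, this categorization can be done mechanically. Here's a minimal sketch in Python — the dictionary covers only a few of the types from the table above, and the helper name is ours, not from any implementation:

```python
import struct

# A few of the message types from the table above, keyed by type integer.
CATEGORIES = {
    16: "Connection Establishment",
    17: "Error Communication",
    18: "Connection Liveness",
    19: "Connection Liveness",
    32: "Channel Funding",
    128: "Channel Operation",
    256: "Channel Announcement",
}

def message_category(msg: bytes) -> str:
    # Every Lightning message begins with its type as a big-endian uint16.
    (msg_type,) = struct.unpack_from(">H", msg, 0)
    return CATEGORIES.get(msg_type, "Unknown")
```

For example, `message_category(struct.pack(">H", 18))` returns `"Connection Liveness"`.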

=== Message Structure

We now detail each message category in order to define the precise structure and semantics of all defined messages within the LN protocol.

==== Connection Establishment Messages

Messages in this category are the very first messages sent between peers once they establish a transport connection. At the time of writing of this chapter, there exists only a single message within this category: the `init` message. The `init` message is sent by _both_ sides of the connection once it has first been established. No other messages are to be sent before the `init` message has been sent by both parties.

The structure of the `init` message is defined as follows:

[[apdx_init_message]]
===== The `init` message

* type: *16*
* fields:
** `uint16`: `global_features_len`
** `global_features_len*byte`: `global_features`
** `uint16`: `features_len`
** `features_len*byte`: `features`
** `tlv_stream`: `tlvs`

Structurally, the `init` message is composed of two variable-size byte slices that each store a set of _feature bits_. As we see in <<feature_bits>>, feature bits are a primitive used within the protocol in order to advertise the set of protocol features a node either understands (optional features), or demands (required features).

Note that modern node implementations will only use the `features` field, with the `global_features` field retained primarily for _historical_ purposes (backwards compatibility).

What follows after the core message is a series of Type-Length-Value (TLV) records, which can be used to extend the message in a forward and backward compatible manner in the future. We cover what TLV records are and how they're used in <<tlv>>.
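The length-prefixed layout above makes the `init` body straightforward to decode. A minimal sketch in Python, assuming the 2-byte type prefix has already been stripped (the function name `parse_init` is ours, not from any implementation):

```python
import struct

def parse_init(body: bytes):
    """Split an init message body into its feature vectors and TLV stream.

    Layout: u16 gflen | gflen bytes | u16 flen | flen bytes | TLV stream.
    """
    (gflen,) = struct.unpack_from(">H", body, 0)
    global_features = body[2:2 + gflen]
    offset = 2 + gflen
    (flen,) = struct.unpack_from(">H", body, offset)
    features = body[offset + 2:offset + 2 + flen]
    # Everything after the two feature vectors is the optional TLV stream.
    tlvs = body[offset + 2 + flen:]
    return global_features, features, tlvs

# A hand-built body: empty global_features, a single feature byte 0x08.
gf, feats, tlvs = parse_init(struct.pack(">H", 0) + struct.pack(">H", 1) + b"\x08")
```

Running this on the hand-built body yields an empty `global_features` vector, the single feature byte, and an empty TLV stream.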

An `init` message is then examined by a peer in order to determine if the connection is well defined based on the set of optional and required feature bits advertised by both sides.

An optional feature means that a peer knows about a feature, but doesn't consider it critical to the operation of a new connection. An example would be the ability to understand the semantics of a newly added field to an existing message.

On the other hand, a required feature indicates that if the other peer doesn't know about the feature, then the connection isn't well defined. An example of such a feature would be a theoretical new channel type within the protocol: if your peer doesn't know of this feature, then you don't want to keep the connection, as they're unable to open your new preferred channel type.
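This compatibility check follows directly from the even/odd convention of feature bits. A minimal sketch in Python of the "it's OK to be odd" rule — the helper names are ours, and this simplified view ignores the pairing of even/odd bits:

```python
def set_bits(feature_vector: bytes):
    # Feature vectors are big-endian: bit 0 is the least significant
    # bit of the last byte.
    n = int.from_bytes(feature_vector, "big")
    bit = 0
    while n:
        if n & 1:
            yield bit
        n >>= 1
        bit += 1

def connection_ok(their_features: bytes, we_understand: set) -> bool:
    # Even bits are *required*: if the peer requires a feature we don't
    # understand, the connection isn't well defined. Odd (optional) bits
    # are always safe to ignore.
    return all(bit in we_understand or bit % 2 == 1
               for bit in set_bits(their_features))
```

For example, a peer advertising only bit 1 (odd, optional) is always acceptable, while a peer advertising bit 0 (even, required) is only acceptable if we understand that feature.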

==== Error Communication Messages

Messages in this category are used to send connection-level errors between two peers. Another type of error exists in the protocol: an HTLC forwarding-level error. Connection-level errors may signal things like feature bit incompatibility, or the intent to force close (unilaterally broadcast the latest signed commitment).

The sole message in this category is the `error` message:

[[apdx_error_message]]
===== The `error` message

* type: *17*
* fields:
** `channel_id` : `chan_id`
** `uint16` : `data_len`
** `data_len*byte` : `data`

An `error` message can be sent within the scope of a particular channel by setting the `channel_id` to the `channel_id` of the channel undergoing this new error state. Alternatively, if the error applies to the connection in general, then the `channel_id` field should be set to all zeroes. This all-zero `channel_id` is also known as the connection-level identifier for an error.
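Serializing an `error` message is simply a matter of concatenating the fields above. A minimal sketch in Python, where the all-zero `channel_id` marks a connection-level error (`encode_error` is our own name, not from any implementation):

```python
import struct

ERROR_TYPE = 17
CONNECTION_LEVEL = bytes(32)  # all-zero channel_id: applies to the connection

def encode_error(channel_id: bytes, data: bytes) -> bytes:
    assert len(channel_id) == 32
    # type (u16) | channel_id (32 bytes) | data_len (u16) | data
    return (struct.pack(">H", ERROR_TYPE) + channel_id
            + struct.pack(">H", len(data)) + data)

msg = encode_error(CONNECTION_LEVEL, b"feature bit incompatibility")
```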

Depending on the nature of the error, sending an `error` message to a peer you have a channel with may indicate that the channel cannot continue without manual intervention, so the only option at that point is to force close the channel by broadcasting the latest commitment state of the channel.

==== Connection Liveness

Messages in this section are used to determine whether a connection is still live. As the LN protocol somewhat abstracts over the underlying transport being used to transmit the messages, a set of protocol-level `ping` and `pong` messages are defined.

[[apdx_ping_message]]
===== The `ping` message

* type: *18*
* fields:
** `uint16` : `num_pong_bytes`
** `uint16` : `ping_body_len`
** `ping_body_len*byte` : `ping_body`

Next, its companion, the `pong` message.

[[apdx_pong_message]]
===== The `pong` message

* type: *19*
* fields:
** `uint16` : `pong_body_len`
** `pong_body_len*byte` : `pong_body`

A `ping` message can be sent by either party at any time.

The `ping` message includes a `num_pong_bytes` field that instructs the receiving node how large the payload of its `pong` message should be. The `ping` message also includes a `ping_body` field, an opaque set of bytes which can be safely ignored. It only serves to allow a sender to pad out the `ping` messages they send, which can be useful in attempting to thwart certain de-anonymization techniques based on packet sizes on the wire.

A `pong` message should be sent in response to a received `ping` message. The receiver should generate a set of `num_pong_bytes` random bytes to send back as the `pong_body` field. Clever use of these fields/messages may allow a privacy-conscious routing node to thwart certain classes of network de-anonymization attempts, as it can create a "fake" transcript that resembles other messages based on the packet sizes sent across. Remember that by default the LN uses an _encrypted_ transport, so a passive network monitor cannot read the plaintext bytes, and thus only has timing and packet sizes to go on.
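The ping/pong exchange described above can be sketched in a few lines. A minimal example in Python, with the 2-byte type prefix included in each message (the function names are ours, not from any implementation):

```python
import os
import struct

PING, PONG = 18, 19

def make_ping(num_pong_bytes: int, pad: int = 0) -> bytes:
    # type | num_pong_bytes | ping_body_len | ping_body (ignorable padding)
    return struct.pack(">HHH", PING, num_pong_bytes, pad) + os.urandom(pad)

def handle_ping(ping: bytes) -> bytes:
    msg_type, num_pong_bytes = struct.unpack_from(">HH", ping, 0)
    assert msg_type == PING
    # Respond with the requested number of (ignorable) random bytes.
    return struct.pack(">HH", PONG, num_pong_bytes) + os.urandom(num_pong_bytes)
```

A node asking for a 100-byte `pong_body` while padding its own `ping` with 5 bytes would send `make_ping(100, pad=5)`; the responder's `pong` is then 104 bytes on the wire (4 bytes of header plus the requested body).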

==== Channel Funding

As we go on, we enter into the territory of the core messages that govern the functionality and semantics of the Lightning Protocol. In this section, we explore the messages sent during the process of creating a new channel. We only describe the fields used, as we leave an in-depth analysis of the funding process to <<payment_channels>>.

Messages that are sent during the channel funding flow belong to the following set of 5 messages: `open_channel`, `accept_channel`, `funding_created`, `funding_signed`, and `funding_locked`.

The detailed protocol flow using these messages is described in <<payment_channels>>.

[[apdx_open_channel_message]]
===== The `open_channel` message

* type: *32*
* fields:
** `chain_hash` : `chain_hash`
** `32*byte` : `temp_chan_id`
** `uint64` : `funding_satoshis`
** `uint64` : `push_msat`
** `uint64` : `dust_limit_satoshis`
** `uint64` : `max_htlc_value_in_flight_msat`
** `uint64` : `channel_reserve_satoshis`
** `uint64` : `htlc_minimum_msat`
** `uint32` : `feerate_per_kw`
** `uint16` : `to_self_delay`
** `uint16` : `max_accepted_htlcs`
** `pubkey` : `funding_pubkey`
** `pubkey` : `revocation_basepoint`
** `pubkey` : `payment_basepoint`
** `pubkey` : `delayed_payment_basepoint`
** `pubkey` : `htlc_basepoint`
** `pubkey` : `first_per_commitment_point`
** `byte` : `channel_flags`
** `tlv_stream` : `tlvs`

This is the first message sent when a node wishes to execute a new funding flow with another node. This message contains all the necessary information required for both peers to construct both the funding transaction and the commitment transaction.

At the time of writing of this chapter, a single TLV record is defined within the set of optional TLV records that may be appended to the end of a defined message:

* type: *0*
* data: `upfront_shutdown_script`

The `upfront_shutdown_script` is a variable-size byte slice that MUST be a valid public key script as accepted by the Bitcoin network's consensus algorithm. By providing such an address, the sending party is able to effectively create a "closed loop" for their channel, as neither side will sign off on a cooperative closure transaction that pays to any other address. In practice, this address is usually one derived from a cold storage wallet.

The `channel_flags` field is a bitfield of which, at the time of writing, only the _first_ bit has any significance. If this bit is set, then this denotes that this channel is to be advertised to the public network as a routable channel. Otherwise, the channel is considered to be unadvertised, also commonly referred to as a "private" channel.
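This single-bit check can be expressed in one line. A minimal sketch in Python (the constant name is ours, not from any implementation):

```python
ANNOUNCE_CHANNEL = 1 << 0  # first (least significant) bit of channel_flags

def is_public_channel(channel_flags: int) -> bool:
    # Set: the channel will be announced to the network as routable.
    # Unset: an unadvertised ("private") channel.
    return bool(channel_flags & ANNOUNCE_CHANNEL)
```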

The `accept_channel` message is the response to the `open_channel` message:

[[apdx_accept_channel_message]]
===== The `accept_channel` message

* type: *33*
* fields:
** `32*byte` : `temp_chan_id`
** `uint64` : `dust_limit_satoshis`
** `uint64` : `max_htlc_value_in_flight_msat`
** `uint64` : `channel_reserve_satoshis`
** `uint64` : `htlc_minimum_msat`
** `uint32` : `minimum_depth`
** `uint16` : `to_self_delay`
** `uint16` : `max_accepted_htlcs`
** `pubkey` : `funding_pubkey`
** `pubkey` : `revocation_basepoint`
** `pubkey` : `payment_basepoint`
** `pubkey` : `delayed_payment_basepoint`
** `pubkey` : `htlc_basepoint`
** `pubkey` : `first_per_commitment_point`
** `tlv_stream` : `tlvs`

The `accept_channel` message is the second message sent during the funding flow process. It serves to acknowledge an intent to open a channel with a new remote peer. The message mostly echoes the set of parameters that the responder wishes to apply to their version of the commitment transaction. In <<payment_channels>>, when we go into the funding process in detail, we do a deep dive to explore the implications of the various parameters that can be set when opening a new channel.

In response, the initiator will send the `funding_created` message.

[[apdx_funding_created_message]]
===== The `funding_created` message

* type: *34*
* fields:
** `32*byte` : `temp_chan_id`
** `32*byte` : `funding_txid`
** `uint16` : `funding_output_index`
** `sig` : `commit_sig`

Once the initiator of a channel receives the `accept_channel` message from the responder, they have all the materials they need in order to construct the commitment transaction, as well as the funding transaction. As channels by default are single funder (only one side commits funds), only the initiator needs to construct the funding transaction. As a result, in order to allow the responder to sign a version of a commitment transaction for the initiator, the initiator only needs to send the funding outpoint of the channel.

To conclude, the responder sends the `funding_signed` message.

[[apdx_funding_signed_message]]
===== The `funding_signed` message

* type: *35*
* fields:
** `channel_id` : `channel_id`
** `sig` : `signature`

After the responder receives the `funding_created` message, they possess a valid signature of the commitment transaction by the initiator. With this signature they're able to exit the channel at any time by signing their half of the multi-sig funding output and broadcasting the transaction. This is referred to as a force close. In order to give the initiator the same ability before the channel can be used, the responder also signs the initiator's commitment transaction.

Once this message has been received by the initiator, it's safe for them to broadcast the funding transaction, as they're now able to exit the channel agreement unilaterally.

Once the funding transaction has received enough confirmations, the `funding_locked` message is sent.

[[apdx_funding_locked_message]]
===== The `funding_locked` message

* type: *36*
* fields:
** `channel_id` : `channel_id`
** `pubkey` : `next_per_commitment_point`

Once the funding transaction obtains a `minimum_depth` number of confirmations, the `funding_locked` message is to be sent by both sides. Only after this message has been both received and sent can the channel begin to be used.

==== Channel Closing

Channel closing is a multi-step process. One node initiates by sending the `shutdown` message. The two channel partners then exchange a series of `closing_signed` messages to negotiate mutually acceptable fees for the closing transaction. The channel funder sends the first `closing_signed` message, and the other side can accept by sending a `closing_signed` message with the same fee values.

[[apdx_shutdown_message]]
===== The `shutdown` message

* type: *38*
* fields:
** `channel_id` : `channel_id`
** `u16` : `len`
** `len*byte` : `scriptpubkey`

[[apdx_closing_signed_message]]
===== The `closing_signed` message

* type: *39*
* fields:
** `channel_id` : `channel_id`
** `u64` : `fee_satoshis`
** `signature` : `signature`

==== Channel Operation

In this section, we briefly describe the set of messages used to allow nodes to operate a channel. By operation, we mean being able to send, receive, and forward payments for a given channel.

In order to send, receive, or forward a payment over a channel, an HTLC must first be added to both commitment transactions that comprise a channel link.

The `update_add_htlc` message allows either side to add a new HTLC to the opposite commitment transaction.

[[apdx_update_add_htlc_message]]
===== The `update_add_htlc` message

* type: *128*
* fields:
** `channel_id` : `channel_id`
** `uint64` : `id`
** `uint64` : `amount_msat`
** `sha256` : `payment_hash`
** `uint32` : `cltv_expiry`
** `1366*byte` : `onion_routing_packet`

Sending this message allows one party to initiate either sending a new payment, or forwarding an existing payment that arrived via an incoming channel. The message specifies the amount (`amount_msat`) along with the payment hash that unlocks the payment itself. The set of forwarding instructions for the next hop are onion encrypted within the `onion_routing_packet` field. In <<onion_routing>>, on multi-hop HTLC forwarding, we describe the onion routing protocol used in the Lightning Network in detail.

Note that each HTLC sent uses an auto-incrementing ID, which is used by any message that modifies an HTLC (settle or cancel) to reference the HTLC in a unique manner scoped to the channel.

The `update_fulfill_htlc` message allows redemption (receipt) of an active HTLC.

[[apdx_update_fulfill_hltc_message]]
===== The `update_fulfill_htlc` message

* type: *130*
* fields:
** `channel_id` : `channel_id`
** `uint64` : `id`
** `32*byte` : `payment_preimage`

This message is sent by the HTLC receiver to the proposer in order to redeem an active HTLC. The message references the `id` of the HTLC in question, and also provides the pre-image (which unlocks the HTLC) as well.

The `update_fail_htlc` message is sent to remove an HTLC from a commitment transaction.

[[apdx_update_fail_htlc_message]]
===== The `update_fail_htlc` message

* type: *131*
* fields:
** `channel_id` : `channel_id`
** `uint64` : `id`
** `uint16` : `len`
** `len*byte` : `reason`

The `update_fail_htlc` message is the opposite of the `update_fulfill_htlc` message, as it allows the receiver of an HTLC to remove the very same HTLC. This message is typically sent when an HTLC cannot be properly routed upstream and needs to be sent back to the sender in order to unravel the HTLC chain. As we explore in Chapter XX, the message contains an _encrypted_ failure reason (`reason`), which may allow the sender to either adjust their payment route, or terminate if the failure itself is a terminal one.

The `commitment_signed` message is used to stamp the creation of a new commitment transaction.

[[apdx_commitment_signed_message]]
===== The `commitment_signed` message

* type: *132*
* fields:
** `channel_id` : `channel_id`
** `sig` : `signature`
** `uint16` : `num_htlcs`
** `num_htlcs*sig` : `htlc_signature`

In addition to sending a signature for the next commitment transaction, the sender of this message also needs to send a signature for each HTLC that's present on the commitment transaction. This is due to the existence of the second-level HTLC transactions: each HTLC output is spent by its own pre-signed transaction, so a distinct signature must be provided for every one of them.

The `revoke_and_ack` message is sent to revoke an outdated commitment:

[[apdx_revoke_and_ack_message]]
===== The `revoke_and_ack` message

* type: *133*
* fields:
** `channel_id` : `channel_id`
** `32*byte` : `per_commitment_secret`
** `pubkey` : `next_per_commitment_point`

As the Lightning Network uses a replace-by-revoke commitment transaction model, after receiving a new commitment transaction via the `commitment_signed` message, a party must revoke their past commitment before they're able to receive another one. While revoking a commitment transaction, the revoker also provides the next commitment point that's required to allow the other party to send them a new commitment state.

The `update_fee` message is sent to update the fee on the current commitment transactions.

[[apdx_update_fee_message]]
===== The `update_fee` message

* type: *134*
* fields:
** `channel_id` : `channel_id`
** `uint32` : `feerate_per_kw`

This message can only be sent by the initiator of the channel, as they're the ones that will pay for the commitment fee of the channel as long as it's open.

The `update_fail_malformed_htlc` message is sent to remove a corrupted HTLC:

[[apdx_update_fail_malformed_htlc_message]]
===== The `update_fail_malformed_htlc` message

* type: *135*
* fields:
** `channel_id` : `channel_id`
** `uint64` : `id`
** `sha256` : `sha256_of_onion`
** `uint16` : `failure_code`

This message is similar to the `update_fail_htlc` message, but it's rarely used in practice. As mentioned above, each HTLC carries an onion-encrypted routing packet that also covers the integrity of portions of the HTLC itself. If a party receives an onion packet that has somehow been corrupted along the way, then it won't be able to decrypt the packet. As a result, it also can't properly forward the HTLC, so it'll send this message to signify that the HTLC has been corrupted somewhere along the route back to the sender.
|
||||
|
||||
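The `sha256_of_onion` field lets the original sender check which onion payload the failing node actually received. A minimal sketch of that comparison:

```python
import hashlib

def onion_matches(sent_onion: bytes, sha256_of_onion: bytes) -> bool:
    """Return True if the hash reported in update_fail_malformed_htlc
    matches the onion packet the sender originally transmitted."""
    return hashlib.sha256(sent_onion).digest() == sha256_of_onion

onion = b"\x00" * 1366  # onion packets have a fixed size on the wire
digest = hashlib.sha256(onion).digest()
assert onion_matches(onion, digest)
assert not onion_matches(onion, bytes(32))
```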
==== Channel Announcement

Messages in this category are used to announce components of the Channel Graph
authenticated data structure to the wider network. The Channel Graph has a
series of unique properties, due to the condition that all data added to the
Channel Graph MUST also be anchored in the base Bitcoin blockchain. As a
result, in order to add a new entry to the Channel Graph, an agent must pay an
on-chain transaction fee. This serves as a natural spam deterrent for the
Lightning Network.

The `channel_announcement` message is used to announce a new channel to the
wider network.
[[apdx_channel_announcement_message]]
===== The `channel_announcement` message

* type: *256*
* fields:
** `sig` : `node_signature_1`
** `sig` : `node_signature_2`
** `sig` : `bitcoin_signature_1`
** `sig` : `bitcoin_signature_2`
** `uint16` : `len`
** `len*byte` : `features`
** `chain_hash` : `chain_hash`
** `short_channel_id` : `short_channel_id`
** `pubkey` : `node_id_1`
** `pubkey` : `node_id_2`
** `pubkey` : `bitcoin_key_1`
** `pubkey` : `bitcoin_key_2`

The series of signatures and public keys in the message serves to create a
_proof_ that the channel actually exists within the base Bitcoin blockchain.
As we detail in <<scid>>, each channel is uniquely identified by a locator
that encodes its _location_ within the blockchain. This locator is called the
`short_channel_id` and fits into a 64-bit integer.
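The locator packs the funding transaction's block height, transaction index, and output index into a single 64-bit integer. A sketch of that encoding:

```python
def encode_short_channel_id(block: int, tx_index: int,
                            output_index: int) -> int:
    """Pack block height (3 bytes), transaction index (3 bytes), and
    output index (2 bytes) into one 64-bit short_channel_id."""
    return (block << 40) | (tx_index << 16) | output_index

def decode_short_channel_id(scid: int) -> tuple:
    """Recover (block, tx_index, output_index) from a short_channel_id."""
    return (scid >> 40, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF)

scid = encode_short_channel_id(700_123, 42, 1)
assert decode_short_channel_id(scid) == (700_123, 42, 1)
```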
The `node_announcement` message allows a node to announce/update its vertex
within the greater Channel Graph.

[[apdx_node_announcement_message]]
===== The `node_announcement` message

* type: *257*
* fields:
** `sig` : `signature`
** `uint16` : `flen`
** `flen*byte` : `features`
** `uint32` : `timestamp`
** `pubkey` : `node_id`
** `3*byte` : `rgb_color`
** `32*byte` : `alias`
** `uint16` : `addrlen`
** `addrlen*byte` : `addresses`

Note that if a node doesn't have any advertised channels within the Channel
Graph, then this message is ignored, in order to ensure that adding an item to
the Channel Graph bears an on-chain cost. In this case, the on-chain cost is
the cost of creating a channel to which this node is connected.

In addition to advertising its feature set, this message also allows a node to
announce/update the set of network `addresses` that it can be reached at.
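The `alias` field is a fixed 32-byte UTF-8 string, padded on the right with zero bytes. A small sketch of decoding it for display:

```python
def decode_alias(alias_field: bytes) -> str:
    """Decode the fixed 32-byte alias: UTF-8, zero-padded on the right.
    Undecodable bytes are replaced rather than raising an error."""
    return alias_field.rstrip(b"\x00").decode("utf-8", errors="replace")

raw = b"lnbook-node" + bytes(32 - len(b"lnbook-node"))
assert len(raw) == 32
assert decode_alias(raw) == "lnbook-node"
```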
The `channel_update` message is sent to update the properties and policies of
an active channel edge within the Channel Graph.

[[apdx_channel_update_message]]
===== The `channel_update` message

* type: *258*
* fields:
** `signature` : `signature`
** `chain_hash` : `chain_hash`
** `short_channel_id` : `short_channel_id`
** `uint32` : `timestamp`
** `byte` : `message_flags`
** `byte` : `channel_flags`
** `uint16` : `cltv_expiry_delta`
** `uint64` : `htlc_minimum_msat`
** `uint32` : `fee_base_msat`
** `uint32` : `fee_proportional_millionths`
** `uint64` : `htlc_maximum_msat`

In addition to being able to enable/disable a channel, this message allows a
node to update its routing fees, as well as other fields that shape the type
of payment that is permitted to flow through this channel.
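Together, `fee_base_msat` and `fee_proportional_millionths` determine the fee a node charges for forwarding a payment of a given amount. A sketch of that calculation:

```python
def routing_fee_msat(amount_msat: int,
                     fee_base_msat: int,
                     fee_proportional_millionths: int) -> int:
    """Fee charged to forward amount_msat through a channel, given the
    advertised base fee and proportional fee (in parts per million)."""
    return (fee_base_msat
            + amount_msat * fee_proportional_millionths // 1_000_000)

# 1000 msat base fee plus 1 ppm on a 100,000,000 msat payment
assert routing_fee_msat(100_000_000, 1000, 1) == 1100
```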
The `announce_signatures` message is exchanged by channel peers in order to
assemble the set of signatures required to produce a `channel_announcement`
message.

[[apdx_announce_signatures_message]]
===== The `announce_signatures` message

* type: *259*
* fields:
** `channel_id` : `channel_id`
** `short_channel_id` : `short_channel_id`
** `sig` : `node_signature`
** `sig` : `bitcoin_signature`

After the `funding_locked` message has been sent, if both sides wish to
advertise their channel to the network, then they'll each send the
`announce_signatures` message, which allows both sides to collect the four
signatures required to generate a `channel_announcement` message.
==== Channel Graph Syncing

The `query_short_chan_ids` message allows a peer to obtain the channel
information related to a series of short channel IDs.

[[apdx_query_short_chan_ids_message]]
===== The `query_short_chan_ids` message

* type: *261*
* fields:
** `chain_hash` : `chain_hash`
** `u16` : `len`
** `len*byte` : `encoded_short_ids`
** `query_short_channel_ids_tlvs` : `tlvs`

As we learn in <<gossip>>, these channel IDs may refer to a series of channels
that were either new to the sender or out of date, allowing the sender to
obtain the latest set of information for a set of channels.
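The `encoded_short_ids` field begins with a single byte naming the encoding (in BOLT #7, `0` means an uncompressed array; a zlib-compressed variant also existed historically), followed by the packed 8-byte IDs. A sketch of the uncompressed encoding:

```python
import struct

def encode_short_ids(short_ids: list) -> bytes:
    """Encode a list of 64-bit short channel IDs using the uncompressed
    encoding: a leading 0x00 byte, then each ID as a big-endian uint64."""
    return b"\x00" + b"".join(struct.pack(">Q", scid) for scid in short_ids)

encoded = encode_short_ids([1, 2, 3])
assert encoded[0] == 0
assert len(encoded) == 1 + 3 * 8
```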
The `reply_short_chan_ids_end` message is sent after a peer finishes
responding to a prior `query_short_chan_ids` message.

[[apdx_reply_short_chan_ids_end_message]]
===== The `reply_short_chan_ids_end` message

* type: *262*
* fields:
** `chain_hash` : `chain_hash`
** `byte` : `full_information`

This message signals to the receiving party that if they wish to send another
query message, they can now do so.
The `query_channel_range` message allows a node to query for the set of
channels opened within a block range.

[[apdx_query_channel_range_message]]
===== The `query_channel_range` message

* type: *263*
* fields:
** `chain_hash` : `chain_hash`
** `u32` : `first_blocknum`
** `u32` : `number_of_blocks`
** `query_channel_range_tlvs` : `tlvs`

As channels are represented using a short channel ID that encodes the location
of a channel in the chain, a node on the network can use a block height as a
sort of _cursor_ to seek through the chain in order to discover a set of newly
opened channels.
The `reply_channel_range` message is the response to `query_channel_range` and
includes the set of short channel IDs for known channels within that range.

[[apdx_reply_channel_range_message]]
===== The `reply_channel_range` message

* type: *264*
* fields:
** `chain_hash` : `chain_hash`
** `u32` : `first_blocknum`
** `u32` : `number_of_blocks`
** `byte` : `sync_complete`
** `u16` : `len`
** `len*byte` : `encoded_short_ids`
** `reply_channel_range_tlvs` : `tlvs`

As a response to `query_channel_range`, this message sends back the set of
channels that were opened within the queried range. This process can be
repeated, with the requester advancing their cursor further down the chain in
order to continue syncing the Channel Graph.
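The cursor-based sync described above can be sketched as a simple paging loop; the `query` callable here is a hypothetical stand-in for sending `query_channel_range` and collecting the short channel IDs from the replies:

```python
def sync_channel_graph(query, start_block: int, batch: int, tip: int) -> list:
    """Page through the chain, querying `batch` blocks at a time until
    the cursor passes the chain tip. `query(first_blocknum,
    number_of_blocks)` is assumed to return the list of short channel
    IDs found in that block range."""
    short_ids, cursor = [], start_block
    while cursor <= tip:
        short_ids.extend(query(cursor, batch))
        cursor += batch  # advance the cursor to the next block range
    return short_ids

# Toy query: one "channel" per queried block
fake_query = lambda first, count: list(range(first, first + count))
ids = sync_channel_graph(fake_query, start_block=0, batch=10, tip=25)
assert len(ids) == 30  # three batches of ten blocks
```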
The `gossip_timestamp_range` message allows a peer to start receiving new
incoming gossip messages on the network.

[[apdx_gossip_timestamp_range_message]]
===== The `gossip_timestamp_range` message

* type: *265*
* fields:
** `chain_hash` : `chain_hash`
** `u32` : `first_timestamp`
** `u32` : `timestamp_range`

Once a peer has synced the Channel Graph, they can send this message if they
wish to receive real-time updates on changes in the Channel Graph. They can
also set the `first_timestamp` and `timestamp_range` fields if they wish to
receive a backlog of updates they may have missed while they were down.
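A node applying this filter forwards a gossip message only when its timestamp falls within the window the peer requested. A minimal sketch of the check:

```python
def passes_gossip_filter(timestamp: int,
                         first_timestamp: int,
                         timestamp_range: int) -> bool:
    """True if a gossip message's timestamp falls within the window
    [first_timestamp, first_timestamp + timestamp_range)."""
    return first_timestamp <= timestamp < first_timestamp + timestamp_range

assert passes_gossip_filter(1_000, first_timestamp=500, timestamp_range=600)
assert not passes_gossip_filter(2_000, first_timestamp=500, timestamp_range=600)
```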
#!make
#
# Makefile to help with building, pulling and pushing containers
#
# NOTE: You cannot push to the container registry unless you are authorized
# in the lnbook organization (i.e. one of the authors or maintainers)
#
# Targets:
#
# make build           # Build all containers
# make pull            # Pull all containers from the registry
# make build-bitcoind  # Build a specific container
# make clean           # Remove all images and containers
# make push            # Push updated images to Docker Hub (authors/maintainers only)


# Latest tested versions of Bitcoin and Lightning clients

# OS base image
OS=ubuntu
OS_VER=focal

# bitcoind version
BITCOIND_VER=0.21.0

# LND version
GO_VER=1.13
LND_VER=v0.13.1-beta

# c-lightning version
CL_VER=0.10.1

# Eclair version
ECLAIR_VER=0.4.2
ECLAIR_COMMIT=52444b0

# Docker registry for lnbook
REGISTRY=docker.com
NAME=lnbook
ORG=lnbook

# List of containers
CONTAINERS=bitcoind lnd eclair c-lightning

all: build push
.DEFAULT: pull

build-all:
	for container in ${CONTAINERS}; do \
		docker build -t ${NAME}/$$container $$container -f $$container/Dockerfile; \
	done

build-bitcoind:
	docker build \
		--build-arg OS=${OS} \
		--build-arg OS_VER=${OS_VER} \
		--build-arg BITCOIND_VER=${BITCOIND_VER} \
		-t ${ORG}/bitcoind:${BITCOIND_VER} \
		bitcoind -f bitcoind/Dockerfile
	docker image tag ${ORG}/bitcoind:${BITCOIND_VER} ${ORG}/bitcoind:latest

build-cl: build-bitcoind
	docker build \
		--build-arg OS=${OS} \
		--build-arg OS_VER=${OS_VER} \
		--build-arg CL_VER=${CL_VER} \
		-t ${ORG}/c-lightning:${CL_VER} \
		c-lightning -f c-lightning/Dockerfile
	docker image tag ${ORG}/c-lightning:${CL_VER} ${ORG}/c-lightning:latest

build-lnd:
	docker build \
		--build-arg OS=${OS} \
		--build-arg OS_VER=${OS_VER} \
		--build-arg LND_VER=${LND_VER} \
		--build-arg GO_VER=${GO_VER} \
		-t ${ORG}/lnd:${LND_VER}_golang_${GO_VER} \
		lnd -f lnd/Dockerfile
	docker image tag ${ORG}/lnd:${LND_VER}_golang_${GO_VER} ${ORG}/lnd:latest

build-eclair:
	docker build \
		--build-arg OS=${OS} \
		--build-arg OS_VER=${OS_VER} \
		--build-arg ECLAIR_VER=${ECLAIR_VER} \
		--build-arg ECLAIR_COMMIT=${ECLAIR_COMMIT} \
		-t ${ORG}/eclair:${ECLAIR_VER}-${ECLAIR_COMMIT} \
		eclair -f eclair/Dockerfile
	docker image tag ${ORG}/eclair:${ECLAIR_VER}-${ECLAIR_COMMIT} ${ORG}/eclair:latest

push-bitcoind: build-bitcoind
	docker push ${ORG}/bitcoind:${BITCOIND_VER}
	docker push ${ORG}/bitcoind:latest

push-lnd: build-lnd
	docker push ${ORG}/lnd:${LND_VER}_golang_${GO_VER}
	docker push ${ORG}/lnd:latest

push-cl: build-cl
	docker push ${ORG}/c-lightning:${CL_VER}
	docker push ${ORG}/c-lightning:latest

push-eclair: build-eclair
	docker push ${ORG}/eclair:${ECLAIR_VER}-${ECLAIR_COMMIT}
	docker push ${ORG}/eclair:latest

build: build-bitcoind build-lnd build-cl build-eclair

push: push-bitcoind push-lnd push-cl push-eclair

pull:
	for container in ${CONTAINERS}; do \
		docker pull ${ORG}/$$container:latest ;\
	done

clean:
	# Try 'make clean-confirm' if you are sure you want to do this.
	# CAUTION: ALL docker containers and images on your computer will be removed.

clean-confirm:
	docker rm -f `docker ps -qa`
	docker rmi -f `docker image ls -qa`
#!/bin/bash
set -Eeuo pipefail

# Start bitcoind
echo "Starting bitcoind..."
bitcoind -datadir=/bitcoind -daemon

# Wait for bitcoind startup
echo -n "Waiting for bitcoind to start"
until bitcoin-cli -datadir=/bitcoind -rpcwait getblockchaininfo > /dev/null 2>&1
do
    echo -n "."
    sleep 1
done
echo
echo "bitcoind started"

# Load private key into wallet
export address=$(cat /bitcoind/keys/demo_address.txt)
export privkey=$(cat /bitcoind/keys/demo_privkey.txt)

# If restarting, the wallet already exists, so don't fail if it does;
# just load the existing wallet:
bitcoin-cli -datadir=/bitcoind createwallet regtest > /dev/null || bitcoin-cli -datadir=/bitcoind loadwallet regtest > /dev/null
bitcoin-cli -datadir=/bitcoind importprivkey $privkey > /dev/null || true

echo "================================================"
echo "Imported demo private key"
echo "Bitcoin address: " ${address}
echo "Private key: " ${privkey}
echo "================================================"

# Execute CMD
echo "$@"
exec "$@"
#!/bin/bash
#
# Helper script used as an alias for bitcoin-cli with the necessary arguments
#
/usr/bin/bitcoin-cli -datadir=/bitcoind -regtest "$@"
#!/bin/bash
#
# Helper script used as an alias for lightning-cli with the necessary arguments
#
/usr/bin/lightning-cli --lightning-dir=/lightningd "$@"
#!/bin/bash

# A small script to help sanity-check the versions of the different node implementations
dockerfiles=$(find . -name 'Dockerfile')
# Print location of Dockerfiles
echo $dockerfiles
# Print version variables
awk '/ENV/ && /VER|COMMIT/' $dockerfiles
awk '/ARG/ && /VER|COMMIT/' $dockerfiles
#!/bin/bash
#
# Helper script used as an alias for eclair-cli with the necessary arguments
#
/usr/local/bin/eclair-cli -s -j -p eclair "$@"
#!/bin/bash
#
# Helper script used as an alias for lncli with the necessary arguments
#
/go/bin/lncli --lnddir=/lnd -n regtest "$@"
#!/bin/bash

#
# Helper functions
#

# run-in-node: Run a command inside a docker container, using the bash shell
function run-in-node () {
	docker exec "$1" /bin/bash -c "${@:2}"
}

# wait-for-cmd: Run a command repeatedly until it completes/exits successfully
function wait-for-cmd () {
	until "${@}" > /dev/null 2>&1
	do
		echo -n "."
		sleep 1
	done
	echo
}

# wait-for-node: Run a command repeatedly until it completes successfully, inside a container
# Combining wait-for-cmd and run-in-node
function wait-for-node () {
	wait-for-cmd run-in-node $1 "${@:2}"
}


# Start the demo
echo "Starting Payment Demo"

echo "======================================================"
echo
echo "Waiting for nodes to startup"
echo -n "- Waiting for bitcoind startup..."
wait-for-node bitcoind "cli getblockchaininfo | jq -e \".blocks > 101\""
echo -n "- Waiting for bitcoind mining..."
wait-for-node bitcoind "cli getbalance | jq -e \". > 50\""
echo -n "- Waiting for Alice startup..."
wait-for-node Alice "cli getinfo"
echo -n "- Waiting for Bob startup..."
wait-for-node Bob "cli getinfo"
echo -n "- Waiting for Chan startup..."
wait-for-node Chan "cli getinfo"
echo -n "- Waiting for Dina startup..."
wait-for-node Dina "cli getinfo"
echo "All nodes have started"

echo "======================================================"
echo
echo "Getting node IDs"
alice_address=$(run-in-node Alice "cli getinfo | jq -r .identity_pubkey")
bob_address=$(run-in-node Bob "cli getinfo | jq -r .id")
chan_address=$(run-in-node Chan "cli getinfo | jq -r .nodeId")
dina_address=$(run-in-node Dina "cli getinfo | jq -r .identity_pubkey")

# Show node IDs
echo "- Alice: ${alice_address}"
echo "- Bob: ${bob_address}"
echo "- Chan: ${chan_address}"
echo "- Dina: ${dina_address}"

echo "======================================================"
echo
echo "Waiting for Lightning nodes to sync the blockchain"
echo -n "- Waiting for Alice chain sync..."
wait-for-node Alice "cli getinfo | jq -e \".synced_to_chain == true\""
echo -n "- Waiting for Bob chain sync..."
wait-for-node Bob "cli getinfo | jq -e \".blockheight > 100\""
echo -n "- Waiting for Chan chain sync..."
wait-for-node Chan "cli getinfo | jq -e \".blockHeight > 100\""
echo -n "- Waiting for Dina chain sync..."
wait-for-node Dina "cli getinfo | jq -e \".synced_to_chain == true\""
echo "All nodes synced to chain"

echo "======================================================"
echo
echo "Setting up connections and channels"
echo "- Alice to Bob"

# Connect only if not already connected
run-in-node Alice "cli listpeers | jq -e '.peers[] | select(.pub_key == \"${bob_address}\")' > /dev/null" \
&& {
	echo "- Alice already connected to Bob"
} || {
	echo "- Open connection from Alice's node to Bob's node"
	wait-for-node Alice "cli connect ${bob_address}@Bob"
}

# Create channel only if not already created
run-in-node Alice "cli listchannels | jq -e '.channels[] | select(.remote_pubkey == \"${bob_address}\")' > /dev/null" \
&& {
	echo "- Alice->Bob channel already exists"
} || {
	echo "- Create payment channel Alice->Bob"
	wait-for-node Alice "cli openchannel ${bob_address} 1000000"
}
echo "- Bob to Chan"
run-in-node Bob "cli listpeers | jq -e '.peers[] | select(.id == \"${chan_address}\")' > /dev/null" \
&& {
	echo "- Bob already connected to Chan"
} || {
	echo "- Open connection from Bob's node to Chan's node"
	wait-for-node Bob "cli connect ${chan_address}@Chan"
}
run-in-node Bob "cli listchannels | jq -e '.channels[] | select(.destination == \"${chan_address}\")' > /dev/null" \
&& {
	echo "- Bob->Chan channel already exists"
} || {
	echo "- Create payment channel Bob->Chan"
	wait-for-node Bob "cli fundchannel ${chan_address} 1000000"
}
echo "- Chan to Dina"
run-in-node Chan "cli peers | jq -e '.[] | select(.nodeId == \"${dina_address}\" and .state == \"CONNECTED\")' > /dev/null" \
&& {
	echo "- Chan already connected to Dina"
} || {
	echo "- Open connection from Chan's node to Dina's node"
	wait-for-node Chan "cli connect --uri=${dina_address}@Dina"
}
run-in-node Chan "cli channels | jq -e '.[] | select(.nodeId == \"${dina_address}\" and .state == \"NORMAL\")' > /dev/null" \
&& {
	echo "- Chan->Dina channel already exists"
} || {
	echo "- Create payment channel Chan->Dina"
	wait-for-node Chan "cli open --nodeId=${dina_address} --fundingSatoshis=1000000"
}
echo "All channels created"
echo "======================================================"
echo
echo "Waiting for channels to be confirmed on the blockchain"
echo -n "- Waiting for Alice channel confirmation..."
wait-for-node Alice "cli listchannels | jq -e '.channels[] | select(.remote_pubkey == \"${bob_address}\" and .active == true)'"
echo "- Alice->Bob connected"
echo -n "- Waiting for Bob channel confirmation..."
wait-for-node Bob "cli listchannels | jq -e '.channels[] | select(.destination == \"${chan_address}\" and .active == true)'"
echo "- Bob->Chan connected"
echo -n "- Waiting for Chan channel confirmation..."
wait-for-node Chan "cli channels | jq -e '.[] | select(.nodeId == \"${dina_address}\" and .state == \"NORMAL\")' > /dev/null"
echo "- Chan->Dina connected"
echo "All channels confirmed"


echo "======================================================"
echo -n "Check Alice's route to Dina: "
run-in-node Alice "cli queryroutes --dest \"${dina_address}\" --amt 10000" > /dev/null 2>&1 \
&& {
	echo "Alice has a route to Dina"
} || {
	echo "Alice doesn't yet have a route to Dina"
	echo "Waiting for Alice graph sync. This may take a while..."
	wait-for-node Alice "cli describegraph | jq -e '.edges | select(length >= 1)'"
	echo "- Alice knows about 1 channel"
	wait-for-node Alice "cli describegraph | jq -e '.edges | select(length >= 2)'"
	echo "- Alice knows about 2 channels"
	wait-for-node Alice "cli describegraph | jq -e '.edges | select(length == 3)'"
	echo "- Alice knows about all 3 channels!"
}

echo "======================================================"
echo
echo "Get 10k sats invoice from Dina"
dina_invoice=$(run-in-node Dina "cli addinvoice 10000 | jq -r .payment_request")
echo "- Dina invoice: "
echo ${dina_invoice}

echo "======================================================"
echo
echo "Attempting payment from Alice to Dina"
run-in-node Alice "cli payinvoice --json --force ${dina_invoice} | jq -e '.failure_reason == \"FAILURE_REASON_NONE\"'" > /dev/null \
&& {
	echo "Successful payment!"
} || {
	echo "Payment failed"
}
@ -1,37 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
echo Getting node IDs
|
||||
alice_address=$(docker-compose exec -T Alice bash -c "lncli -n regtest getinfo | jq -r .identity_pubkey")
|
||||
bob_address=$(docker-compose exec -T Bob bash -c "lightning-cli getinfo | jq -r .id")
|
||||
chan_address=$(docker-compose exec -T Chan bash -c "eclair-cli -s -j -p eclair getinfo| jq -r .nodeId")
|
||||
dina_address=$(docker-compose exec -T Dina bash -c "lncli -n regtest getinfo | jq -r .identity_pubkey")
|
||||
|
||||
# Let's tell everyone what we found!
|
||||
echo Alice: ${alice_address}
|
||||
echo Bob: ${bob_address}
|
||||
echo Chan: ${chan_address}
|
||||
echo Dina: ${dina_address}
|
||||
|
||||
echo Setting up channels...
|
||||
echo Alice to Bob
|
||||
docker-compose exec -T Alice lncli -n regtest connect ${bob_address}@Bob
|
||||
docker-compose exec -T Alice lncli -n regtest openchannel ${bob_address} 1000000
|
||||
|
||||
echo Bob to Chan
|
||||
docker-compose exec -T Bob lightning-cli connect ${chan_address}@Chan
|
||||
docker-compose exec -T Bob lightning-cli fundchannel ${chan_address} 1000000
|
||||
|
||||
echo Chan to Dina
|
||||
docker-compose exec -T Chan eclair-cli -p eclair connect --uri=${dina_address}@Dina
|
||||
docker-compose exec -T Chan eclair-cli -p eclair open --nodeId=${dina_address} --fundingSatoshis=1000000
|
||||
|
||||
echo Get 10k sats invoice from Dina
|
||||
dina_invoice=$(docker-compose exec -T Dina bash -c "lncli -n regtest addinvoice 10000 | jq -r .payment_request")
|
||||
|
||||
echo Dina invoice ${dina_invoice}
|
||||
|
||||
echo Wait for channel establishment - 60 seconds for 6 blocks
|
||||
sleep 60
|
||||
|
||||
echo Alice pays Dina 10k sats, routed around the network
|
||||
docker-compose exec -T Alice lncli -n regtest payinvoice --json --inflight_updates -f ${dina_invoice}
|
argon2-cffi
async-generator
attrs
backcall
bleach
cffi
cycler
debugpy
decorator
defusedxml
entrypoints
graphviz
ipykernel
ipython
ipython-genutils
ipywidgets
jedi
Jinja2
jsonschema
jupyter
jupyter-client
jupyter-console
jupyter-core
jupyterlab-pygments
jupyterlab-widgets
kiwisolver
MarkupSafe
matplotlib
matplotlib-inline
mistune
nbclient
nbconvert
nbformat
nest-asyncio
networkx
notebook>=6.4.1
numpy
packaging
pandocfilters
parso
pexpect
pickleshare
Pillow
prometheus-client
prompt-toolkit
ptyprocess
pycparser
Pygments
pyparsing
pyrsistent
python-dateutil
pyzmq
qtconsole
QtPy
scipy
Send2Trash
six
terminado
testpath
tornado
traitlets
ujson
wcwidth
webencodings
widgetsnbextension
[[part_1]]
[part]
== Understanding The Lightning Network

[partintro]
--
An overview of the Lightning Network suitable for anyone interested in understanding the basic concepts and use of the Lightning Network.
--
[[part_2]]
[part]
== The Lightning Network in detail

[partintro]
--
A detailed explanation of all the components of the Lightning Network and how they work. This part is highly technical and expects the reader to have some programming and computer science experience.
--
from __future__ import print_function
import glob
import re

markup_files = glob.glob('*.asciidoc')
# Raw strings avoid invalid escape sequences; the reference pattern no longer
# requires a trailing character, so references at end-of-line also match.
anchor_re = re.compile(r"\[\[(.*)\]\]")
ref_re = re.compile(r".*<<([^>]*)>>")

refs = []
anchors = []
dup_anchors = []

for markup_file in markup_files:
    with open(markup_file, 'r') as markup_f:
        markup_contents = markup_f.read()
    for line in markup_contents.splitlines():
        ref_match = ref_re.match(line)
        if ref_match:
            if ref_match.group(1) not in refs:
                refs.append(ref_match.group(1))
        anchor_match = anchor_re.match(line)
        if anchor_match:
            if anchor_match.group(1) not in anchors:
                anchors.append(anchor_match.group(1))
            else:
                dup_anchors.append(anchor_match.group(1))

print("\nAnchors: ", len(anchors))
print("\nDuplicated Anchors: ", len(dup_anchors))
print(dup_anchors)
print("\nReferences: ", len(refs))
print(refs)
broken_refs = list(set(refs) - set(anchors))
print("\nBroken references: ", len(broken_refs), broken_refs)
missing_refs = list(set(anchors) - set(refs))
print("\nUn-referenced Anchors: ", len(missing_refs), missing_refs)