This release marks a new major release of
lnd that includes several important bug fixes, numerous performance optimizations, static channel backups (SCB), reduced bandwidth usage for larger nodes, an overhaul of the internals of the autopilot system, and a new batch sweeping sub-system. Due to the nature of some of the bug fixes which were made during the implementation of the new SCB feature, users are highly encouraged to upgrade to this new version.
This version includes a single migration to modify the message store format, used to send messages to remote peers reliably when attempting to construct channel proofs. The migration should appear as below:
2019-04-03 22:35:44.596 [INF] LTND: Version: 0.6.0-beta commit=v0.6-beta-rc4, build=production, logging=default
2019-04-03 22:35:44.596 [INF] LTND: Active chain: Bitcoin (network=mainnet)
2019-04-03 22:35:44.597 [INF] CHDB: Checking for schema update: latest_version=8, db_version=7
2019-04-03 22:35:44.597 [INF] CHDB: Performing database schema migration
2019-04-03 22:35:44.597 [INF] CHDB: Applying migration #8
2019-04-03 22:35:44.597 [INF] CHDB: Migrating to the gossip message store new key format
2019-04-03 22:35:44.597 [INF] CHDB: Migration to the gossip message store new key format complete!
Verifying the Release
In order to verify the release, you'll need to have
gpg2 installed on your system. Once you've obtained a copy (and hopefully verified that as well), you'll first need to import the keys that have signed this release if you haven't done so already:
curl https://keybase.io/roasbeef/pgp_keys.asc | gpg --import
Once you have his PGP key, you can verify the release (assuming
manifest-v0.6-beta-rc4.txt.sig is in the current directory) with:
gpg --verify manifest-v0.6-beta-rc4.txt.sig
You should see the following if the verification was successful:
gpg: assuming signed data in 'manifest-v0.6-beta-rc4.txt'
gpg: Signature made Thu Apr 11 16:36:48 2019 PDT
gpg: using RSA key F8037E70C12C7A263C032508CE58F7F8E20FD9A2
gpg: Good signature from "Olaoluwa Osuntokun <email@example.com>" [ultimate]
That will verify the signature on the main manifest page which ensures integrity and authenticity of the binaries you've downloaded locally. Next, depending on your operating system you should then re-calculate the
sha256 sum of the binary, and compare that with the following hashes (which are included in the manifest file):
One can use the
shasum -a 256 <file name here> tool in order to re-compute the
sha256 hash of the target binary for your operating system. The produced hash should be compared with the hashes listed above, and they should match exactly.
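The manifest check can also be scripted. Below is a minimal Python sketch (the chunked-read helper and the commented file/hash names are illustrative, not part of the release tooling):

```python
import hashlib

def sha256_of(path):
    """Compute the hex-encoded SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the hash listed in the manifest for your platform, e.g.:
# expected = "<hash from manifest-v0.6-beta-rc4.txt>"
# assert sha256_of("lnd-linux-amd64-v0.6-beta-rc4.tar.gz") == expected
```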
Finally, you can also verify the tag itself with the following command:
git verify-tag v0.6-beta-rc4
Building the Contained Release
With this new version of
lnd, we've modified our release process to ensure the bundled release is now fully self contained. As a result, with only the attached payload with this release, users will be able to rebuild the target release themselves without having to fetch any of the dependencies. Note that at this stage, binaries aren't yet fully reproducible (even with
go modules). This is due to the fact that by default, Go will include the full directory path where the binary was built in the binary itself. As a result, unless your file system exactly mirrors the machine used to build the binary, you'll get a different binary, as it includes artifacts from your local file system. This will be fixed in
go1.13, and before then we may modify our release system to do this automatically.
In order to re-build from scratch, assuming that
lnd-source-v0.6-beta-rc4.tar.gz and vendor.tar.gz are in the current directory:
tar -xvzf vendor.tar.gz
tar -xvzf lnd-source-v0.6-beta-rc4.tar.gz
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.6-beta"
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.6-beta" ./cmd/lncli
The -mod=vendor flag tells the
go build command that it doesn't need to fetch the dependencies; instead, they're all enclosed in the local vendor directory.
Additionally, it's now possible to use the enclosed
release.sh script to bundle a release for a specific system like so:
LNDBUILDSYS="linux-arm64 darwin-amd64" ./release.sh
The release.sh script will now also properly include the commit hash once again, as a regression caused by a change to the internal build system has been fixed.
⚡️⚡️⚡️ OK, now to the rest of the release notes! ⚡️⚡️⚡️
Protocol and Cross-Implementation Compatibility Fixes
We’ll now properly validate our own announcement signatures for
NodeAnnouncements before writing them to disk and propagating them to other peers.
A bug has been fixed causing us to send a
FinalFailExpiryTooSoon error rather than a
FinalFailIncorrectCltvExpiry error when the last HTLC of a route has an expiration height that is deemed too soon by the final destination of the HTLC.
Aliases received on the wire are now properly validated. Additionally, we’ll no longer disconnect peers that send us invalid aliases.
A bug has been fixed that would at times cause commitments to desynchronize in the face of multiple concurrent updates that included an
UpdateFee message. The fix generalizes the existing commitment state machine logic to treat an
UpdateFee message as we would any other commitment update.
We’ll now reject funding requests that require an unreasonable confirmation depth before the channel can be used.
We’ll now space out our broadcast batches more in order to save bandwidth and consolidate more updates behind a single batch.
We’ll now require all peers we connect to, to have the DLP (Data Loss Protection) bit set. This is required for the new SCB (Static Channel Backups) to function properly.
For private channels, we’ll now always resend the latest
ChannelUpdate to the remote peer upon reconnection. This update is needed to properly construct invoices with hop hints, which are required for receiving over a non-advertised channel.
Reject and Channel Caches
A number of internal caches have been added to reduce idle memory usage with a large number of peers, and also to reduce idle CPU usage due to stale channel updates.
In this release,
lnd now maintains a small reject cache for detecting stale ChannelAnnouncement and ChannelUpdate messages from its peers. Prior versions of
lnd would perform a database lookup for each incoming message, which produced a huge amount of contention under load as the channel graph exploded.
The reject cache maintains just 17 bytes per edge, and easily holds today's graph in memory. Users on low power devices or with a large number of peers will benefit immensely from
lnd's improved ability to filter gossip traffic for the latest information and clear large backlogs received from their peers.
The number of items in the cache is configurable using the
--caches.reject-cache-size flag. The default value of 50,000 comfortably fits all known channels in the reject cache, requiring 1.2MB.
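To illustrate the idea (lnd's actual implementation is in Go and differs in detail), a bounded reject cache can be sketched as an LRU map from channel ID to the last update timestamp seen, evicting the oldest entry once the configured size is exceeded:

```python
from collections import OrderedDict

class RejectCache:
    """Toy LRU cache for filtering stale channel updates by channel ID."""
    def __init__(self, max_size=50000):
        self.max_size = max_size
        self.entries = OrderedDict()  # chan_id -> last update timestamp seen

    def add(self, chan_id, timestamp):
        self.entries[chan_id] = timestamp
        self.entries.move_to_end(chan_id)
        if len(self.entries) > self.max_size:
            self.entries.popitem(last=False)  # evict least-recently-used

    def is_stale(self, chan_id, timestamp):
        # An incoming update is stale if we've already seen one as new or newer.
        seen = self.entries.get(chan_id)
        return seen is not None and timestamp <= seen
```

With a lookup hitting this in-memory structure first, the database is only consulted for genuinely new gossip.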
Additionally, we now maintain a separate channel cache, which contains in-memory copies of ChannelAnnouncements, ChannelUpdates, and NodeAnnouncements for a given channel. This cache is used to satisfy queries in hot paths of our peers’ gossip queries, allowing us to serve more responses from memory and perform fewer database reads and allocations during deserialization.
The size of the channel cache is also configurable via the
--caches.chan-cache-size flag. The default value of 20,000 stores about half of all known channels in memory and constitutes about 40MB.
Graceful Shutdown via SIGTERM
It was discovered that prior versions of
lnd didn’t attempt to catch the
SIGTERM signal to execute a graceful shutdown. When possible, users should prefer to shut down
lnd gracefully via either
SIGTERM or SIGINT to ensure the database is closed and any outstanding transactions committed, in order to avoid database corruption. Commonly used process management systems such as Docker or systemd typically send
SIGTERM, then wait for a period of time to allow the process to respond before forcefully killing the process. Before this release,
lnd would always be forcefully killed by these platforms, rendering it unable to properly execute a graceful shutdown.
This new release of
lnd will now properly catch these signals to ensure that we’re more likely to be able to execute a graceful shutdown. We believe that many reports of partial database corruption, typically reported by those running on Raspberry Pis, should be addressed by this change.
Static Channel Backups
In this release, we’ve implemented a new safe scheme for static channel backups (SCB's) for
lnd. We say safe, as care has been taken to ensure that there are no foot guns in this method of backing up channels, vs doing things like
rsyncing or copying the
channel.db file periodically. Those methods can be dangerous as one never knows if they have the latest state of a channel or not. Instead, we aim to provide a simple, safe method to allow users to recover the settled funds in their channels in the case of partial or complete data loss. The backups themselves are encrypted using a key derived from the user's seed; this way, we protect the privacy of the user's channels in the backup state, and ensure that a random node can't attempt to import another user's channels. With their seed and the latest backup file, the user will be able to recover both their on-chain funds, and also funds that are fully settled within their channels. By "fully settled" we mean funds that are in the base commitment outputs, and not HTLCs. We can only restore these funds as, right after the channel is created, we have all the data required to make a backup.
We call these “static” backups, as they only need to be obtained once for a given channel and are valid until the channel has been closed. One can view this backup as a final method of recovery in the case of total data loss. It’s important to note that during recovery the channels must be closed in order to fully recover the funds. This setup ensures that there’s no way to incorrectly use an SCB that would result in broadcast of a revoked commitment state. Recovery documentation for both on-chain and off-chain coins can be found here.
Backup + Recovery Methods
The SCB feature exposes multiple safe ways to back up and recover a channel. We expect only one of them to be used primarily by unsophisticated end users, but have provided other mechanisms for more advanced users and businesses that already script
lnd via the gRPC system.
First, the easiest method for backup+recovery.
lnd now will maintain a
channels.backup file in the same location that we store all the other files. Users will at any time be able to safely copy and back up this file. Each time a channel is opened or closed,
lnd will update this file with the latest channel state. Users can use scripts to detect changes to the file, and upload them to their backup location. Something like
fsnotify can notify a script each time the file changes so it can be backed up once again. The file is encrypted using an AEAD scheme, so it can safely be stored plainly in cloud storage, your SD card, etc. The file uses a special format and can be imported via any of the recovery methods described below.
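As a rough sketch of such a backup script (this one uses simple mtime polling rather than fsnotify, and all paths are placeholders):

```python
import os
import shutil
import time

def watch_and_copy(src, dst, poll_secs=1.0, once=False):
    """Copy src to dst whenever its modification time changes.

    In a real deployment, dst could be a mounted cloud-storage path,
    or the copy step could be replaced by an upload."""
    last_mtime = None
    while True:
        try:
            mtime = os.stat(src).st_mtime
        except FileNotFoundError:
            mtime = None  # file not created yet; keep polling
        if mtime is not None and mtime != last_mtime:
            shutil.copy2(src, dst)  # safe: the file is already encrypted
            last_mtime = mtime
        if once:
            return
        time.sleep(poll_secs)

# Example (paths are placeholders):
# watch_and_copy("~/.lnd/data/chain/bitcoin/mainnet/channels.backup",
#                "/mnt/backup/channels.backup")
```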
The second mechanism is via the new
SubscribeChanBackups streaming gRPC method. Each time a channel is opened or closed, you'll get a new notification with all the
chanbackup.Single files (described below), and a single
chanbackup.Multi that contains all the information for all channels.
Finally, users are able to request a backup of a single channel, or all the channels via the cli and RPC methods. Here's an example of a few ways users can obtain backups:
⛰ lncli --network=simnet exportchanbackup --chan_point=29be6d259dc71ebdf0a3a0e83b240eda78f9023d8aeaae13c89250c7e59467d5:0
⛰ lncli --network=simnet exportchanbackup --all
⛰ lncli --network=simnet exportchanbackup --all --output_file=channels.backup
⛰ ll channels.backup
-rw-r--r-- 1 roasbeef staff 381B Dec 9 18:16 channels.backup
SCBs can be viewed as a last-ditch method for recovering funds from channels after total data loss. In future releases, we plan to implement methods that require more sophistication with respect to operational architecture, yet allow for dynamic backups. Even with these dynamic backups in place, SCBs will still serve as a fallback method if a dynamic backup is known to be out of date, or in a partially consistent state.
Future protocol changes will make the SCB recovery method more robust, as it will no longer rely on the remote peer to send the normal channel reestablishment handshake upon reconnection. Instead, given the SCB,
lnd will be able to find the closing output directly on the chain after a force close by the remote party.
For further details w.r.t the lower level implementation of SCBs as well as the new RPC calls, users can check out the new
recovery.md file, which goes over methods to recover both on-chain and off-chain funds from lnd.
New Channel Status Manager
Within the protocol, nodes can mark a channel as enabled or disabled. A disabled channel signals to other nodes that the channel isn’t to be used for routing for whatever reason. This allows clients to avoid these channels during path finding, and also lets routing nodes signal any faults in a channel to other nodes, allowing them to ignore them and possibly remove them from their graph view.
lnd has a system to automatically detect when a channel has been inactive for too long, and disable it, signalling to other peers that they can ignore it when routing. The system will also eventually re-enable a channel if it has been stable for long enough.
The prior version of this sub-system had a number of flaws which would cause channels to be excessively enabled/disabled, causing
ChannelUpdate spam in the network. In this release, this system has been revamped, resulting in a much more conservative, stable channel status manager. We’ll now only disable channels programmatically, and channels will only be re-enabled once the peer is stable for a long enough period of time. This period of time is now configurable.
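The core policy can be sketched as follows (a toy model with illustrative timeouts, not lnd's actual implementation or default values):

```python
def next_status(enabled, inactive_secs, stable_secs,
                disable_after=1200.0, enable_after=1140.0):
    """Toy channel status manager: disable only after prolonged inactivity,
    and re-enable only once the peer has been stable long enough.
    Timeout defaults here are illustrative placeholders."""
    if enabled and inactive_secs >= disable_after:
        return False  # broadcast a disabling ChannelUpdate
    if not enabled and stable_secs >= enable_after:
        return True   # peer stable long enough: re-enable the channel
    return enabled    # otherwise, leave the status untouched (no spam)
```

The key property is hysteresis: brief flaps change nothing, so far fewer ChannelUpdates hit the network.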
Server and P2P Improvements
The max reconnection backoff interval is now configurable. We cap this value by default to ensure we don’t wait an eternity before attempting to reconnect to a peer. However, on laptops and mobile platforms, users may want this value to be much lower to ensure they maintain connectivity in the face of roaming or wi-fi drops. The new field is:
--maxbackoff=. A new complementary
--minbackoff field has also been added.
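The backoff policy amounts to capped exponential growth, roughly like this sketch (the default values shown are illustrative, not lnd's):

```python
def next_backoff(attempt, min_backoff=1.0, max_backoff=3600.0):
    """Exponential reconnect backoff: double per failed attempt,
    clamped between the configured min and max (values illustrative)."""
    return max(min_backoff, min(min_backoff * (2 ** attempt), max_backoff))
```

Lowering the max cap keeps reconnect attempts frequent on flaky links; raising the min avoids hammering peers from stable servers.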
We’ll now attempt to retry when faced with a write timeout rather than disconnect the peer immediately. This serves to generally make peer connections more stable to/from lnd nodes.
Users operating larger
lnd nodes may find that at times restarts can be rather load heavy due to the rapid burst of potentially hundreds of new p2p connections. In this new version of
lnd, we’ve added a new flag (
--stagger-initial-reconnect) to space out these connection attempts by several seconds, rather than trying to establish all the connections at once on start up.
Outgoing Message Queue Prioritization
[A new distinct queue of gossip messages has been added to the outgoing write queue system within
lnd](https://github.com/lightningnetwork/lnd/pull/2690). We’ll now maintain two distinct queues: one for gossip messages, and one for everything else. Certain messages sent upon reconnection are time sensitive, such as the Channel Reestablishment message, which if delayed can cause a channel to shift from active to inactive. This queue optimization also means that making new channels, or updating existing channels, will no longer be blocked by any outgoing gossip traffic, improving the quality of service.
Batched Pre-Image Writing in the HTLCSwitch
This new release will now batch writes for witnesses discovered in HTLC forwarding. At the same time, we correct a nuanced consistency issue related to a lack of synchronization with the channel state machine. Naively, forcing the individual preimage writes to be synchronized with the link incurs a heavy performance penalty (about 80% in profiling). Batching these allows us to minimize the number of db transactions required to write the preimages, allowing us to reinsert the batched write into the link's critical path and resolve the possible inconsistency. In fact, the benchmarks actually showed a slight performance improvement, even with the extra write in the critical path.
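The batching idea can be sketched as follows (a toy model with a stand-in database, not lnd's actual Go implementation):

```python
class CountingDB:
    """Stand-in database that counts how many transactions it performs."""
    def __init__(self):
        self.txns = 0
        self.stored = []

    def write_batch(self, items):
        self.txns += 1          # one db transaction for the whole batch
        self.stored.extend(items)

class PreimageBatcher:
    """Toy batcher: buffer discovered preimages, then flush them all
    in a single db transaction instead of one transaction each."""
    def __init__(self, db):
        self.db = db
        self.pending = []

    def add(self, preimage):
        self.pending.append(preimage)

    def flush(self):
        if not self.pending:
            return 0
        self.db.write_batch(self.pending)
        n = len(self.pending)
        self.pending = []
        return n
```

Batching keeps the write on the link's critical path (preserving consistency) while paying the transaction cost only once per batch.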
Unified Global SigPool
lnd uses a pool of goroutines that are tasked with signing and validating commitment and HTLC signatures for new channel updates. This pool allows us to process these commitment updates in parallel, rather than in a serial manner, which would reduce payment throughput. [Rather than using a single
SigPool per channel, we now use a single global
SigPool](https://github.com/lightningnetwork/lnd/pull/2329). With this change, we ensure that as the number of channels grows, the number of goroutines idling in the sigPool stays constant. It's the case that currently in the daemon, most channels are likely inactive, with only a handful actually consistently carrying out channel updates. As a result, this change should reduce the amount of idle CPU usage, as we have fewer active goroutines in select loops.
Read and Write Buffer Pools
In this release, we implement a write buffer pool for LN peers. Previously, each peer object would embed a 65KB byte array, which is used to serialize messages before writing them to the wire. As a result, every new peer caused a large memory allocation, which places unnecessary burden on the garbage collector when faced with short-lived or flapping peers. We’ll now use a buffer pool that dynamically grows and shrinks based on the demand for write buffers corresponding to active peers. This greatly helps when there is a high level of churn in peer activity, or even a single flapping peer.
Similarly, whenever a new peer would connect, we would allocate a 65KB+16 byte array to use as a read buffer for each connection object. The read buffer stores the ciphertext and MAC read from the wire, and is used to decrypt and then decode messages from the peer. Because the read buffer is implemented at the connection level, as opposed to the peer level like write buffers, simply opening a TCP connection would cause this allocation. Therefore peers that send no messages, or do not complete the handshake, will add to this memory overhead even if they are released promptly. To avoid this, we now use a similar read buffer pool to tend towards a steady working set of read buffers, which drastically reduces memory usage.
Finally, we introduce a set of read/write worker pools, which are responsible for scheduling access to the read/write buffers in the underlying buffer pools. With the read and write pools, we modify the memory requirements to be at most linear in the number of specified workers. More importantly, these changes completely decouple read and write buffer allocations from the peer/connection lifecycle, allowing
lnd to tolerate flapping peers with minimal overhead.
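The buffer pool concept can be sketched like this (a simplified model; lnd's actual Go implementation also shrinks the pool over time and schedules access through worker pools):

```python
import queue

class BufferPool:
    """Toy free-list of reusable byte buffers, so each peer connection
    doesn't permanently hold its own large scratch buffer."""
    def __init__(self, buf_size=65 * 1024 + 16):
        self.buf_size = buf_size
        self.free = queue.SimpleQueue()

    def take(self):
        try:
            return self.free.get_nowait()  # reuse a returned buffer
        except queue.Empty:
            return bytearray(self.buf_size)  # pool empty: allocate fresh

    def give(self, buf):
        self.free.put(buf)  # hand the buffer back for reuse
```

A flapping peer now cycles the same buffer in and out of the pool instead of forcing a fresh 65KB allocation each time it reconnects.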
Nodes that have a large number of peers will see the most drastic benefit. In testing, we were able to create stable connections (w/o gossip queries) to over 900 unique nodes, all while keeping
lnd's total memory allocations due to read/write buffers under 15 MB. This configuration could have easily connected to more nodes, though that was all that was reachable via the bootstrapper.
This same test would have used between 90-100MB on master, and would continue to grow as more connections were established or peers flapped if the garbage collector could not keep up. In contrast, the memory used with read/write pools remains constant even as more peers are established.
New Sweeper Sub-System
A new sweeper subsystem has been introduced. The sweeper is responsible for sweeping mature on-chain outputs back to the wallet. It does so by combining sets of outputs into a single transaction per block. It takes care not to sweep outputs that have a negative yield at the current fee estimate; those will be left until the fee estimate has decreased enough. Some outputs may still be contested and possibly swept by the remote party. The sweeper is aware of this and properly reports the outcome of the sweep for an output to other subsystems.
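The negative-yield filter amounts to comparing each output's value against the fee its input would add to the sweep transaction. A toy sketch (the input size and fee figures are illustrative, not lnd's actual estimation):

```python
def sweepable(output_values_sats, fee_rate_sat_per_byte, input_size_bytes=148):
    """Keep only outputs whose value exceeds the fee their input would add
    at the current fee estimate; the rest wait for fees to fall."""
    kept = []
    for value_sats in output_values_sats:
        input_fee = fee_rate_sat_per_byte * input_size_bytes
        if value_sats > input_fee:  # positive yield at this fee rate
            kept.append(value_sats)
    return kept
```

Everything that passes the filter in a given block is then combined into one sweep transaction.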
The new Sweeper sub-system is the start of a generalized transaction batching engine within
lnd. As is today, it will batch all sweeps (HTLC timeouts, commitment sweeps, CSV sweeps) across
lnd into a single transaction per block. In the future, the sweeper will be generalized in order to implement fee bumping techniques like RBF and CPFP in a single logical unit. Additionally, the existence of such a batching engine will allow us to batch all transactions daemon-wide into a single transaction, which will allow us to implement block-space-saving features such as: opening multiple channels in a single transaction, combining cross channel splice in/outs, or closing out one channel in order to open a new channel or fulfill a requested payment.
Over time the sweeper will also grow to obsolete the existing
UtxoNursery, as sweep requests will become more distributed (an HTLC asks to be swept rather than the nursery sweeping when the time is right).
Graph Sync Improvements
With the recent rapid growth of the network, it almost became unbearable for nodes to sync their routing table with their peers due to the huge number of updates/channels being announced. We’ve made significant improvements towards addressing this issue with the introduction of the
SyncManager. Nodes will now only receive new graph updates from 3 peers by default. This number has been exposed as a CLI flag,
--numgraphsyncpeers, and can be tuned for light clients and routing nodes for bandwidth savings. In testing, we’ve seen over a 95% bandwidth reduction as a result of these changes.
This version also reduces the batch size of channels requested via
QueryShortChanIDs from 8000 to 500, leading to more stability in large or initial syncs. The previous batch size was found to invite disconnections from the remote peer once the receiver had received the first few thousand messages. The reduced batch size prevents us from overflowing our own internal queues for gossip messages, and ensures the remote peer doesn’t interpret this as a jammed connection.
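The batching change is conceptually simple: split the backlog of short channel IDs into fixed-size query batches. A sketch:

```python
def chunk_ids(short_chan_ids, batch_size=500):
    """Split a list of short channel IDs into fixed-size query batches."""
    return [short_chan_ids[i:i + batch_size]
            for i in range(0, len(short_chan_ids), batch_size)]
```

A backlog that would previously go out as one 8000-ID request now goes out as 16 batches of 500, pacing the reply stream.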
Goodbye Zombie Channels
Within the last couple months, we started to experience a large number of zombie channels in the network being gossiped between nodes. A zombie channel is a channel that is still open, but hasn’t been updated for 2 weeks. This issue was also present on testnet a few years back, so we’ve finally addressed the issue for
good. Nodes will now maintain an index of zombie channels which they can query to determine whether they should process/forward announcements for an arbitrary channel.
Using this index, we will also refrain from requesting channels we know to be zombies from peers that think otherwise. At the time of writing, there are roughly 3.3k zombie channels on mainnet. This optimization saves us from requesting 10k individual messages, amounting to roughly 3MB when attempting historical syncs with peers presenting
lnd with zombie channels.
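Conceptually, the zombie index is just a map from channel ID to the last update timestamp, consulted before processing or requesting a channel. A toy sketch (not lnd's actual Go implementation):

```python
TWO_WEEKS_SECS = 14 * 24 * 60 * 60

class ZombieIndex:
    """Toy index of channels whose last update is older than two weeks."""
    def __init__(self):
        self.last_update = {}  # chan_id -> unix time of last ChannelUpdate

    def record_update(self, chan_id, timestamp):
        prev = self.last_update.get(chan_id, 0)
        self.last_update[chan_id] = max(prev, timestamp)

    def is_zombie(self, chan_id, now):
        # Unknown channels aren't zombies; known-but-stale ones are.
        ts = self.last_update.get(chan_id)
        return ts is not None and now - ts > TWO_WEEKS_SECS
```

During a historical sync, IDs flagged by such an index are simply skipped rather than requested again.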
On-Chain Commitment and HTLC Handling
A bug has been fixed that would previously cause an HTLC which was settled on-chain to not properly be marked as settled.
An off-by-one error has been fixed in the contract court when handling a remote commitment close due to a DLP execution instance. This ensures that funds will now properly be swept in the case of a force close due to DLP wherein the remote party is one state ahead of ours. Users that ran into this issue in the wild should find that the dispatch logic is re-executed, resulting in on-chain funds properly being swept back into the wallet.
Bitcoind Spend Hint Bug Fix
Fixes a bug that would cause bitcoind backends to perform historical rescans on successive restarts, even if the first rescan completed and did not find a spending transaction. Affected nodes will have to complete one more rescan after upgrading before symptoms will disappear. In more severe cases, this will save tens of thousands of getblocks calls to the backend on each restart.
Autopilot Architecture Revamp
In this release, as prep for more advanced autopilot heuristics in a future release, we’ve completely revamped the way the system works. Before this release, the autopilot “agent” was directive based, meaning that when queried, it would simply say “connect to these nodes”. This directive-based suggestion was simple, yet limiting: it didn’t easily lend itself to combining multiple heuristics, and using only the directive model, there isn’t a clear way of comparing two distinct heuristics.
The new system instead implements a scoring-based agent. Rather than simply suggesting a set of nodes to connect to, the agent will now return a set of scores for a target node, or all peers within the network. This score is then incorporated into the main channel selection loop, adding a bit of jitter to ensure diversity. The scoring-based system really shines when you start to consider adding multiple heuristics that work in tandem (connectivity optimized, uptime optimized, reliability optimized, redundancy optimized, etc). With the new scoring system, it’s now possible to create a new heuristic which is actually a combination of several sub-heuristics. As an example, we’ve created a new
WeightedCombAttachment heuristic that outputs a linear combination of the scores of a set of registered heuristics.
This new scoring-based system will pave the road for more advanced autopilot heuristics which may make certain trade-offs in order to target specific use cases like: mobile/laptop oriented, net receiver (merchant, etc) optimized, or routing network robustness (min-cut and the like). As a bonus, the new system also makes it much easier to add a new heuristic, as the new interface has a single method:
NodeScores: given the graph, the target channel size, and the existing set of node channels, it should return a score for all non-filtered-out nodes.
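A linear combination of sub-heuristic scores, in the spirit of WeightedCombAttachment, can be sketched like so (a toy model; the real heuristics derive scores from graph data):

```python
def weighted_comb_scores(heuristics, weights, nodes):
    """Linear combination of per-node scores from several sub-heuristics.
    Each heuristic is a callable mapping a node to a score in [0, 1]."""
    scores = {}
    for node in nodes:
        total = sum(w * h(node) for h, w in zip(heuristics, weights))
        scores[node] = total / sum(weights)  # normalize back into [0, 1]
    return scores
```

Any mix of sub-heuristics (connectivity, uptime, reliability, ...) can then be tuned simply by adjusting the weight vector.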
Penalize small channels for autopilot scoring
With the re-architecting of autopilot to be scoring based, the default heuristic (prefattach) will now [decrease the score of nodes having a large number of small channels](https://github.com/lightningnetwork/lnd/pull/2797).
New Sweep All Coins Command
A new argument has been added to the
lncli sendcoins interface to allow users to sweep all coins from
lnd's wallet to a target on-chain address. An example run of the new command looks something like:
⛰ lncli --network=simnet sendcoins --sweepall --addr=sb1qsy8772pkfucsvmuyw82gexyd4u69pvve9w98v3
For those using the RPC interface, the new field to set is
send_all, a boolean which indicates that the amount field is left blank and all coins in the wallet should be sent to the target address.
RPC Interface Enhancements and Fixes
The default number of routes returned from
QueryRoutes is now 10 (previously, the default was unspecified).
QueryRoutes used with more than one route target has been deprecated and will be phased out in future versions of
lnd. In order to make up for the lost functionality, we’ve added a series of new arguments to the RPC call that allow users to ignore an arbitrary set of edges or vertexes. This new feature makes it easier to implement things like rebalancing externally, as you can now modify the source node for path finding, which can be used to find a path from node A to B, then back from B to A, that must travel in/out of a specific edge set.
SignMessage now properly exposes a REST endpoint.
The response to
SendPayment now also includes the payment hash in order to make it easy to associate a success or failure amongst several payments when using the streaming RPC.
A number of fixes to the request validation within the RPC server have been made. These changes make the server more defensive w.r.t what it accepts from clients.
Invoices created with the
--private option (include hop hints to private channels) are [now marked as such on the RPC interface](https://github.com/lightningnetwork/lnd/pull/2222).
A new RPC call
ListUnspent has been added to allow users to examine the current UTXO state of
lnd. Combined with the new Signer Sub-Server, users can use this to craft arbitrary transactions using
lnd’s backing keystore.
A bug has been fixed that would prevent channels that are unconfirmed, but waiting to be closed, from being returned via the
PendingChannels RPC.
A bug has been fixed that wouldn’t allow users to expose the REST interface on all network interfaces.
The settled field from the
Invoice proto is now deprecated. Instead, the new
state field is to be used as it allows us to reflect additional states of invoices (open, settled, cancelled, etc).
A number of fields in the
Invoice proto that have never been populated or used have been removed.
The name of the network is now exposed in the response of the GetInfo RPC call.
The UnsettledBalance field in the
PendingChannels RPC response is now properly set.
The ListChannels response now includes a field which denotes if the node is the initiator of a channel or not.
HTLCs which haven’t yet timed out are now properly shown in the output of PendingChannels.
A new SubscribeChannels RPC has been added to allow clients to be notified whenever a channel becomes inactive, active, or closed. This is useful for any type of application that would otherwise need to poll the channel state to stay up to date on which channels are active, inactive, or closed.
Two new address types have been added to the
NewAddress RPC call. These address types will return the same address until they have been used, then rotate to a new address. These new address types are useful for displaying a new address in UIs without running into “address inflation”.
The getnetworkinfo RPC now also returns the median channel size of the graph. The average degree output in
GetNetworkInfo has also been corrected.
A bug that would cause the autopilot agent to over-allocate funds if multiple channels were opened in parallel has been fixed.
We’ll now retrieve the
chan_id of an open channel from the channel database when using the
listchannels lncli command, rather than the graph. If a channel doesn’t have a new update within the last 2 weeks, then it’ll be pruned from the graph, which caused the
chan_id lookup to fail and result in a
0 value being displayed.
In this new release, we’ve begun the process of slowly evolving the RPC interface via the new Sub-Server system. The gRPC system allows multiple independent services to be registered to the same endpoint. Before this release,
lnd had one primary service:
Lightning. All current RPC calls are directed to this unified service. In the early days of
lnd, this structure emerged organically as many RPCs were added based on speculative future uses, or primarily for the purposes of testing new features added to the codebase. The result today is one mega interface, without any clear specialization or feature delineation.
Since the initial release of
lnd, we’ve received a considerable amount of valuable feedback w.r.t the RPC interface from developers, businesses, and node operators that use the interface daily. Some of this feedback may require us to extensively re-work core RPC calls like
SendPayment. Doing so directly in the main service would be disruptive as the calls may change overnight, or have their behavior drastically modified. We consider Sub-Servers to be a solution to this issue, as they allow us to recreate a small subset of the existing RPC interface in a concentrated, methodical manner. By being able to start from scratch, we gain more freedom w.r.t crafting the new interface. Additionally, by being forced to examine a smaller subset of the total functionality in a new Sub-Server, we’re able to consolidate existing code, decouple the RPC interface from the rest of
lnd, and also expose new functionality to the RPC interface that may only be tangentially related to Lightning.
As of this release, all sub-servers are guarded behind special build flags (
make install -tags=<buildtag>). The rationale here is that the sub-servers only expose new functionality, so existing users of
lnd that don’t yet have a need for these new features shouldn’t be burdened with them at runtime. Over time, as the interfaces crystallize, we’ll begin the process of deprecating certain older RPCs in order to promote the newer, more soundly designed Sub-Server RPCs. As a result, the current Sub-Server interfaces should be considered non-final and subject to change at any time. Due to their volatile nature, we don’t yet have documentation up at api.lightning.community. On the application development side of things, using a new Sub-Server is as simple as creating a new gRPC client service with the existing gRPC client connection.
Sub-Servers also make
lnd generally more useful as a one-stop shop for any sort of Bitcoin related programming or application as they expose some core interfaces that
lnd uses across the codebase to accomplish routine tasks. When compiled in, certain sub-servers will also augment
lncli with a set of new commands. The current set of Sub-Servers (and their respective build tags) include:
ChainNotifier Sub-Server is a utility toolkit responsible for querying the chain backend for notifications about the tip of the chain, transaction confirmations, and output spends. It also includes support for requesting these notifications for arbitrary output scripts.
WalletKit Sub-Server is a utility toolkit that contains methods which allow clients to perform common interactions with a wallet, such as getting a new address or sending a transaction. It also includes some supplementary actions such as fee estimation. Combined with the Signer Sub-Server, this lets users create arbitrary transactions (like CoinJoins!) using the existing set of private keys under the control of lnd.
Signer Sub-Server exposes the existing
input.Signer interface within
lnd as an accessible sub-server. The existence of this sub-server also opens up the possibility of having the actual signer and signing code exist outside of
lnd, taking the form of either a distinct process or a remote server with additional access control mechanisms.
Invoices Sub-Server exposes a number of new ways to interact with invoices that don’t exist in the invoice-related calls of the main service. A new type of invoice called a ‘hodl invoice’ has been added. Instead of immediately locking in and settling the htlc when the payment arrives, the htlc for a hodl invoice is only locked in, not yet settled. At that point, it is no longer possible for the sender to revoke the payment, but the receiver can still choose whether to settle or cancel the htlc and invoice (htlcswitch: hodl invoice).
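The lock-in-then-decide flow described above can be sketched as a small state machine. All types and method names below are illustrative, not lnd's actual invoice code:

```go
package main

import (
	"errors"
	"fmt"
)

// invoiceState models the simplified hodl-invoice lifecycle: once the
// HTLC is accepted, the sender can no longer revoke the payment, but
// the receiver may still settle or cancel.
type invoiceState int

const (
	stateOpen invoiceState = iota
	stateAccepted // HTLC locked in, not yet settled
	stateSettled
	stateCanceled
)

type hodlInvoice struct {
	state invoiceState
}

// acceptHTLC locks in the incoming HTLC without settling it.
func (h *hodlInvoice) acceptHTLC() error {
	if h.state != stateOpen {
		return errors.New("invoice not open")
	}
	h.state = stateAccepted
	return nil
}

// settle releases the preimage, settling the locked-in HTLC.
func (h *hodlInvoice) settle() error {
	if h.state != stateAccepted {
		return errors.New("no accepted HTLC to settle")
	}
	h.state = stateSettled
	return nil
}

// cancel fails the HTLC back to the sender, making the invoice unpayable.
func (h *hodlInvoice) cancel() error {
	if h.state == stateSettled {
		return errors.New("already settled")
	}
	h.state = stateCanceled
	return nil
}

func main() {
	inv := &hodlInvoice{}
	if err := inv.acceptHTLC(); err != nil {
		panic(err)
	}
	fmt.Println(inv.settle() == nil) // true
}
```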
The new invoice function
CancelInvoice has been implemented.
CancelInvoice can be called on a hodl invoice, but also on a regular invoice. It makes the invoice unpayable (invoices: CancelInvoice).
A last improvement to the invoices subsystem is the ability to subscribe to updates of a single invoice instead of receiving all invoice updates (invoices: add subscribesingleinvoice).
Autopilot Sub-Server allows users to programmatically drive certain aspects of the
autopilot system. Before this new Sub-Server, the only way to modify the settings of
autopilot was to modify command line parameters and restart the system. This new Sub-Server allows users to turn autopilot on/off without restarting, and also to query the score of a prospective node with the new query interface.
Router Sub-Server presents a simplified interface for sending payments off-chain, and also for getting a fee estimate for potential off-chain payments. In future releases, we’ll begin to revamp the off-chain sending interface in order to give users more control w.r.t. when we start/stop attempting to fulfill a payment attempt, and also more transparency w.r.t. the state of an initiated off-chain payment.
lnd will now allow a user to access the Tor daemon with
NULL authentication. Additionally, it’s now possible to listen on a distinct interface that isn’t localhost when running in auto hidden service mode. This allows users that are aware of the implications to run in a hybrid mode that accepts both inbound clearnet and hidden service connections. Additionally, users can now listen on an arbitrary interface if they have outbound Tor configured.
We’ll now start syncing headers and filter headers as soon as
lnd’s wallet is created/unlocked. This greatly improves the user experience by reducing the amount of time needed to reach a fully synced light client.
We’ll now reliably broadcast transactions to our Bitcoin peers to ensure they propagate throughout the network.
Library Enhancements and Multi-Module Support
This release begins some efforts towards transitioning into a multi-module repository. This allows specific packages within
lnd to be used externally without the need to duplicate code or run into import cycles.
WriteElements methods from
lnwire are now exposed publicly. This allows any Go program to easily be able to serialize structs/data using the codec described in the BOLT documents.
Wallet Bug Fixes and Improvements
The wallet will no longer rescan from its birthday if it has no UTXOs.
The wallet will now properly remove transactions from its persistent store that the chain backend deems as invalid.
The wallet will now properly remove transaction conflicts from its persistent store.
The default recovery window has been increased from 250 to 2500. This increase is meant to ensure that the typical wallet is able to complete the regular seed rescan/import without needing to increase the existing default look-ahead value.
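To see why the larger window matters, consider a sketch of the look-ahead scan: derivation only stops after a full window of consecutive unused address indices, so a used index that lies beyond the window is never discovered. The helper below is illustrative, not the wallet's actual recovery code:

```go
package main

import "fmt"

// lastUsedIndex scans address indices in order, stopping only once
// `window` consecutive unused indices have been seen, and returns the
// highest used index found (-1 if none).
func lastUsedIndex(used map[uint32]bool, window uint32) int {
	last := -1
	var gap uint32
	for i := uint32(0); gap < window; i++ {
		if used[i] {
			last = int(i)
			gap = 0
		} else {
			gap++
		}
	}
	return last
}

func main() {
	// Suppose addresses 0, 5, and 300 received funds.
	used := map[uint32]bool{0: true, 5: true, 300: true}
	// With the old default window of 250, the scan gives up before
	// reaching index 300; with the new default of 2500 it is found.
	fmt.Println(lastUsedIndex(used, 250))  // 5
	fmt.Println(lastUsedIndex(used, 2500)) // 300
}
```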
Two new optional restrictions have been added that influence route optimization for sending payments:
* Maximum route CLTV time lock. Route optimization will be limited to routes that do not exceed the specified CLTV limit (routing: add cltv limit).
* Outgoing channel. Only routes that start with the specified channel will be considered (routing: add outgoing channel restriction).
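Conceptually, the two restrictions above act as a filter over candidate routes, as sketched below; the route type and its field names are hypothetical, not lnd's actual routing types:

```go
package main

import "fmt"

// route carries just the fields relevant to the two new restrictions.
type route struct {
	totalTimeLock  uint32 // total CLTV time lock of the route, in blocks
	firstHopChanID uint64 // short channel ID of the first hop
}

// filterRoutes keeps only routes that satisfy an optional CLTV limit
// and an optional outgoing-channel restriction (0 means unrestricted).
func filterRoutes(routes []route, cltvLimit uint32, outgoingChan uint64) []route {
	var out []route
	for _, r := range routes {
		if cltvLimit != 0 && r.totalTimeLock > cltvLimit {
			continue
		}
		if outgoingChan != 0 && r.firstHopChanID != outgoingChan {
			continue
		}
		out = append(out, r)
	}
	return out
}

func main() {
	routes := []route{
		{totalTimeLock: 40, firstHopChanID: 1},
		{totalTimeLock: 600, firstHopChanID: 1}, // exceeds the CLTV limit
		{totalTimeLock: 80, firstHopChanID: 2},  // wrong outgoing channel
	}
	fmt.Println(len(filterRoutes(routes, 144, 1))) // 1
}
```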
Furthermore, several new parameters have been added to the
QueryRoutes RPC call to allow more control over the returned route (lnrpc: deprecate QueryRoutes with more than one route). Requesting multiple routes from
QueryRoutes based on the k-shortest paths algorithm has been deprecated; this behaviour can be re-implemented client side. For the
SendToRoute RPC call, the ability to specify multiple routes has also been deprecated (lnrpc: deprecate SendToRoute with more than one route).
The default CLTV delta for channels created by
lnd has been lowered from 144 blocks to 40 blocks. Future versions of
lnd will begin to automatically modify this parameter based on the sampled fee levels in the chain.
Breacharbiter Preparatory Work
A number of enhancements to the Breacharbiter have been made which are required for the ultimate watch tower implementation. These changes ensure that
lnd is able to continue to function if it isn’t the one that ends up sweeping all the outputs in the case of a breach (the tower might sweep the commitment outputs for example, and lnd sweeps the HTLCs itself).
Improvements have been made to the
PendingChannels report. Several categories of funds in limbo that were previously unreported have been added to the report (rpc+contractcourt: merge the contractResolver state into the pendingchannels RPC response).
The full list of changes since
0.5.2-beta can be found here:
Contributors (Alphabetical Order)
Johan T. Halseth