This release marks the fifth major release of lnd, and represents a massive step in the robustness and reliability of lnd as a routing node daemon for the Lightning Network. Additionally, a number of optimizations have been implemented which will reduce the memory and CPU usage of lnd, making it more amenable to running on smaller devices like Raspberry Pis and, eventually, mobile phones! A number of bug fixes related to reliable HTLC forwarding, persistence recovery, and path finding have also landed in this release. As a result, users should generally find path finding to be a smoother experience, and should find that lnd is able to recover from a number of partial and complete failures in routine protocol exchanges.
The 0.5-beta release doesn't include any strictly breaking changes. As a result, users should find the upgrade process to be smooth. If one is upgrading from 0.4.2, the initial startup logs should look something like:
2018-08-29 01:50:42.690 [INF] LTND: Version 0.5.0-beta commit=73af09a06ae9cd5ba92a376e8253ae5450fe09cc
2018-08-29 01:50:42.690 [INF] LTND: Active chain: Bitcoin (network=mainnet)
2018-08-29 01:50:42.925 [INF] CHDB: Checking for schema update: latest_version=5, db_version=0
2018-08-29 01:50:42.925 [INF] CHDB: Performing database schema migration
2018-08-29 01:50:42.925 [INF] CHDB: Applying migration #1
2018-08-29 01:50:43.100 [INF] CHDB: Populating new node update index bucket
2018-08-29 01:50:45.345 [INF] CHDB: Populating new edge update index bucket
2018-08-29 01:51:19.532 [INF] CHDB: Migration to node and edge update indexes complete!
2018-08-29 01:51:19.532 [INF] CHDB: Applying migration #2
2018-08-29 01:51:19.613 [INF] CHDB: Migrating invoice database to new time series format
2018-08-29 01:51:19.613 [INF] CHDB: Migration to invoice time series index complete!
2018-08-29 01:51:19.613 [INF] CHDB: Applying migration #3
2018-08-29 01:51:19.613 [INF] CHDB: Migrating invoice database to new outgoing payment format
2018-08-29 01:51:19.613 [INF] CHDB: Migration to outgoing payment invoices complete!
2018-08-29 01:51:19.613 [INF] CHDB: Applying migration #4
2018-08-29 01:51:57.457 [INF] CHDB: Migration of edge policies complete!
2018-08-29 01:51:57.457 [INF] CHDB: Applying migration #5
2018-08-29 01:51:57.458 [INF] CHDB: Migrating database to support payment statuses
2018-08-29 01:51:57.458 [INF] CHDB: Marking all known circuits with status InFlight
2018-08-29 01:51:57.458 [INF] CHDB: Marking all existing payments with status Completed
2018-08-29 01:51:57.458 [INF] CHDB: Migration of payment statuses complete!
One lncli-related change that users running on testnet will notice is that the default location for macaroons has now changed. As a result, lnd will generate a new set of macaroons after it has initially been upgraded. Further details can be found below, but lnd will now generate a distinct set of macaroons for each network, rather than sharing a single set with mainnet. As a result, you may need to supply additional arguments for lncli to have it work as normal on testnet, like so:
lncli --network=testnet getinfo
lncli --chain=litecoin --network=testnet getinfo
In order to cut down on the typing one needs to go through, we recommend creating an alias like so:
alias tlncli="lncli --network=testnet"
NOTE: In this release, the
--noencryptwallet command line and config argument to
lnd has been phased out. It has been replaced with an argument identical in functionality, but distinct in naming:
--noseedbackup. The rationale for this change is to remove the footgun that was the prior config value, as many users would unknowingly create mainnet nodes using the argument. This is dangerous, as the user wouldn't receive a recovery mnemonic to restore their on-chain funds in the case of disaster. We've changed the name of the argument to better reflect the underlying semantics.
Verifying the Release
In order to verify the release, you’ll need to have
gpg2 installed on your system. Once you’ve obtained a copy (and hopefully verified that as well), you’ll first need to import
roasbeef’s key if you haven’t done so already:
curl https://keybase.io/roasbeef/pgp_keys.asc | gpg --import
The keybase page of
roasbeef includes several attestations across distinct platforms in order to provide a degree of confidence that this release was really signed by “roasbeef”.
Once you have his PGP key you can verify the release (assuming
manifest-v0.5-beta-rc2.txt and manifest-v0.5-beta-rc2.txt.sig are in the current directory) with:
gpg --verify manifest-v0.5-beta-rc2.txt.sig
That will verify the signature on the manifest file, which ensures integrity and authenticity of the binaries you've downloaded locally. Next, depending on your operating system, you should then re-calculate the
sha256 sum of the binary, and compare that with the following hashes (which are included in the manifest file):
One can use the
shasum -a 256 <file name here> tool in order to re-compute the
sha256 hash of the target binary for your operating system. The produced hash should be compared with the hashes listed above and they should match exactly.
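The manual comparison above can also be scripted. A minimal sketch (the manifest format of `<hash>  <filename>` lines is an assumption, and all paths/names are placeholders):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Compute the hex-encoded SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_manifest(manifest_path: str, binary_path: str,
                            binary_name: str) -> bool:
    """Return True if the manifest lists a '<hash>  <name>' line whose hash
    matches the locally computed digest of the downloaded binary."""
    actual = sha256_file(binary_path)
    with open(manifest_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[1] == binary_name:
                return parts[0] == actual
    return False
```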
Finally, you can also verify the tag itself with the following command:
git verify-tag v0.5-beta-rc2
You should see the following if the verification was successful:
gpg: Signature made Wed Sep 5 21:41:45 2018 PDT
gpg: using RSA key 65317176B6857F98834EDBE8964EA263DD637C21
gpg: Good signature from "Olaoluwa Osuntokun <email@example.com>" [ultimate]
Once go1.12 is released, we'll be switching to a build method that allows for deterministic builds, enabling third-party verifiability of the binaries included in each release.
This release can also be found in
roasbeef’s public keybase folder.
⚡️⚡️⚡️ OK, now to the rest of the release notes! ⚡️⚡️⚡️
Switch to Mainline btcsuite
With this release of
lnd, the project no longer uses roasbeef’s set of forks for the
btcsuite family of libraries such as
btcutil. The old set of forks will no longer be maintained, as all development will now be focused on the mainline btcsuite repositories.
roasbeef is now a maintainer of the
btcsuite set of libraries. As a result, we'll be able to easily integrate any new features or bug fixes we need into
btcsuite directly, rather than maintaining our own fork again. We recommend that those users running
lnd with a
btcd backend upgrade to the latest version of the master branch of btcd.
txindex For Full Node Backends is Now Optional!
Before this release, if a user was running with any of the supported full node backends we required them to run with the transaction index active. With this version of
lnd, running a full node backend with a transaction index is now optional! As a result, if a user wishes to run a lighter version of their full node without the transaction index, then they’re able to do so. However, for performance reasons until the persistent height hints are re-activated, we recommend running with an active transaction index for your full node which backs
lnd. In either case,
lnd will automatically detect if the backing full node has an active transaction index and act accordingly.
Future releases of
lnd will allow for even lighter full node configuration by supporting pruned nodes as a first class citizen.
Neutrino BIP 157+158 Compliance and Optimizations
This release of
lnd contains several bug fixes, and optimizations for the
neutrino light client backend. Additionally, our implementation of BIP 157 and BIP 158 is now fully compliant with the latest version of the set of BIPs. The primary change between this version of BIP 158 and the prior version lies in exactly what the filters contain. In the prior version, the regular filter contained: the txid of each transaction found in a block, the previous outpoint that each input spends, and finally the pkScript of each created output. The new version instead simply includes: the previous output script that each input references, and each pkScript created by outputs in the block. This modification results in more compact filters, as scripts can be de-duplicated across blocks, and we drop an additional element per transaction. Several core interfaces within
lnd have been revamped to listen for spends based on scripts (rather than outpoints) and confirmations based on scripts (rather than txids).
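To illustrate the difference in filter contents, here is a toy sketch (hypothetical transaction tuples, not real wire-format parsing) of what each filter version commits to:

```python
# Each tx is (txid, [prev_outpoints], [input_prev_scripts], [output_scripts]).
def old_filter_items(block_txs):
    """Old BIP 158 regular filter: txids, spent outpoints, and output scripts."""
    items = []
    for txid, prev_outs, _, out_scripts in block_txs:
        items.append(txid)
        items.extend(prev_outs)
        items.extend(out_scripts)
    return items

def new_filter_items(block_txs):
    """Current BIP 158 regular filter: only previous output scripts and
    created output scripts, de-duplicated across the whole block."""
    items = set()
    for _, _, prev_scripts, out_scripts in block_txs:
        items.update(prev_scripts)
        items.update(out_scripts)
    return items
```

Because scripts repeat across transactions while txids and outpoints never do, the new set is typically much smaller than the old list.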
A re-write of the syncing logic for neutrino was undertaken in order to fix a number of stalling and performance-related bugs in the prior implementation. At the time of writing of these release notes,
btcd is the only full node implementation that is able to serve BIP 157 clients. The latest version of the master branch of
btcd has also been updated to be fully compliant with both BIP 157 and 158.
The latest version of the
neutrino implementation that's packaged with lnd will now cache filters and blocks in memory. In prior versions of the implementation, all filters would be written to disk. This is unnecessary, as in the typical case a filter is only scanned and checked once, therefore it's safe to never write them to disk and instead only maintain a simple in-memory cache with a size-based eviction policy. Caching filters (with an option to write select filters to disk) allows us to reduce the on-disk footprint for the neutrino mode of operation. We'll also now maintain a cache for blocks, as during channel validation it's likely that a block contains several funding transactions. Caching these blocks allows us to cut down on redundant p2p traffic, instead utilizing a pre-deserialized version of a block for validation purposes.
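The size-based eviction policy described above can be pictured as a simple size-bounded LRU cache (a hypothetical stand-in for neutrino's actual Go implementation):

```python
from collections import OrderedDict

class SizeLimitedCache:
    """LRU cache that evicts the least recently used entry once the summed
    size of the stored values exceeds max_bytes."""
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.cur_bytes = 0
        self._items = OrderedDict()  # key -> (value, size)

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key][0]

    def put(self, key, value, size: int):
        if key in self._items:
            self.cur_bytes -= self._items.pop(key)[1]
        self._items[key] = (value, size)
        self.cur_bytes += size
        # Evict from the least recently used end until back under budget.
        while self.cur_bytes > self.max_bytes and len(self._items) > 1:
            _, (_, evicted_size) = self._items.popitem(last=False)
            self.cur_bytes -= evicted_size
```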
Finally, a number of bugs have been fixed in the primary rescan logic for
neutrino which serves as a base abstraction for many components within lnd.
The ControlTower has been integrated into the switch, which prevents payments that reuse a payment hash from being in-flight simultaneously, in addition to rejecting further attempts once a payment to a given hash has succeeded. By comparing the payment hashes directly, this also prevents paying two distinct invoices that include the same payment hash.
Query Graph Sync
With this version of
lnd, we now implement the "query graph syncing" feature which has recently been added to the BOLT specifications. With this change, establishing connections to new peers for a fresh node is much lighter. The primary distinction is that when requesting the network view of the node we're connecting to, we'll no longer request that they send all the data they have. Instead, treating the blockchain and the channels opened within it as a time series, we're able to precisely request only the data we need, eliminating redundant bandwidth usage and processing on both sides.
As a result of this change, the load on routing nodes should generally be much lower, as they’ll only request new channels they don’t already know of from newly connected peers. We’ve taken an additional step forward, and now require this feature for nodes that the neutrino mode will connect out to. By doing this, we ensure the node we’re connecting to doesn’t send any zombie channels, causing us to populate our local network view with stale, likely abandoned channels.
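The time-series idea can be sketched as follows (a toy helper, not the actual BOLT gossip_queries messages): since each channel is anchored at the block height of its funding transaction, a node can ask a peer only for channels confirmed after the last height it already knows about.

```python
def channels_after(known_height: int, remote_channels):
    """Given a peer's channel set as (short_channel_id, block_height) pairs,
    return only those confirmed after our last known height."""
    return [scid for scid, height in remote_channels if height > known_height]
```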
Aggressive Graph Vertex Pruning
In order to maintain a healthy view of the network,
lnd currently prunes any channels which haven't sent out a channel update heartbeat in 2 weeks. We call these pruned channels "zombie channels". In this release, [we now go a step further and prune out any nodes which don't have any active channels within the network](https://github.com/lightningnetwork/lnd/pull/1371). This serves to keep our view of the network tight and lively.
lnd currently keeps a special set of
LinkNodes within the database that represent nodes which we have direct channels with. In addition to this unconnected vertex pruning, we now ensure that we won't automatically attempt to connect to a node on start up if we don't have any existing channels with it. In the past, the lack of this feature has caused issues for larger nodes that have historically had a high channel turnover rate.
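The vertex-pruning pass can be sketched over a simple adjacency representation (a simplified model, not lnd's channeldb code):

```python
def prune_unconnected_nodes(nodes, channels):
    """Keep only nodes that participate in at least one live channel.

    nodes:    set of node pubkeys
    channels: iterable of (node1, node2) pairs for channels that survived
              zombie pruning
    """
    connected = set()
    for a, b in channels:
        connected.add(a)
        connected.add(b)
    return nodes & connected
```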
Async Daemon Start Up
In prior versions of
lnd, a number of blocking actions such as re-registering for confirmation and spend notifications would slow down the start up time of the daemon, as we would wait for things like historical dispatches to finish before moving on to the next sub-system to start up. Additionally, in prior releases, establishing a new peer connection was done in a synchronous manner, meaning that we would only be able to carry out a single p2p handshake at a time. In the worst case,
lnd would take tens of minutes to start up if the node was heavily loaded with channels.
In this new release we’ve modified all
ChainNotifier registrations to be fully async. As a result, we’ll no longer block for their historical dispatch checks on startup, and instead can pipeline the start up of all sub-systems within the daemon. On the server side, once we obtain a TCP socket, all other peer negotiation is now done in a distinct goroutine. These two changes should dramatically lower the initial start up time of the daemon for more heavily loaded nodes.
A new routing/payment related RPC has been added to lnd:
SendToRoute. The RPC can be seen as a companion RPC to the existing
QueryRoutes RPC. One can view this RPC as the Lightning analog to the
createrawtransaction RPC typically implemented within Bitcoin full node daemons. The
SendToRoute RPC allows a caller to specify a custom route, which includes all details required to dispatch an HTLC such as the fee and time lock information at each hop of the route. The RPC has been fashioned in a way that allows users to either re-use the existing output from the
QueryRoutes command, or craft a custom route by hand via a special JSON route format.
There are three ways to specify routes:
- using the --routes parameter to manually specify a JSON-encoded set of routes in the format of the return value of queryroutes:
lncli sendtoroute --payment_hash=<pay_hash> --routes=<route>
- passing the routes as a positional argument:
lncli sendtoroute --payment_hash=pay_hash <route>
- or reading in the routes from stdin, which can allow chaining the response from queryroutes, or even read in a file with a set of pre-computed routes:
lncli queryroutes --args.. | lncli sendtoroute --payment_hash=H -
Notice the '-' at the end, which signals that lncli should read the route in from stdin.
This was one of our most requested RPCs, as it allows the caller to execute advanced maneuvers on the Lightning Network such as self-rebalancing channels, building custom protocols which rely on data delivered within the Sphinx per-hop onion blob, and also cross-chain atomic swaps, which need to manually specify that a particular HTLC is to be forwarded on a distinct chain from the one it came in on.
Strict Local Forwarding Switch
In order to more precisely support the creation of self channel rebalancing scripts, we've modified the HTLC Switch to implement strict local forwarding. Before this change, when a node had multiple channels to another node, and the first hop specified was meant to traverse that node's links, the system would select the link with the highest available bandwidth. However, it may be the case that a user's rebalancing script instead wishes to target a distinct channel. With strict forwarding, we'll ensure that we take the specified first hop rather than attempt to make a forwarding-time decision using our additional information. Notably, we don't do so for remote routes, as there's no guarantee as to which link a node forwarding a remote HTLC will choose, since there's no way to enforce a particular action.
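The policy change can be sketched like so (a hypothetical data model, not lnd's switch internals): locally-sourced payments use exactly the requested channel, while remote forwards may still pick the best link to the same peer.

```python
def select_outgoing_link(links, requested_chan_id, locally_sourced):
    """links: dict of chan_id -> available bandwidth (msat), all to one peer."""
    if locally_sourced:
        # Strict forwarding: honor the exact first hop the caller specified.
        if requested_chan_id not in links:
            raise ValueError("requested channel unavailable")
        return requested_chan_id
    # Remote forwards: free to pick the link with the most bandwidth.
    return max(links, key=links.get)
```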
Automatic Channel Disable Policy
Within the protocol there exists a mechanism that allows nodes to "disable" a channel, marking it ineligible for carrying routed HTLC payments. Disabling channels that are faulty, inactive, or unable to route for whatever reason allows nodes on the network to have a better view of the "healthy" set of routable channels. The latest version of
lnd will now disable channels in two instances:
1. When we co-op close or force close a channel. This signals to the network that the channel is in the process of being closed on the main chain, and therefore isn't eligible to route HTLCs. By sending out this disable update, we save the network a set of failed HTLC forwarding attempts between the point of commitment broadcast and the transaction being mined into the chain.
2. If the peer is unreachable for a period of time
T. The current default period is 20 minutes, however this can be set from the command line via the --inactivechantimeout argument:
--inactivechantimeout=  If a channel has been inactive for the set time, send a ChannelUpdate disabling it. (default: 20m0s)
These two measures should serve to reduce the number of failed routed HTLCs due to
UnknownNextPeer errors, and help the network tend towards a view composed of nodes with high uptime and availability. This is a small step towards our goal of bootstrapping a network with reliable, highly available nodes.
Reduced Idle CPU Usage
Users operating more heavily-loaded routing nodes should generally perceive lower idle CPU usage. A number of optimizations have been executed which reduce the number of idle goroutines, the number of goroutines per connection/channel, and also the number of high frequency tickers within the codebase.
In 0.4.2, idle links would wake up every 50ms to check if they had any HTLCs to process. This caused wasteful CPU utilization, since we should only need to do so if there are unprocessed HTLCs. To remedy this, a new ticker package was implemented that allows the tickers to be stopped and resumed conditionally, based on the presence or absence of pending HTLCs. With 0.5, the idle CPU usage of active links has been reduced drastically since links will now be truly asleep when not processing HTLCs.
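The idea behind the new ticker behavior can be sketched as follows (a simplified, single-threaded stand-in for the Go implementation): the tick source is paused while a link is idle, and only resumed while HTLC batches are pending.

```python
class ConditionalTicker:
    """Ticker that only 'fires' while resumed; idle links pause it so no
    wakeups occur when there are no pending HTLCs."""
    def __init__(self):
        self.active = False
        self.ticks = 0

    def resume(self):
        self.active = True

    def pause(self):
        self.active = False

    def tick(self):
        """Called by the scheduler; counts as a wakeup only when active."""
        if self.active:
            self.ticks += 1
            return True
        return False

def process_batches(ticker, pending_batches):
    """Resume the ticker only while HTLC batches are pending, then pause."""
    processed = 0
    for batch in pending_batches:
        ticker.resume()
        while ticker.tick() and batch:
            batch.pop()
            processed += 1
        ticker.pause()
    return processed
```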
Automatic Tor V2+V3 Onion Services
Prior versions of
lnd introduced the ability to establish outbound connections over Tor via the socks proxy interface. This allows users to run routing nodes or clients without revealing the location of their routing nodes. This adds an additional layer of privacy as nodes no longer need to expose their IP address in order to route or send payments within the network. The latest version of
lnd takes things a step further and enables automatic provisioning of an onion service to allow a node to accept inbound connections over Tor.
The auto setup works as follows: if
--tor.active and --tor.v2 are set within the configuration, then
lnd will attempt to automatically seek out and authenticate with the Tor daemon running at the specified control port. If we're able to connect, then we'll create a new onion service identity, and modify
lnd to only listen on
localhost. In this mode, we also ensure that all DNS queries utilize the SOCKS5 interface for tunneling DNS over Tor. In this version we also support the new v3 onion services (--tor.v3). The new onion service protocol represents a large step for the Tor network as it does away with the existing legacy crypto used within the system, and also strengthens onion services against a number of discovered attacks.
We’ll soon be updating our DNS seed to be able to crawl and serve onion service peers. This will allow those that wish to run purely over Tor to easily find peers they can connect to. For further documentation we recommend users check out our official Tor integration docs, as well as the relevant section of the
Dataloss Protection Recovery
Within the protocol, there exists a measure put in place that will allow nodes that have partially or completely lost data to recover a portion of the funds they had within active channels. We call this feature "dataloss protection". The latest version of
lnd has now completely implemented this feature! In the rare case that users exhibit partial data loss, upon connection to a peer which we had a channel open with,
lnd will automatically prompt the user to close out the channel as it can no longer be used. At that point, we’ll then proceed to sweep out settled balance within the commitment transaction on-chain, and clean up the remaining channel state.
Future versions of
lnd will finalize the integration of this feature by also introducing static channel backups. These backups are essentially static files which represent a description of the channel, namely: the parameters used, location on chain, channel peer, key paths we used within the channel, etc. With this set of backups and a user's seed, in the face of total data loss, we'll be able to recover the settled balance in the set of open channels.
Reliability Improvements to the bitcoind Backend
Prior versions of
lnd were plagued with reliability issues when interacting with the
bitcoind backend. We'd at times miss notifications, or even drop block notifications, causing us to miss events such as a funding transaction confirming, or a channel being closed. With this new version of
lnd we’ve implemented several measures to ensure that we no longer miss any notifications from
bitcoind, and even if we do, then we’re able to safely backtrack and recover from any missed block notifications.
Prior to 0.5, we would receive block and transaction notifications via the same ZMQ socket. As transaction notifications (mempool inclusion) are much more common than block notifications, they would dominate the queued backlog at any given time. In certain conditions, due to the notification backlog, block notifications would be dropped once the queue got above a high water mark. To avoid possibly missing block notifications, we now split the notification sources into two distinct sockets. This ensures that the less critical transaction notifications are isolated to a distinct queue from the block notifications. Due to this change, users must now specify distinct sockets for block and transaction notifications like so:
lnd --bitcoin.active --bitcoin.testnet --bitcoin.node=bitcoind --bitcoind.zmqpubrawblock=tcp://127.0.0.1:28332 --bitcoind.zmqpubrawtx=tcp://127.0.0.1:28333
HTLC Switch Persistence and Reliability Improvements
Removing links has been reworked to be blocking from the caller’s perspective, offering safer isolation during shutdown and interactions with flapping peers. When shutting down LND, stopping links is now done concurrently, offering faster shutdowns to users with high channel counts.
Prior to the added safety surrounding removal of links, some issues were found that caused users to end up with an invalid, albeit recoverable, database state. 0.5 includes a fix to automatically clean up any databases that entered this state, which would otherwise prevent startup. The link startup logic has also been altered to ensure we don't read from this invalid state.
The htlcswitch relies on a series of internal logs, referred to as forwarding packages, for ensuring that HTLCs are retransmitted internally with at-least-once semantics. An issue was fixed in 0.5 where a missing reference on failed packets would prevent the persistent references from being removed, resulting in unnecessary internal retransmission and processing of HTLCs on startup and peer reconnection. To correct databases that were not properly cleaning up this state, links will now clean up any references for packets that are detected as duplicates internally. The combined result of both changes is reduced log spam and startup/reconnection latency.
Processing of locally-sourced HTLC responses has been made asynchronous, so that it does not block the primary forwarding loop within the switch, resulting in better performance and database batching. Database operations have also been reordered to properly clean up forwarding package references, even if the daemon has been restarted. References that had not been cleaned up prior will be cleaned up after a restart with 0.5.
On channel reestablishment, links will now force close the channel when detecting certain irrecoverable failure cases, such as remote data loss and invalid commitment points.
When sending locally-initiated payments out of the switch, we will now honor the exact channel requested by the channel router. Previously, we allowed the switch to select the best link to the same peer if multiple existed. However, this caused issues when trying to use
SendToRoute, since the user couldn't be sure which channel the payment would flow over. This change provides a better feedback mechanism to the router, since local failures are now known to have been sourced from the outgoing link.
Link-level fee updates are now less aggressive, and use a randomized interval for each link. Previously, all links checked for fee updates with each block, which resulted in an unnecessary number of state updates.
Timelocks of forwarded HTLCs are now validated against the outgoing channel policy. Previously, the CLTV was incorrectly compared to the policy of the incoming link, resulting in unnecessary routing failures.
An exit hop will now properly return FailFinalExpiryTooSoon when rejecting an HTLC whose timeout is too close to the current block height. The previous behavior incorrectly returned FailFinalIncorrectCltvExpiry, which should only be used if the timeouts are malformed.
Default Autopilot Improvements
The current autopilot driving agent has received a number of updates in this new version.
It's now possible to instruct the agent to only create unadvertised channels via the
autopilot.private flag. This will be useful for desktop and mobile clients which won’t be actively routing payments. When attempting to receive funds over these non-advertised channels, the
AddInvoice RPC will now automatically populate the required routing hints which will allow nodes to traverse these non-advertised channels on the “edge” of the graph.
One can now also specify the number of confirmations that outputs need to have before the agent starts to use them as inputs for channels. By specifying 0 confirmations, the agent is able to aggressively pipeline channel openings, resulting in a faster time-to-first-n-channels than prior versions.
Finally, the agent will now first probe nodes to see if they're actually active and online before marking them as a target directive to be executed. This will result in fewer failed attempts, as we only try to open a channel with a node that we know will respond to our request.
Distinct Network Macaroons
In prior versions of
lnd, a single set of macaroons was used for all possible networks (testnet, simnet, mainnet). This approach was flawed, however, as if one gave out a (possibly attenuated) macaroon for, say, testnet, then that same macaroon would be usable for mainnet. This new version of
lnd has reverted this behavior in favor of network-specific macaroons. With this change, the default location of all macaroons has been modified, and the behavior of
lncli changes as well. Once users upgrade, a new set of macaroons will be created under the chain data directory for each supported network. For example, one can find the
testnet invoice macaroon for Bitcoin at:
Most other behavior has been left unchanged, however, one must now also specify the target
network (and also possibly
chain) when using
lncli, via the new set of arguments:
--chain value, -c value the chain lnd is running on e.g. bitcoin (default: "bitcoin")
--network value, -n value the network lnd is running on e.g. mainnet, testnet, etc. (default: "mainnet")
Any scripts or gRPC programs will need to be modified in order to utilize the new set of macaroons, as any other prior created macaroons are now invalidated, as the root macaroon key has been regenerated.
Revamp of HTLC Pathfinding Algorithm
A number of changes have been made to the default HTLC path finding algorithm to fix existing bugs in our fee calculation, edge weighting, and fee ceiling enforcement.
The old path finding algorithm would proceed to look for a path to the destination, starting from a given source (
lnd). The old algorithm had several issues, namely: we would take into account our first outgoing edge in the edge weighting, we wouldn't properly carry fees over backwards in the route as they accumulated (meaning we wouldn't compute the fee ceiling properly), and finally we would skew the path finding based on our own outbound routing policies. The new path finding code fixes all of these issues, and also revamps our testing infrastructure to make it easy to add new test cases in the future.
Our old weighting function would at times prefer a route with higher fees but an identical timelock over a shorter route with lower fees and a similar timelock. Rather than try to scale the fees or timelock to be proportional, we instead now normalize the timelock values to essentially act as "extra fees". We borrow the terminology of a "risk factor" from c-lightning. Check out the table from the original PR for a demonstration of how this improves our path selection given a set of candidate routes.
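The normalization can be sketched like so (the risk factor constant here is illustrative, not lnd's exact value): the timelock a hop adds is converted into "implied fees" proportional to the amount locked up, so the weight function compares candidate edges in a single unit.

```python
def edge_weight(amt_msat, fee_msat, time_lock_delta,
                risk_factor=0.000000015):
    """Weight = real fee plus the 'cost' of locking amt_msat up for the
    edge's CLTV delta, scaled by a small risk factor."""
    time_lock_penalty = amt_msat * time_lock_delta * risk_factor
    return fee_msat + time_lock_penalty
```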
Finally, the prior path finding code had a bug where it wouldn't properly carry over the fees from the prior hop when traversing backwards to convert a path into a route. This issue would cause unnecessary HTLC routing errors when routing over edges with particular fee configurations. This new release of lnd fixes this bug by ensuring we properly compute and carry over fees, which allows us to properly detect the case where a link can carry the initial amount, but once we factor in fees, can no longer carry the final HTLC.
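The backwards fee accumulation can be sketched as follows (hypothetical policy tuples, not lnd's routing types): walking from the destination back to the source, each hop must forward the downstream amount plus all downstream fees, and each hop's capacity is checked against that grown amount.

```python
def build_route(amt_msat, hops):
    """Convert a path into per-hop forward amounts.

    hops: list of (capacity_msat, fee_base_msat, fee_rate_ppm) tuples from
    source to destination. Raises ValueError if any link can't carry the
    amount once downstream fees are folded in.
    """
    amounts = []
    running = amt_msat
    # Traverse backwards: the last hop forwards exactly amt_msat, while
    # earlier hops must also carry the fees charged by every hop after them.
    for capacity, base, rate_ppm in reversed(hops):
        if running > capacity:
            raise ValueError("insufficient capacity once fees are included")
        amounts.append(running)
        running += base + running * rate_ppm // 1000000
    amounts.reverse()
    return amounts
```

The second test case below exercises exactly the bug described above: a link that can carry the initial amount, but not the amount plus accumulated fees, is now correctly rejected.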
contractcourt Reliability Improvements
A number of bug fixes and reliability improvements have been made within the
contractcourt, the sub-system that lnd uses to handle all on-chain interactions related to contracts (such as HTLCs). We now ensure that the handoff of a closed channel to the resolver which will ultimately resolve any pending contracts is fully reliable.
Optional NAT Traversal (NAT-PMP + UPnP)
lnd has now gained the ability to optionally attempt NAT traversal so clients that are behind a NAT are able to establish incoming connections from other peers in the network. The current system will try either NAT-PMP or UPnP to punch a hole in the NAT, whichever of them works first. If
lnd is unable to punch a hole, then it will fail to start in order to inform the user that the networking maneuver was unsuccessful. Additionally,
lnd will spawn a background goroutine which will periodically poll the router to see if the external IP has changed; if so, we'll send out a new announcement on the network so that nodes can always reach us at our latest IP address. This feature will be useful for those that cannot obtain a static IP where they run their nodes, and instead have a dynamic IP address which changes every few hours or days.
In order to activate the auto NAT traversal use the following argument:
--nat Toggle NAT traversal support (using either UPnP or NAT-PMP) to automatically advertise your external IP address to the network -- NOTE this does not support devices behind multiple NATs
Unix Socket Support for RPC
The primary gRPC server is now able to listen on unix sockets! An example of a valid configuration is:
Robust Streaming Notification Delivery for Received Payments
In this new version of
lnd, we've modified the streaming invoice subscription API slightly in order to give callers assurance that they haven't missed any new payments. The
SubscribeInvoices API now has two new values:
add_index and settle_index. To match these new values, the
Invoice message has also gained a similar set of fields. These two indexes effectively act as an event time series: each time a new invoice is added the
add_index will be incremented, and each time a new invoice is settled the
settle_index will be incremented. With this new feature, clients can now specify one or both of these new optional fields with the last index they know of. If specified, then we’ll query the database to find all events greater than this index, and then deliver these backlog notifications before sending out any new notifications.
Care has been taken to ensure that the new API is backwards compatible with the expectations of the old API. Namely, if the fields aren’t specified (are zero), then no backlog notifications will be delivered. As a result, the index on-disk actually starts at 1.
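The event-series semantics can be sketched with a toy in-memory stand-in for the invoice database (only the add side is modeled; the settle index behaves analogously):

```python
class InvoiceEvents:
    """Invoices indexed by a monotonically increasing add counter.
    Indexes start at 1, so a client-supplied 0 means 'no backlog'."""
    def __init__(self):
        self.add_index = 0
        self.added = {}  # add_index -> invoice

    def add(self, invoice):
        self.add_index += 1
        self.added[self.add_index] = invoice
        return self.add_index

    def backlog_since(self, last_add_index):
        """All add events strictly after the client's last seen index.
        Zero (the unset default) yields no backlog, matching the old API."""
        if last_add_index == 0:
            return []
        return [inv for idx, inv in sorted(self.added.items())
                if idx > last_add_index]
```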
A database migration has been created in order to upgrade old databases to the new invoice schema, which has these two new indexes that need to be updated each time a new invoice is added, or an existing one settled.
Finally, a new field has been added to the on-disk invoice format:
AmtPaid. This new field allows the link to commit exactly what value was accepted for the final invoice. This is important as invoices may not have any value attached to them at all ("donation" invoices), or it may be the case that the invoice was overpaid. In either case, the final value accepted for an invoice will now be stored on disk, and be queryable over the RPC interface.
The ListInvoices command can now optionally be paginated. This was added as, after a certain number of invoices have been created, we can no longer return them all in a single response over gRPC. On the command line, a new set of arguments has been added to control the pagination:
⛰ lncli listinvoices -h
lncli listinvoices - List all invoices currently stored.
lncli listinvoices [command options] [arguments...]
--pending_only toggles if all invoices should be returned, or only those that are currently unsettled
--index_offset value the number of invoices to skip (default: 0)
--max_invoices value the max number of invoices to return (default: 0)
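The pagination semantics can be sketched like so (a simplified model of the two arguments; the real RPC pages over on-disk indexes rather than a Python list):

```python
def list_invoices(invoices, index_offset=0, max_invoices=0):
    """Return one page of invoices: skip index_offset entries, then return
    at most max_invoices (0 means no explicit cap in this sketch)."""
    page = invoices[index_offset:]
    if max_invoices > 0:
        page = page[:max_invoices]
    return page
```

A client walks the full set by repeatedly advancing `index_offset` by the size of the previous page.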
A new ClosedChannels RPC has been added which will allow users to query their historical closed channel states. The new command allows users to filter by a particular close type as well:
⛰ lncli closedchannels -h
lncli closedchannels - List all closed channels.
lncli closedchannels [command options] [arguments...]
--cooperative list channels that were closed cooperatively
--local_force list channels that were force-closed by the local node
--remote_force list channels that were force-closed by the remote node
--breach list channels for which the remote node attempted to broadcast a prior revoked channel state
--funding_canceled list channels that were never fully opened
On-Chain Fee Management
On-chain fee management within
lnd has been revamped in order to fix a number of errors related to fees being too low, and rounding errors that can occur when converting between vsize and weight. With these changes, we now use the
kilo-weight unit everywhere internally, and now also ensure that we never dip below the widely used minimum relay fee on the network. In the past there were many issues related to funds not being swept from contracts due to sweeping transactions not propagating during times when fees on mainnet and testnet were very low.
The full list of changes since
0.4.2-beta can be found here:
Contributors (Alphabetical Order)
- Ben Woosley
- Brenden Matthews
- Conner Fromknecht
- Dan Bolser
- Johan T. Halseth
- John Griffith
- Joost Jager
- Lightning Koala
- Matthew Lilley
- Offer Markovich
- Olaoluwa Osuntokun
- Oliver Gugger
- Phil Opaola
- Rudy Godoy
- Rui Gomes
- Sebastian Delgado
- Stefan Menzel
- Suriyaa ✌️️
- Vadym Popov
- Valentine Wallace
- Vegard Engen
- Wilmer Paulino
- Xinxi Wang
- Yaacov Akiba Slama
- Yohei Okada