MinFeeRate is the minimum fee rate that must be used by the batch
containing the sweep. If it is specified, confTarget is ignored.
This is useful for an external source of fee rates.
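
As a rough illustration of the intended precedence (the names below
are illustrative, not the actual batcher code):

    import "github.com/lightningnetwork/lnd/lnwallet/chainfee"

    // sweepParams is an illustrative stand-in for the sweep's fee
    // settings.
    type sweepParams struct {
        // MinFeeRate, if non-zero, is used directly.
        MinFeeRate chainfee.SatPerKWeight

        // ConfTarget is only consulted when MinFeeRate is unset.
        ConfTarget uint32
    }

    // feeRateFor picks the fee rate for a sweep: an explicitly
    // provided MinFeeRate wins, otherwise the rate is estimated from
    // the confirmation target.
    func feeRateFor(p sweepParams,
        estimator chainfee.Estimator) (chainfee.SatPerKWeight, error) {

        if p.MinFeeRate != 0 {
            // Fee rate supplied by an external source of fees.
            return p.MinFeeRate, nil
        }

        return estimator.EstimateFeePerKW(p.ConfTarget)
    }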
This is needed to cover the SQLStore code with tests.
To achieve compatibility with loopdb (SQLite), the following changes
were made:
- DestAddr is filled to avoid a crash in the SQL layer
- Preimage is filled to avoid tripping the DB's uniqueness checks
- the code working with batch IDs was changed to work correctly
  when batch_id starts at 1 instead of 0
- the SQL swap store must contain a swap with the swap_hash of the
  sweep to satisfy the DB's foreign key constraint (see the sketch
  below)
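
A minimal sketch of the insertion order the foreign key implies in
tests; the interfaces below are hypothetical stand-ins, not the real
loopdb or sweepbatcher types:

    import (
        "context"

        "github.com/lightningnetwork/lnd/lntypes"
    )

    // swapInserter is a hypothetical stand-in for the swap store.
    type swapInserter interface {
        // AddSwap inserts a swap row keyed by swap_hash.
        AddSwap(ctx context.Context, hash lntypes.Hash) error
    }

    // sweepInserter is a hypothetical stand-in for the batcher store.
    type sweepInserter interface {
        // UpsertSweep inserts a sweep row referencing swap_hash.
        UpsertSweep(ctx context.Context, hash lntypes.Hash) error
    }

    // storeSweepForTest inserts the swap first, so that the foreign
    // key on the sweep's swap_hash is satisfied when the sweep row is
    // written.
    func storeSweepForTest(ctx context.Context, swaps swapInserter,
        sweeps sweepInserter, hash lntypes.Hash) error {

        if err := swaps.AddSwap(ctx, hash); err != nil {
            return err
        }

        return sweeps.UpsertSweep(ctx, hash)
    }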
Provide a wrapped store type, exposing an ExecTx method whose callback
argument is a subset interface. The BaseDB interfaces in instantout,
reservation and sweepbatcher use ExecTx with their own subset Querier
instead of the whole sqlc.Querier (*sqlc.Queries).
This is needed to make the packages more reusable, so they don't
depend on methods of *sqlc.Queries they don't use.
A Querier holds the methods of the sqlc.Querier interface relevant to
a package. A BaseDB is that Querier plus the ExecTx method, as
sketched below.
This change is needed to simplify further rework of ExecTx.
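
A minimal sketch of the pattern; the query method and the TxOptions
type are illustrative placeholders, not the real generated code:

    import "context"

    // Querier holds only the sqlc-generated methods this package
    // actually uses.
    type Querier interface {
        // GetSweepStatus is one example of a generated query the
        // package needs (placeholder signature).
        GetSweepStatus(ctx context.Context, swapHash []byte) (bool, error)
    }

    // TxOptions is a placeholder for the transaction options type
    // (loopdb.TxOptions in the real code).
    type TxOptions interface {
        // ReadOnly returns true if the transaction should be
        // read-only.
        ReadOnly() bool
    }

    // BaseDB is the subset Querier plus the ability to run a callback
    // in a transaction that only sees that subset, instead of the
    // whole *sqlc.Queries.
    type BaseDB interface {
        Querier

        // ExecTx runs fn inside a DB transaction, passing a Querier
        // limited to this package's needs.
        ExecTx(ctx context.Context, opts TxOptions,
            fn func(Querier) error) error
    }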
The race was detected in CI and locally when running with -race.
It happened between the following calls:
WARNING: DATA RACE
Write at 0x00c0003e6638 by goroutine 1374:
runtime.racewrite()
<autogenerated>:1 +0x1e
github.com/lightninglabs/loop/sweepbatcher.(*batch).Wait()
sweepbatcher/sweep_batch.go:463 +0x6e
github.com/lightninglabs/loop/sweepbatcher.(*Batcher).Run.func1()
sweepbatcher/sweep_batcher.go:272 +0x10e
Previous read at 0x00c0003e6638 by goroutine 1388:
runtime.raceread()
<autogenerated>:1 +0x1e
github.com/lightninglabs/loop/sweepbatcher.(*batch).monitorConfirmations()
sweepbatcher/sweep_batch.go:1144 +0x285
github.com/lightninglabs/loop/sweepbatcher.(*batch).handleSpend()
sweepbatcher/sweep_batch.go:1309 +0x10e4
github.com/lightninglabs/loop/sweepbatcher.(*batch).Run()
sweepbatcher/sweep_batch.go:526 +0xb04
github.com/lightninglabs/loop/sweepbatcher.(*Batcher).spinUpBatch.func1()
sweepbatcher/sweep_batcher.go:455 +0xbd
The race was caused by wg.Add(1) and wg.Wait() running in different
goroutines (one goroutine running batch.Run() and the other
batcher.Run()).
To avoid this scenario, the wg.Wait() call was moved into batch.Run(),
so the batch itself waits for its child goroutines; after that the
channel b.finished is closed, which serves as the signal for external
waiters (the batcher calling batch.Wait()).
Also, the channel batch.stopped was renamed to batch.stopping to
better reflect its nature.
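
A stripped-down sketch of the resulting synchronization (fields and
method bodies are simplified, not the actual code):

    import "sync"

    type batch struct {
        wg       sync.WaitGroup
        finished chan struct{}
    }

    func newBatch() *batch {
        return &batch{finished: make(chan struct{})}
    }

    // Run is the only goroutine that touches the WaitGroup: it adds
    // its children, waits for them and then closes b.finished.
    func (b *batch) Run() {
        defer close(b.finished)

        b.wg.Add(1)
        go func() {
            defer b.wg.Done()
            // ... monitor confirmations, handle spends, etc.
        }()

        // Wait for our own child goroutines before signalling
        // completion to external waiters.
        b.wg.Wait()
    }

    // Wait is called by the batcher from another goroutine; it only
    // reads from the channel and never touches the WaitGroup, so the
    // Add/Wait race is gone.
    func (b *batch) Wait() {
        <-b.finished
    }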
Added TestSweepBatcherCloseDuringAdding to make sure that adding a
sweep during shutdown does not cause a crash. The test did not catch
the original race condition.
Changed the argument of the NewBatcher function from LoopOutFetcher
to SweepFetcher (returning the new public type SweepInfo).
This change is backwards-incompatible at the package level, but nobody
seems to use the package outside of Loop.
To use NewBatcher inside Loop, turn loopdb into a SweepFetcher using
the function NewSweepFetcherFromSwapStore.
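
A rough sketch of the shape of the new dependency; the method
signature and the fields of SweepInfo are assumptions here, not a
verbatim copy of the package:

    import (
        "context"

        "github.com/lightningnetwork/lnd/lntypes"
    )

    // SweepInfo stands in for the data the batcher needs about a
    // sweep (the real type carries more fields).
    type SweepInfo struct {
        ConfTarget uint32
    }

    // SweepFetcher abstracts where sweep data comes from, so the
    // batcher no longer depends on loopdb directly. Inside Loop,
    // NewSweepFetcherFromSwapStore adapts loopdb to this interface.
    type SweepFetcher interface {
        FetchSweep(ctx context.Context,
            hash lntypes.Hash) (*SweepInfo, error)
    }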
This data has not been used by the Batcher since the commit
"sweepbatcher: load swap from loopdb, not own store".
Now sweepbatcher.Store depends only on the tables sweeps and
sweep_batches and does not depend on swaps, loopout_swaps and
htlc_keys, making it easier to reuse.
Method Store.GetBatchSweeps provides data from tables outside of
sweepbatcher: swaps, loopout_swaps and htlc_keys. This makes the store
harder to reuse. The Batcher already has a straightforward way to get
swap data: the LoopOutFetcher interface (loopdb).
In this commit I switch the source of that data from the LoopOut field
returned by the Store to loading it independently by calling
LoopOutFetcher.FetchLoopOutSwap.
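
A sketch of the new data flow; the interface below is reduced to the
one method named above, its signature is assumed, and the helper is
illustrative:

    import (
        "context"

        "github.com/lightninglabs/loop/loopdb"
        "github.com/lightningnetwork/lnd/lntypes"
    )

    // LoopOutFetcher is implemented by loopdb and lets the batcher
    // load swap data on demand (assumed signature).
    type LoopOutFetcher interface {
        FetchLoopOutSwap(ctx context.Context,
            hash lntypes.Hash) (*loopdb.LoopOut, error)
    }

    // loadSwap no longer reads a LoopOut field stored alongside the
    // sweep row; it asks loopdb for the swap by hash instead.
    func loadSwap(ctx context.Context, fetcher LoopOutFetcher,
        swapHash lntypes.Hash) (*loopdb.LoopOut, error) {

        return fetcher.FetchLoopOutSwap(ctx, swapHash)
    }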
It used to be set to the default (defaultBatchConfTarget = 12), which
could in theory affect the fee rate if updateRbfRate() and publish()
were not called before the batch was saved. (An unlikely scenario.)
Method sweepbatcher.Store.FetchBatchSweeps (the implementation using
the real DB) runs a JOIN query to load the LoopOut from the swaps
table. Now the mock does the same.
This is needed to test store-and-load scenarios in tests.
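
A minimal sketch of how a mock can emulate that JOIN in memory; the
types and map layout are hypothetical:

    import (
        "github.com/lightninglabs/loop/loopdb"
        "github.com/lightningnetwork/lnd/lntypes"
    )

    // storedSweep is a placeholder for the row the mock keeps per
    // sweep.
    type storedSweep struct {
        SwapHash lntypes.Hash
        LoopOut  *loopdb.LoopOut
    }

    // mockStore keeps sweeps and swaps in memory.
    type mockStore struct {
        sweeps map[lntypes.Hash]storedSweep
        swaps  map[lntypes.Hash]*loopdb.LoopOut
    }

    // fetchBatchSweeps mimics the SQL JOIN: for every sweep it looks
    // up the LoopOut by swap hash, just like the real query joins the
    // swaps table.
    func (m *mockStore) fetchBatchSweeps() []storedSweep {
        result := make([]storedSweep, 0, len(m.sweeps))
        for hash, sweep := range m.sweeps {
            sweep.LoopOut = m.swaps[hash]
            result = append(result, sweep)
        }
        return result
    }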
If the sweep was successfully updated in a batch, there is no need to
try to add it to the other batches.
Added a test reproducing a sweep being added to both batches without
this change.
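
An illustrative sketch of the control flow (names and signatures
differ from the actual batcher code):

    import (
        "context"

        "github.com/lightningnetwork/lnd/lntypes"
    )

    // batch is a placeholder type.
    type batch struct{}

    // addSweep reports whether the sweep already belonged to this
    // batch and was updated in place (placeholder implementation).
    func (b *batch) addSweep(ctx context.Context,
        hash lntypes.Hash) (bool, error) {

        return false, nil
    }

    // handleSweep stops at the first batch that updates the sweep
    // instead of offering it to every batch.
    func handleSweep(ctx context.Context, batches []*batch,
        hash lntypes.Hash) error {

        for _, b := range batches {
            updated, err := b.addSweep(ctx, hash)
            if err != nil {
                return err
            }
            if updated {
                // The sweep was updated in this batch; don't try to
                // add it to the remaining batches.
                return nil
            }
        }

        // Not in any existing batch: fall through to creating a new
        // one (omitted).
        return nil
    }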
Previously, storing an empty batch would make the batcher fail to
start, as spinning up a restored batch assumes that a primary sweep
has already been added. As there's no point in spinning up such a
batch, we can just skip over it.
Furthermore, we'll ensure that we never try to publish an empty batch,
to avoid setting the fee rate too early.
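
A sketch of the two guards, with illustrative types and names:

    import "github.com/lightningnetwork/lnd/lntypes"

    type batch struct {
        sweeps map[lntypes.Hash]struct{}
    }

    // publish refuses to broadcast an empty batch, so the fee rate is
    // never set prematurely.
    func (b *batch) publish() error {
        if len(b.sweeps) == 0 {
            return nil
        }
        // ... build and broadcast the batch tx.
        return nil
    }

    // selectBatchesToSpinUp skips restored batches without any sweeps
    // instead of failing to start.
    func selectBatchesToSpinUp(restored []*batch) []*batch {
        var nonEmpty []*batch
        for _, b := range restored {
            if len(b.sweeps) == 0 {
                continue
            }
            nonEmpty = append(nonEmpty, b)
        }
        return nonEmpty
    }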
Previously we'd report the total sweep cost of the batch as the fee
of each sweep. With this change the reported cost is the proportional
fee, which should be equal for all sweeps, except that any rounding
difference is paid by the sweep belonging to the first input of the
batch tx.
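
A small sketch of the attribution rule, using a hypothetical helper
(not the actual code):

    import "github.com/btcsuite/btcd/btcutil"

    // perSweepFees splits totalFee evenly across numSweeps; any
    // rounding remainder is charged to the sweep that owns the first
    // input of the batch tx.
    func perSweepFees(totalFee btcutil.Amount,
        numSweeps int) []btcutil.Amount {

        feeBase := totalFee / btcutil.Amount(numSweeps)

        fees := make([]btcutil.Amount, numSweeps)
        for i := range fees {
            fees[i] = feeBase
        }

        // E.g. a 1000 sat fee over 3 sweeps yields 334, 333, 333.
        fees[0] += totalFee - feeBase*btcutil.Amount(numSweeps)

        return fees
    }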