- new btdc method used to ensure no junk at the end of our bt data
- DRYed out the RC code
- check inbound bootstraps against all registered routers, not just whitelist
- libquic vbump
- redoing link_manager functions again to implement previously ignored review comments on several PRs
- conceptually merging "whitelist_routers" and new "known_{rids,rcs}", s.t. we can completely eliminate white/red/gray/green/etc lists in favor of something that isn't dumb
- added a config option to disable reachability testing; disabling is required on testnet
- reachability testing pipeline through link_manager executes pings similar to the storage server's: the connection-established hook reports successful reachability, while the connection-closed callback (with a non-default error code) reports unsuccessful testing
- bootstrap cooldown implemented with 1min timer in case all bootstraps fail
- set comparison implemented in non-initial and non-bootstrap rc fetching; set comparison in rid fetching is done every fetch
- nodedb get_random functions refactored into conditional/non-conditional methods. Conditional search implements reservoir sampling for one-pass accumulation of n random rcs
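A minimal sketch of the one-pass reservoir sampling idea used by the conditional get_random variant; the type names and signature here are illustrative stand-ins, not the actual NodeDB API:

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-ins for the real lokinet types; the real method lives in NodeDB.
using RouterID = std::string;
struct RemoteRC {};

// Select up to `n` RCs matching `pred` in a single pass over the map using
// reservoir sampling: the i-th match replaces a random slot with probability
// n/i, so every matching RC ends up selected with equal probability.
std::vector<RemoteRC> get_n_random_rcs_conditional(
    const std::unordered_map<RouterID, RemoteRC>& rcs,
    size_t n,
    const std::function<bool(const RemoteRC&)>& pred,
    std::mt19937_64& rng)
{
  std::vector<RemoteRC> reservoir;
  reservoir.reserve(n);
  size_t seen = 0;  // number of matching RCs encountered so far

  for (const auto& [rid, rc] : rcs)
  {
    if (!pred(rc))
      continue;
    ++seen;
    if (reservoir.size() < n)
      reservoir.push_back(rc);
    else
    {
      // replace a random existing entry with probability n/seen
      std::uniform_int_distribution<size_t> dist{0, seen - 1};
      if (auto j = dist(rng); j < n)
        reservoir[j] = rc;
    }
  }
  return reservoir;
}
```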
This command will be called periodically by clients to maintain a list
of RCs of active relay nodes. It will require another command (future
commit) to fetch the RouterIDs from many nodes and reconcile them so we
have some notion of the goodness of the RCs we're getting; if we get
what seems to be a bad set of RCs (a concept not yet implemented), we
will choose a different relay to fetch RCs from. These are left as
TODOs for now.
Relays will now re-sign and gossip their RCs every 6 hours (minus a
couple random minutes) using the new gossip_rc message.
Removes the old RCGossiper concept
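A rough sketch of that schedule; the helper name, jitter bound, and the usage comment are assumptions for illustration, only the 6-hours-minus-a-couple-random-minutes interval comes from the change above:

```cpp
#include <chrono>
#include <random>

using namespace std::chrono_literals;

// Re-sign and gossip the RC every 6 hours, minus a couple of random minutes
// of jitter so relays don't all flood gossip_rc at the same instant.
inline std::chrono::milliseconds next_rc_gossip_interval(std::mt19937_64& rng)
{
  std::uniform_int_distribution<int> jitter_minutes{0, 5};  // assumed jitter bound
  return 6h - std::chrono::minutes{jitter_minutes(rng)};
}

// Hypothetical usage from the router's periodic logic:
//   loop->call_later(next_rc_gossip_interval(rng), [&] {
//     local_rc.resign();                 // re-sign with a fresh timestamp
//     link_manager.gossip_rc(local_rc);  // flood the new gossip_rc message
//   });
```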
We're removing the notion of find/lookup a singular RC, so this gets rid
of all functions which did that and replaces their usages with something
sensible.
RC "lookup" is being replaced with "gimme all recently updated RCs". As
such, doing a lookup on a specific RC is going away, as is network
exploration, so a lot of what RCLookupHandler was doing will no longer
be relevant. Functionality from it which was kept has moved to NodeDB,
as it makes sense for that functionality to live where the RCs live.
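For illustration only, the shape of a "give me everything updated since X" query that replaces per-RC lookup; all names and the timestamp field are hypothetical, not the actual NodeDB interface:

```cpp
#include <chrono>
#include <string>
#include <unordered_map>
#include <vector>

using RouterID = std::string;                           // stand-in for the real RouterID
using rc_time = std::chrono::system_clock::time_point;

struct RemoteRC
{
  rc_time timestamp;  // when the relay last (re-)signed this RC
};

// Instead of looking up a single RC by RouterID, hand back every RC that
// has been re-signed since the requester's last fetch.
std::vector<RemoteRC> rcs_updated_since(
    const std::unordered_map<RouterID, RemoteRC>& rcs, rc_time since)
{
  std::vector<RemoteRC> result;
  for (const auto& [rid, rc] : rcs)
    if (rc.timestamp > since)
      result.push_back(rc);
  return result;
}
```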
- RemoteRC supplants most of RouterContact's functionality throughout the code
- Next step will be to sort out CI issues, then see if we can get rid of LocalRC (and therefore RouterContact entirely)
- includes are now sorted in a consistent, logical order; first step in an attempt to fix the tomfoolery (no relation to Tom) brought in by include-what-you-use
- shuffled around some cmake linking to simplify dependency graph
- superfluous files removed
- almost all errors have been commented out for refactor or already refactored
- committing this prior to sorting out the cmake structure
- upcoming include-what-you-use application
Whitelisting is now always-on for relays. Disabling the option is never
used and is unsupported/unmaintained (it was, in theory, meant to allow
running lokinet as a relay in a non-service-node mode, i.e. on a
completely separate network).
Confusingly, the option was enabled by the `[lokid]:enabled` config
parameter.
- Added call_get to ev.hpp to queue event loop operations w/ a return value (see the sketch after this list)
- de-mutexed NodeDB and made all operations go through the event loop. Previously, some calls to NodeDB methods (like ::put_if_newer) were wrapped in call->get's but some weren't, and all function bodies were using mutex locks
- libsodium calls streamlined and moved away from stupid typedefs
- buffer handling taken away from buffer_t and towards ustrings and strings
- lots of stuff deleted
- team is working well
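The sketch referenced above: a simplified call_get built on promise/future, showing the pattern of queuing work on the event loop thread and blocking on its result. This is a stand-in, not the actual ev.hpp implementation:

```cpp
#include <functional>
#include <future>
#include <utility>

// Simplified stand-in for the event loop; the real call() queues work to
// run on the loop's own thread (here it just runs inline as a placeholder).
struct EventLoop
{
  void call(std::function<void()> f) { f(); }

  // call_get: run `f` on the event loop thread and hand its result back to
  // the calling thread, so callers (e.g. NodeDB methods) no longer need a
  // mutex around shared state.  Non-void return types assumed for brevity.
  template <typename Callable>
  auto call_get(Callable&& f) -> decltype(f())
  {
    std::promise<decltype(f())> prom;
    auto fut = prom.get_future();
    call([&prom, func = std::forward<Callable>(f)]() mutable { prom.set_value(func()); });
    return fut.get();  // blocks until the loop thread has executed func
  }
};

// Hypothetical usage, e.g. what a de-mutexed NodeDB call might look like:
//   bool updated = loop->call_get([&] { return put_if_newer_impl(rc); });
```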
- re-implementing message handling in proper link_manager methods
- bumped version to latest main branch commit
- wired up callbacks to set RPC request stream on creation
- methods for I/O of control and data messages through link_manager
- llarp/router/router.hpp, route_poker, and platform code moved to libquic Address types
- implementing required methods in link_manager for connection establishment
- coming along nicely
- `::handle_message` is transposed (sketched below): rather than the message calling the method and taking a reference to the router, the router now has a handle_message method and takes a reference to the message
- `::EncodeBuffer` takes a string reference, to which the result of `::bt_encode()` is assigned
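A sketch of the transposition described in the last two items; the type and member layout are illustrative, not the real link_manager/router code:

```cpp
#include <string>

// Illustrative message type; real names/members differ.
struct LinkMessage
{
  std::string bt_encode() const;  // produces the bt-encoded message body

  // EncodeBuffer now assigns the bt_encode() result into a caller-supplied
  // string instead of going through a buffer_t.
  void EncodeBuffer(std::string& out) const { out = bt_encode(); }
};

// Before: msg.handle_message(router)  -- the message called into the router.
// After:  router.handle_message(msg)  -- the router owns the handler and
//         takes a reference to the message.
struct Router
{
  void handle_message(const LinkMessage& msg);
};
```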
TODO:
- set up all the callbacks for libquic
- define control message requests, responses, commands
- plug new control messages into lokinet (path creation, network state, etc)
- plug connection state changes (established, failed, closed, etc.) into lokinet
- lots of cleanup and miscellanea
-- Moved all RPCServer initialization logic to rpcserver constructor
-- Fixed config logic, fxn binding to rpc address, fxn adding rpc cats
-- router hive failed CI/CD due to an outdated reference to rpcBindAddr
-- ipc socket as default is hidden from Windows (for now)
refactored config endpoint
- added rpc call script (contrib/omq-rpc.py)
- added new fxns to .ini config stuff
- added delete .ini file functionality to config endpoint
- added edge case control for config endpoint
add commented-out line in clang-format for header reorg later
The win32 and sd_notify components provided a disjointed set of
similar high-level functionality, so we consolidate these duplicate
code paths into one that has the same lifecycle regardless of platform,
reducing the complexity of this feature.
This new component is responsible for reporting state changes to the
system layer and optionally propagating state changes requested by the
system layer (used by the Windows service) back to lokinet.
Currently (from a recent PR) we aren't pinging oxend if not active, but
that behaviour ended up being quite wrong because lokinet needs to ping
even when decommissioned or deregistered (when decommissioned we need
the ping to get commissioned again, and if not registered we need the
ping to get past the "lokinet isn't pinging" nag screen to prepare a
registration).
This considerably revises the pinging behaviour:
- We ping oxend *unless* there is a specific error with our connections
(i.e. we *should* be establishing peer connections but don't have any)
- If we do have such an error, we send a new oxend "error" ping to
report the error to oxend and get oxend to hold off on sending uptime
proofs.
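A hedged sketch of that decision; the function and parameter names are hypothetical, not actual router or RPC code:

```cpp
#include <string>

// Hypothetical wrappers around the oxend RPC pings.
void send_normal_ping_to_oxend();
void send_error_ping_to_oxend(const std::string& reason);

// Rough shape of the decision above: always ping, unless we *should* have
// peer connections but have none, in which case report the error instead so
// oxend holds off on submitting uptime proofs.
void ping_oxend(bool should_have_peers, int num_established_peers)
{
  if (should_have_peers && num_established_peers == 0)
    send_error_ping_to_oxend("no peer connections established");
  else
    send_normal_ping_to_oxend();  // even when decommissioned or unregistered
}
```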
Along the way this also changes how we handle the current node state:
instead of just tracking deregistered/decommissioned, we now track three
states:
- LooksRegistered -- which means the SN is known to the network (but not
necessarily active or fully staked)
- LooksFunded -- which means it is known *and* is fully funded, but not
necessarily active
- LooksDecommissioned -- which means it is known, funded, and not
currently active (which implies decommissioned).
The funded (or more precisely, unfunded) state is now tracked in
rc_lookup_handler in a "greenlist" -- i.e. new SNs that are so new (i.e.
"green") that they aren't even fully staked or active yet.