`deregister_peer` does exactly the same thing as `close_connection`, so just remove it.
Also removes an unnecessary loop dispatch call: we *have* to already be in the logic thread to touch the variables we access before the dispatch, so dispatching back onto the loop is redundant.
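To make the pattern concrete, here is a minimal sketch of the before/after shape of the change; the EventLoop and LinkManager names and methods are illustrative stand-ins, not the actual llarp API:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <thread>

// Minimal stand-ins for the event loop and connection state; illustrative only.
struct EventLoop
{
    std::thread::id loop_thread{std::this_thread::get_id()};
    std::deque<std::function<void()>> queue;

    bool in_event_loop() const { return std::this_thread::get_id() == loop_thread; }
    void call(std::function<void()> f) { queue.push_back(std::move(f)); }
};

struct LinkManager
{
    EventLoop& loop;

    void erase_pending(int conn_id) { (void)conn_id; /* touches loop-owned state */ }
    void do_close(int conn_id) { (void)conn_id; /* also touches loop-owned state */ }

    // Before: erase_pending() already requires being on the loop thread, so the
    // dispatch back onto that same loop is a pointless detour.
    void close_connection_before(int conn_id)
    {
        erase_pending(conn_id);
        loop.call([this, conn_id] { do_close(conn_id); });
    }

    // After: assert the threading requirement and call directly.
    void close_connection_after(int conn_id)
    {
        assert(loop.in_event_loop());
        erase_pending(conn_id);
        do_close(conn_id);
    }
};
```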
- add a config option to disable reachability testing; this is required on testnet
- reachability-testing pipeline through link_manager executes pings much as the storage server does: the connection-established hook reports successful reachability, while the connection-closed callback (with a non-default error code) reports a failed test
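As a rough illustration of how those hooks map to a reachability verdict, here is a sketch; RouterID, the hook names, and the zero-means-default error-code convention are assumptions, not the actual link_manager interface:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Illustrative sketch only; not the actual link_manager API.
using RouterID = std::string;

struct ReachabilityTester
{
    std::unordered_map<RouterID, bool> results;

    // Wired into the connection-established hook: a completed handshake is
    // itself the successful "ping".
    void on_connection_established(const RouterID& rid) { results[rid] = true; }

    // Wired into the connection-closed callback: a non-default error code
    // means the dial failed, i.e. the relay was not reachable.
    void on_connection_closed(const RouterID& rid, uint64_t error_code)
    {
        if (error_code != 0)  // 0 standing in for the default/no-error code
            results[rid] = false;
    }
};
```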
Periodically, clients will fetch the set of RouterIDs of all relays on the network. A client requests this list from a number of relays (12, currently), but because we are likely to be requesting from more relays than we want to hold edge connections to, each request is itself relayed to the target source via one of our edges. As we can't trust our edge to relay honestly, the responses are signed by the source relay (verification sketched after the TODOs below).
TODO: the responses from all (12) relays are collected and then processed together; the reconciliation of those responses is not yet implemented.
TODO: source selection for this method obviously requires sources to begin with, but this is the method by which we learn of those... bootstrapping is still a bit in-progress and will need to be finished for this.
TODO: make Router call this periodically, as with RC fetching.
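For the signature check mentioned above: since the edge relay only forwards bytes, the client verifies the source relay's signature over the response before trusting it. A minimal sketch using libsodium's detached Ed25519 verify; the function name and parameter layout are illustrative, not lokinet's actual helper:

```cpp
#include <sodium.h>
#include <string>

// The edge relay only forwards bytes, so the client checks the source relay's
// Ed25519 signature over the bt-encoded RouterID list before accepting it.
bool verify_routerid_response(
    const std::string& bt_payload,               // bt-encoded RouterID list
    const unsigned char sig[crypto_sign_BYTES],  // detached signature from the response
    const unsigned char source_pubkey[crypto_sign_PUBLICKEYBYTES])
{
    return crypto_sign_verify_detached(
               sig,
               reinterpret_cast<const unsigned char*>(bt_payload.data()),
               bt_payload.size(),
               source_pubkey)
        == 0;
}
```

A caller would bt-decode the list only after this check passes.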
This command will be called periodically by clients to maintain a list of RCs of active relay nodes. It will require another command (future commit) to fetch the RouterIDs from many nodes and reconcile them, so we have some notion of the goodness of the RCs we're getting; if we get what seems to be a bad set of RCs (a concept not yet implemented), we will choose a different relay to fetch RCs from. These are left as TODOs for now.
We will want some notion of "when did we receive it" for RCs (or RouterIDs, details tbd), tracked per-source as a means of forming a consensus/trust metric on which relays are *actually* on the network. Clients don't have a blockchain daemon to pull this from, so they have to ask many relays for the full list of relays and form a trust model on top of that (bootstrapping problem notwithstanding).
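One plausible shape for that per-source bookkeeping, purely illustrative since the details are explicitly tbd:

```cpp
#include <chrono>
#include <set>
#include <string>
#include <unordered_map>

// Hypothetical per-source record; none of these names exist in the codebase.
using RouterID = std::string;
using Clock = std::chrono::steady_clock;

struct RelayListObservation
{
    std::set<RouterID> relays;   // the RouterID set this source reported
    Clock::time_point received;  // when we received it
};

// One observation per source relay; agreement (or not) across many sources is
// the raw material for a consensus/trust metric on which relays are live.
std::unordered_map<RouterID, RelayListObservation> observations;
```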
We're removing the notion of finding/looking up a singular RC, so this gets rid of all functions which did that and replaces their usages with something sensible.
RC "lookup" is being replaced with "gimme all recently updated RCs". As
such, doing a lookup on a specific RC is going away, as is network
exploration, so a lot of what RCLookupHandler was doing will no longer
be relevant. Functionality from it which was kept has moved to NodeDB,
as it makes sense for that functionality to live where the RCs live.
change the path control message's inner-message response to take just a string, which will be a bt-encoded response with an early key for its status (sketched below). If there is a timeout, we pass a bt dict containing only that status; otherwise the response we de-onioned should have either an OK status or some other error.
change messages to use the new status key
correctly call Path::EnterState on path build response
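A sketch of the status convention, assuming oxenc (the bt-encoding library lokinet uses); the key and value spellings here are illustrative:

```cpp
#include <oxenc/bt_serialize.h>
#include <string>

using namespace std::literals;

// On timeout the caller never got a de-onioned payload, so the response is a
// bt dict carrying nothing but the status: d6:STATUS7:TIMEOUTe
std::string timeout_response()
{
    return oxenc::bt_serialize(oxenc::bt_dict{{"STATUS"s, "TIMEOUT"s}});
}
```

Because bencoded dict keys are serialized in sorted order (and ASCII uppercase sorts before lowercase), a short uppercase status key comes early, letting a reader determine success or failure before parsing the rest of the response.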
- control messages can be sent along a path
- the path owner onion-encrypts the "inner" message for each hop in the
path
- relays on the path will onion the payload in both directions, such that the terminal relay gets the plaintext "inner" message and the client gets the plaintext "response" to it (see the sketch after the TODO list)
- control messages have (mostly, see below) been changed to be invokable
either over a path or directly to a relay, as appropriate.
TODO:
- exit messages need to be looked at, so they have not yet been changed for this
- path transfer messages (traffic from client to client over 2 paths
with a shared "pivot") are not yet implemented
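A conceptual sketch of the per-hop layering referenced above, using libsodium's ChaCha20 stream cipher as a stand-in; lokinet's actual cipher choice, nonce handling, and key schedule differ in detail:

```cpp
#include <sodium.h>
#include <string>
#include <vector>

// Illustrative stand-in for a path hop's symmetric material.
struct Hop
{
    unsigned char key[crypto_stream_chacha20_KEYBYTES];      // shared with this relay
    unsigned char nonce[crypto_stream_chacha20_NONCEBYTES];  // per-message nonce
};

// The path owner applies one layer per hop; each relay strips its own layer in
// transit, so the terminal relay sees the plaintext "inner" message. For the
// response, relays add layers and the client strips them all off.
std::string onion_layer(std::string payload, const std::vector<Hop>& hops)
{
    for (const auto& hop : hops)
        crypto_stream_chacha20_xor(
            reinterpret_cast<unsigned char*>(payload.data()),
            reinterpret_cast<const unsigned char*>(payload.data()),
            payload.size(),
            hop.nonce,
            hop.key);
    return payload;
}
```

Since XOR-based stream layers commute, the same routine applies all layers on the client and, called with a single hop, strips one layer in transit.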
- RemoteRC supplants most uses of RouterContact throughout the code
- Next step will be to sort out CI issues, then see if we can get rid of LocalRC (and therefore RouterContact entirely)
- includes are now sorted in consistent, logical order; first step in an attempt to fix the tomfoolery (no relation to Tom) brought in by include-what-you-use
- shuffled around some cmake linking to simplify dependency graph
- superfluous files removed
- Added call_get to ev.hpp to queue event loop operations with a return value (see the sketch after this list)
- de-mutexed NodeDB and made all operations go through the event loop. Previously, some calls to NodeDB methods (like ::put_if_newer) were wrapped in call_get's while others weren't, and all function bodies used mutex locks
- libsodium calls streamlined and moved away from stupid typedefs
- buffer handling moved away from buffer_t and toward ustrings and strings
- lots of stuff deleted
- team is working well
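The call_get idea, roughly: run a callable on the event loop thread and block the caller until its return value is ready. `queue_work` stands in for the loop's existing dispatch primitive, and a non-void return is assumed for brevity; the real ev.hpp helper is not necessarily shaped like this:

```cpp
#include <future>

// Sketch only: EventLoop::queue_work is a hypothetical stand-in for the
// loop's dispatch primitive; not the actual ev.hpp implementation.
template <typename EventLoop, typename Func>
auto call_get(EventLoop& loop, Func f) -> decltype(f())
{
    std::promise<decltype(f())> result;
    auto fut = result.get_future();
    loop.queue_work([&result, &f] { result.set_value(f()); });
    return fut.get();  // caller blocks until the loop thread has run f
}
```

Serializing NodeDB operations through the loop this way is what lets the per-method mutex locks go away.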
- re-implementing message handling in proper link_manager methods
- bumped version to latest main branch commit
- wired up callbacks to set RPC request stream on creation
- methods for I/O of control and data messages through link_manager
- llarp/router/router.hpp, route_poker, and platform code moved to libquic Address types
- implementing required methods in link_manager for connection establishment
- coming along nicely
TODO:
- set up all the callbacks for libquic
- define control message requests, responses, commands
- plug new control messages into lokinet (path creation, network state, etc)
- plug connection state changes (established, failed, closed, etc.) into lokinet
- lots of cleanup and miscellanea