- includes are now sorted in a consistent, logical order; first step in an attempt to fix the tomfoolery (no relation to Tom) brought in by include-what-you-use
- shuffled around some cmake linking to simplify dependency graph
- superfluous files removed
- almost all errors have either been commented out pending refactor or have already been refactored
- committing this prior to sorting out the cmake structure
- upcoming include-what-you-use application
Whitelisting is now always-on for relays. The option to disable it was never
used and is unsupported/unmaintained (it existed, in theory, to allow running
lokinet as a relay in a non-service-node mode, i.e. on a completely separate
network).
Confusingly, the option was enabled by the `[lokid]:enabled` config
parameter.
- Added call_get to ev.hpp to queue event loop operations w/ a return value (rough sketch after this list)
- de-mutexed NodeDB and routed all of its operations through the event loop. Previously, some calls to NodeDB methods (like ::put_if_newer) were wrapped in call->get's and some weren't, while every function body still took a mutex lock
- libsodium calls streamlined and moved away from stupid typedefs
- buffer handling taken away from buffer_t and towards ustrings and strings
- lots of stuff deleted
- team is working well
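A rough sketch of the call_get idea (the loop type and call_soon method here are illustrative stand-ins, not the actual ev.hpp interface):

```cpp
#include <functional>
#include <future>
#include <utility>

struct EventLoop
{
    // stand-in: a real loop would queue f onto its own thread; here we just
    // invoke it inline so the sketch stays runnable
    void call_soon(std::function<void()> f) { f(); }
};

// queue `f` on the event loop and block the calling thread until its return
// value is ready (void-returning callables are not handled in this sketch)
template <typename Callable>
auto
call_get(EventLoop& loop, Callable&& f) -> decltype(f())
{
    std::promise<decltype(f())> result;
    auto fut = result.get_future();
    loop.call_soon([&result, func = std::forward<Callable>(f)]() mutable {
        result.set_value(func());  // runs on the loop, publishes the value
    });
    return fut.get();  // caller blocks here until the loop has run func
}

// usage: int n = call_get(loop, [] { return 42; });
```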
- re-implementing message handling in proper link_manager methods
- bumped version to latest main branch commit
- wired up callbacks to set RPC request stream on creation
- methods for I/O of control and data messages through link_manager
- llarp/router/router.hpp, route_poker, and platform code moved to libquic Address types
- implementing required methods in link_manager for connection establishment
- coming along nicely
- `::handle_message` is transposed; rather than the message implementing the method and taking a reference to the router, the router now has a handle_message method that takes a reference to the message (see the sketch after this list)
- `::EndcodeBuffer` takes a string reference, to which the result of `::bt_encode()` is assigned
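Illustrative sketch of the transposed shape (types and members here are stand-ins, not the actual lokinet classes):

```cpp
#include <string_view>

struct Message
{
    std::string_view payload;
};

// Before, the message called a method on itself and took a reference to the
// router; after the transposition the router owns the logic and the message
// stays a plain value type.
struct Router
{
    bool
    handle_message(const Message& msg)
    {
        // dispatch on the message contents here
        return not msg.payload.empty();
    }
};
```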
TODO:
- set up all the callbacks for libquic
- define control message requests, responses, commands
- plug new control messages into lokinet (path creation, network state, etc)
- plug connection state changes (established, failed, closed, etc.) into lokinet
- lots of cleanup and miscellanea
refactored config endpoint
- added rpc call script (contrib/omq-rpc.py)
- added new fxns to .ini config stuff
- added delete .ini file functionality to config endpoint
- added edge case control for config endpoint
add commented-out line in clang-format for header reorg later
* Updated RpcServer Initialization and Logic
-- Moved all RpcServer initialization logic into the RpcServer constructor
-- Fixed config logic, the fxn binding to the rpc address, and the fxn adding rpc categories
-- Fixed router hive CI/CD failure caused by an outdated reference to rpcBindAddr
-- ipc socket as the default is hidden from windows (for now)
the win32 and sd_notify components provided a disjointed set of similar
high-level functionality, so we consolidate these duplicate code paths into
one component with the same lifecycle regardless of platform, reducing the
complexity of this feature.
this new component is responsible for reporting state changes to the system
layer and, optionally, propagating state changes requested by the system
layer back to lokinet (used by the windows service).
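as a rough illustration only (names hypothetical, not the actual component's interface), the consolidated piece boils down to something like:

```cpp
#include <functional>

// one interface that each platform backend (sd_notify on linux, the service
// control manager on windows) implements, so the router core has a single
// lifecycle to report into
struct SysServiceManager
{
    virtual ~SysServiceManager() = default;

    // lokinet -> system layer: report that we are running (or not)
    virtual void report_changed_state(bool running) = 0;

    // system layer -> lokinet: e.g. a stop request from the windows service
    // manager; the router installs this handler
    std::function<void()> on_stop_requested;
};
```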
Currently (from a recent PR) we aren't pinging oxend if not active, but
that behaviour ended up being quite wrong because lokinet needs to ping
even when decommissioned or deregistered (when decommissioned we need
the ping to get commissioned again, and if not registered we need the
ping to get past the "lokinet isn't pinging" nag screen to prepare a
registration).
This considerably revises the pinging behaviour:
- We ping oxend *unless* there is a specific error with our connections
(i.e. we *should* be establishing peer connections but don't have any)
- If we do have such an error, we send a new oxend "error" ping to
report the error to oxend and get oxend to hold off on sending uptime
proofs.
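Roughly, the per-interval decision now looks like this (a sketch of the logic only, not the actual router code):

```cpp
// what kind of ping to send to oxend this interval
enum class PingKind
{
    Normal,  // everything looks fine (including when decommissioned/deregistered)
    Error,   // we should have peer connections but have none: report it so
             // oxend holds off on sending uptime proofs
};

inline PingKind
choose_ping(bool should_have_peer_connections, bool have_any_peer_connections)
{
    // we always ping; the only question is whether it is a normal ping or an
    // error ping
    if (should_have_peer_connections and not have_any_peer_connections)
        return PingKind::Error;
    return PingKind::Normal;
}
```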
Along the way this also changes how we handle the current node state:
instead of just tracking deregistered/decommissioned, we now track three
states:
- LooksRegistered -- which means the SN is known to the network (but not
necessarily active or fully staked)
- LooksFunded -- which means it is known *and* is fully funded, but not
necessarily active
- LooksDecommissioned -- which means it is known, funded, and not
currently active (which implies decommissioned).
The funded (or more precisely, unfunded) state is now tracked in
rc_lookup_handler in a "greenlist" -- SNs so new (i.e. "green") that they
aren't even fully staked or active yet.
If running as a service node, we ping core on a regular interval to inform it
we're running and in a good state. If we're an active (not decommissioned or
deregistered) service node but have too few peers, and thus are not actually
connected to lokinet, we skip that ping so core doesn't think we're ok.
* wintun vpn platform for windows
* bundle config snippets into nsis installer for exit node, keyfile persisting, reduced hops mode.
* use wintun for vpn platform
* isolate all windows platform specific code into their own compilation units and libraries
* split up internal libraries into more specific components
* rename liblokinet.a target to liblokinet-amalgum.a to eliminate ambiguity with liblokinet.so
* DNS platform for win32
* rename llarp/ev/ev_libuv.{c,h}pp to llarp/ev/libuv.{c,h}pp as the old name was idiotic
* split up net platform into win32 and posix specific compilation units
* rename lokinet_init.c to easter_eggs.cpp as that is what they are for and it does not need to be a c compilation target
* add cmake option STRIP_SYMBOLS for separating out debug symbols for windows builds
* intercept dns traffic on all interfaces on windows using windivert and feed it into lokinet
we want to be able to have multiple locally bound dns sockets in lokinet, so
i restructured most of the dns subsystem to make this easier.
specifically, the dns subsystem now has a new structure:
* dns::QueryJob_Base
base type for holding a dns query and response with virtual methods
in charge of sending a reply to whoever requested.
* dns::PacketSource_Base
base type for reading and writing dns messages to and from wherever they came from
* dns::Resolver_Base
base type for filtering and handling of dns messages asynchronously.
* dns::Server
contextualized per-endpoint dns object, responsible for all dns related isms.
this change hides all implementation details of all of the dns components.
adds some more helper functions for parsing dns and dealing with OwnedBuffer.
overall dns becomes less of a pain with this new structure. probably.
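a rough sketch of how these pieces relate (method names and signatures are guesses at the shape, not the actual llarp::dns interfaces):

```cpp
#include <memory>
#include <vector>

struct Message {};   // a parsed dns query/response (stand-in)
struct SockAddr {};  // where a packet came from / goes to (stand-in)

// holds one dns query and its eventual response; knows how to reply to
// whoever asked
struct QueryJob_Base
{
    virtual ~QueryJob_Base() = default;
    virtual void SendReply(const Message& response) = 0;
};

// reads and writes raw dns packets to/from wherever they originate
// (a bound udp socket, a packet capture, etc.)
struct PacketSource_Base
{
    virtual ~PacketSource_Base() = default;
    virtual void SendTo(const SockAddr& to, const Message& msg) = 0;
};

// filters and handles dns messages asynchronously; returns true if it
// consumed the query
struct Resolver_Base
{
    virtual ~Resolver_Base() = default;
    virtual bool MaybeHandleQuery(std::shared_ptr<QueryJob_Base> query) = 0;
};

// per-endpoint dns context owning the packet sources and the resolver chain
struct Server
{
    std::vector<std::shared_ptr<PacketSource_Base>> packet_sources;
    std::vector<std::shared_ptr<Resolver_Base>> resolvers;
};
```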
* immediately poke routes when we are told to use an exit so that packets get pushed, which causes the exit path to be established
* fix up cmake oddity in nsis section
* not gossip our rc
* not explore the network to prevent outbound session attempts
* not establish sessions to other service nodes
* close all open sessions we have to tell clients we don't want them
* catch exceptions flushing peerdb in disk thread
* don't connect out to non allowed routers
* simplify logic in RCLookupHandler::RemoteIsAllowed()
* add HaveReceivedWhitelist to I_RCLookupHandler base type
* add LooksDeregistered to Router type that tells us if we think we are deregistered
* don't allow building paths over us if we are deregistered
* add srv records in RCs if we have any
* add mechanism to add SRV records for plainquic exposed ports
* resign and republish rc or introset on srv record changes
All #ifndef guards on headers have been removed, I think,
in favor of #pragma once
Headers are now included as `#include "filename"` if the included file
resides in the same directory as the file including it, or any
subdirectory therein. Otherwise they are included as
`#include <project/top/dir/relative/path/filename>`
The above does not include system/os headers.
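For example, a header following this convention might look like (paths purely illustrative):

```cpp
// llarp/dns/server.hpp
#pragma once                      // instead of an #ifndef/#define guard

#include "platform.hpp"           // same directory (or a subdirectory of it)

#include <llarp/util/buffer.hpp>  // elsewhere in the tree: relative to the project top dir

#include <string>                 // system/os headers are unaffected
```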