// lokinet/llarp/service/endpoint.cpp


#include <chrono>
#include <memory>
#include "endpoint.hpp"
#include <llarp/dht/context.hpp>
#include <llarp/dht/key.hpp>
#include <llarp/dht/messages/findintro.hpp>
#include <llarp/dht/messages/findname.hpp>
#include <llarp/dht/messages/findrouter.hpp>
#include <llarp/dht/messages/gotintro.hpp>
#include <llarp/dht/messages/gotname.hpp>
#include <llarp/dht/messages/gotrouter.hpp>
#include <llarp/dht/messages/pubintro.hpp>
#include <llarp/nodedb.hpp>
#include <llarp/profiling.hpp>
#include <llarp/router/abstractrouter.hpp>
#include <llarp/routing/dht_message.hpp>
#include <llarp/routing/path_transfer_message.hpp>
#include "endpoint_state.hpp"
#include "endpoint_util.hpp"
#include "hidden_service_address_lookup.hpp"
#include "net/ip.hpp"
#include "outbound_context.hpp"
#include "protocol.hpp"
#include "service/info.hpp"
#include "service/protocol_type.hpp"
#include <llarp/util/str.hpp>
#include <llarp/util/buffer.hpp>
#include <llarp/util/meta/memfn.hpp>
#include <llarp/link/link_manager.hpp>
#include <llarp/tooling/dht_event.hpp>
#include <llarp/quic/tunnel.hpp>
#include <llarp/util/priority_queue.hpp>
#include <optional>
#include <utility>
#include <llarp/quic/server.hpp>
#include <uvw.hpp>
#include <variant>
namespace llarp
{
namespace service
{
static auto logcat = log::Cat("endpoint");
Endpoint::Endpoint(AbstractRouter* r, Context* parent)
: path::Builder{r, 3, path::default_len}
, context{parent}
, m_InboundTrafficQueue{512}
, m_SendQueue{512}
, m_RecvQueue{512}
, m_IntrosetLookupFilter{5s}
{
m_state = std::make_unique<EndpointState>();
m_state->m_Router = r;
m_state->m_Name = "endpoint";
m_RecvQueue.enable();
if (Loop()->MaybeGetUVWLoop())
m_quic = std::make_unique<quic::TunnelManager>(*this);
}
bool
Endpoint::Configure(const NetworkConfig& conf, [[maybe_unused]] const DnsConfig& dnsConf)
{
if (conf.m_Paths.has_value())
numDesiredPaths = *conf.m_Paths;
if (conf.m_Hops.has_value())
numHops = *conf.m_Hops;
conf.m_ExitMap.ForEachEntry(
[&](const IPRange& range, const service::Address& addr) { MapExitRange(range, addr); });
for (auto [exit, auth] : conf.m_ExitAuths)
{
SetAuthInfoForEndpoint(exit, auth);
}
conf.m_LNSExitMap.ForEachEntry([&](const IPRange& range, const std::string& name) {
std::optional<AuthInfo> auth;
const auto itr = conf.m_LNSExitAuths.find(name);
if (itr != conf.m_LNSExitAuths.end())
auth = itr->second;
m_StartupLNSMappings[name] = std::make_pair(range, auth);
});
return m_state->Configure(conf);
}
bool
Endpoint::HasPendingPathToService(const Address& addr) const
{
return m_state->m_PendingServiceLookups.find(addr) != m_state->m_PendingServiceLookups.end();
}
void
Endpoint::RegenAndPublishIntroSet()
{
const auto now = llarp::time_now_ms();
m_LastIntrosetRegenAttempt = now;
std::set<Introduction, CompareIntroTimestamp> intros;
if (const auto maybe =
GetCurrentIntroductionsWithFilter([now](const service::Introduction& intro) -> bool {
return not intro.ExpiresSoon(now, path::intro_stale_threshold);
}))
{
intros.insert(maybe->begin(), maybe->end());
}
else
{
LogWarn(
"could not publish descriptors for endpoint ",
Name(),
" because we couldn't get enough valid introductions");
BuildOne();
return;
}
introSet().supportedProtocols.clear();
// add supported ethertypes
if (HasIfAddr())
{
if (IPRange::V4MappedRange().Contains(GetIfAddr()))
{
introSet().supportedProtocols.push_back(ProtocolType::TrafficV4);
}
else
{
introSet().supportedProtocols.push_back(ProtocolType::TrafficV6);
}
// exit related stuff
if (m_state->m_ExitEnabled)
{
introSet().supportedProtocols.push_back(ProtocolType::Exit);
introSet().exitTrafficPolicy = GetExitPolicy();
introSet().ownedRanges = GetOwnedRanges();
}
}
// add quic ethertype if we have listeners set up
if (auto* quic = GetQUICTunnel())
{
if (quic->hasListeners())
introSet().supportedProtocols.push_back(ProtocolType::QUIC);
}
introSet().intros.clear();
for (auto& intro : intros)
{
if (introSet().intros.size() < numDesiredPaths)
introSet().intros.emplace_back(std::move(intro));
}
if (introSet().intros.empty())
{
LogWarn("not enough intros to publish introset for ", Name());
if (ShouldBuildMore(now))
ManualRebuild(1);
return;
}
auto maybe = m_Identity.EncryptAndSignIntroSet(introSet(), now);
if (not maybe)
{
LogWarn("failed to generate introset for endpoint ", Name());
return;
}
if (PublishIntroSet(*maybe, Router()))
{
LogInfo("(re)publishing introset for endpoint ", Name());
}
else
{
LogWarn("failed to publish intro set for endpoint ", Name());
}
}
bool
Endpoint::IsReady() const
{
const auto now = Now();
if (introSet().intros.empty())
return false;
if (introSet().IsExpired(now))
return false;
return true;
}
bool
Endpoint::HasPendingRouterLookup(const RouterID remote) const
{
const auto& routers = m_state->m_PendingRouters;
return routers.find(remote) != routers.end();
}
std::optional<std::variant<Address, RouterID>>
Endpoint::GetEndpointWithConvoTag(ConvoTag tag) const
{
auto itr = Sessions().find(tag);
if (itr != Sessions().end())
{
return itr->second.remote.Addr();
}
for (const auto& item : m_state->m_SNodeSessions)
{
if (const auto maybe = item.second->CurrentPath())
{
if (ConvoTag{maybe->as_array()} == tag)
return item.first;
}
}
return std::nullopt;
}
void
Endpoint::LookupServiceAsync(
std::string name,
std::string service,
std::function<void(std::vector<dns::SRVData>)> resultHandler)
{
// handles when we aligned to a loki address
auto handleGotPathToService = [resultHandler, service, this](auto addr) {
// we could get this info before we have a path to them, but we wait until
// a path is built so the caller can reach them immediately after the response
const auto& container = m_state->m_RemoteSessions;
if (auto itr = container.find(addr); itr != container.end())
{
// extract the matching SRV records from this session's introset
resultHandler(itr->second->GetCurrentIntroSet().GetMatchingSRVRecords(service));
return;
}
resultHandler({});
};
// handles when we resolved a .snode
auto handleResolvedSNodeName = [resultHandler, nodedb = Router()->nodedb()](auto router_id) {
std::vector<dns::SRVData> result{};
if (auto maybe_rc = nodedb->Get(router_id))
{
result = maybe_rc->srvRecords;
}
resultHandler(std::move(result));
};
// handles when we got a path to a remote thing
auto handleGotPathTo = [handleGotPathToService, handleResolvedSNodeName, resultHandler](
auto maybe_tag, auto address) {
if (not maybe_tag)
{
resultHandler({});
return;
}
if (auto* addr = std::get_if<Address>(&address))
{
// .loki case
handleGotPathToService(*addr);
}
else if (auto* router_id = std::get_if<RouterID>(&address))
{
// .snode case
handleResolvedSNodeName(*router_id);
}
else
{
// fallback case
// XXX: should never happen, but handle it anyway
resultHandler({});
}
};
// handles when we know a long address of a remote resource
auto handleGotAddress = [resultHandler, handleGotPathTo, this](auto address) {
// we will attempt a build to whatever we looked up
const auto result = EnsurePathTo(
address,
[address, handleGotPathTo](auto maybe_tag) { handleGotPathTo(maybe_tag, address); },
PathAlignmentTimeout());
// on path build start fail short circuit
if (not result)
resultHandler({});
};
// look up this name async and start the entire chain of events
LookupNameAsync(name, [handleGotAddress, resultHandler](auto maybe_addr) {
if (maybe_addr)
{
handleGotAddress(*maybe_addr);
}
else
{
resultHandler({});
}
});
}
bool
Endpoint::IntrosetIsStale() const
{
return introSet().HasExpiredIntros(Now());
}
util::StatusObject
Endpoint::ExtractStatus() const
{
auto obj = path::Builder::ExtractStatus();
obj["exitMap"] = m_ExitMap.ExtractStatus();
obj["identity"] = m_Identity.pub.Addr().ToString();
obj["networkReady"] = ReadyForNetwork();
util::StatusObject authCodes;
for (const auto& [service, info] : m_RemoteAuthInfos)
{
authCodes[service.ToString()] = info.token;
}
obj["authCodes"] = authCodes;
return m_state->ExtractStatus(obj);
}
void
Endpoint::Tick(llarp_time_t)
{
const auto now = llarp::time_now_ms();
path::Builder::Tick(now);
// publish descriptors
if (ShouldPublishDescriptors(now))
{
RegenAndPublishIntroSet();
}
// decay introset lookup filter
m_IntrosetLookupFilter.Decay(now);
// expire name cache
m_state->nameCache.Decay(now);
// expire snode sessions
EndpointUtil::ExpireSNodeSessions(now, m_state->m_SNodeSessions);
// expire pending tx
EndpointUtil::ExpirePendingTx(now, m_state->m_PendingLookups);
// expire pending router lookups
EndpointUtil::ExpirePendingRouterLookups(now, m_state->m_PendingRouters);
// deregister dead sessions
EndpointUtil::DeregisterDeadSessions(now, m_state->m_DeadSessions);
// tick remote sessions
EndpointUtil::TickRemoteSessions(
now, m_state->m_RemoteSessions, m_state->m_DeadSessions, Sessions());
// expire convotags
EndpointUtil::ExpireConvoSessions(now, Sessions());
if (NumInStatus(path::ePathEstablished) > 1)
{
for (const auto& item : m_StartupLNSMappings)
{
LookupNameAsync(
item.first, [name = item.first, info = item.second, this](auto maybe_addr) {
if (maybe_addr.has_value())
{
const auto maybe_range = info.first;
const auto maybe_auth = info.second;
m_StartupLNSMappings.erase(name);
if (auto* addr = std::get_if<service::Address>(&*maybe_addr))
{
if (maybe_range.has_value())
m_ExitMap.Insert(*maybe_range, *addr);
if (maybe_auth.has_value())
SetAuthInfoForEndpoint(*addr, *maybe_auth);
}
}
});
}
}
}
bool
Endpoint::Stop()
{
// stop remote sessions
log::debug(logcat, "Endpoint stopping remote sessions.");
EndpointUtil::StopRemoteSessions(m_state->m_RemoteSessions);
// stop snode sessions
log::debug(logcat, "Endpoint stopping snode sessions.");
EndpointUtil::StopSnodeSessions(m_state->m_SNodeSessions);
log::debug(logcat, "Endpoint stopping its path builder.");
return path::Builder::Stop();
}
uint64_t
Endpoint::GenTXID()
{
uint64_t txid = randint();
const auto& lookups = m_state->m_PendingLookups;
while (lookups.find(txid) != lookups.end())
++txid;
return txid;
}
std::string
Endpoint::Name() const
{
return m_state->m_Name + ":" + m_Identity.pub.Name();
}
void
Endpoint::PutLookup(IServiceLookup* lookup, uint64_t txid)
{
m_state->m_PendingLookups.emplace(txid, std::unique_ptr<IServiceLookup>(lookup));
}
bool
Endpoint::HandleGotIntroMessage(dht::GotIntroMessage_constptr msg)
{
std::set<EncryptedIntroSet> remote;
for (const auto& introset : msg->found)
{
if (not introset.Verify(Now()))
{
LogError(Name(), " got invalid introset");
return false;
}
remote.insert(introset);
}
auto& lookups = m_state->m_PendingLookups;
auto itr = lookups.find(msg->txid);
if (itr == lookups.end())
{
LogWarn(
"invalid lookup response for hidden service endpoint ", Name(), " txid=", msg->txid);
return true;
}
std::unique_ptr<IServiceLookup> lookup = std::move(itr->second);
lookups.erase(itr);
lookup->HandleIntrosetResponse(remote);
return true;
}
bool
Endpoint::HasInboundConvo(const Address& addr) const
{
for (const auto& item : Sessions())
{
if (item.second.remote.Addr() == addr and item.second.inbound)
return true;
}
return false;
}
bool
Endpoint::HasOutboundConvo(const Address& addr) const
{
for (const auto& item : Sessions())
{
if (item.second.remote.Addr() == addr && not item.second.inbound)
return true;
}
return false;
}
void
Endpoint::PutSenderFor(const ConvoTag& tag, const ServiceInfo& info, bool inbound)
{
if (info.Addr().IsZero())
{
LogError(Name(), " cannot put invalid service info ", info, " T=", tag);
return;
}
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
{
if (WantsOutboundSession(info.Addr()) and inbound)
{
LogWarn(
Name(),
" not adding sender for ",
info.Addr(),
" session is inbound and we want outbound T=",
tag);
return;
}
itr = Sessions().emplace(tag, Session{}).first;
itr->second.inbound = inbound;
itr->second.remote = info;
}
}
size_t
Endpoint::RemoveAllConvoTagsFor(service::Address remote)
{
size_t removed = 0;
auto& sessions = Sessions();
auto itr = sessions.begin();
while (itr != sessions.end())
{
if (itr->second.remote.Addr() == remote)
{
itr = sessions.erase(itr);
removed++;
}
else
++itr;
}
return removed;
}
bool
Endpoint::GetSenderFor(const ConvoTag& tag, ServiceInfo& si) const
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
return false;
si = itr->second.remote;
si.UpdateAddr();
return true;
}
void
Endpoint::PutIntroFor(const ConvoTag& tag, const Introduction& intro)
{
auto& s = Sessions()[tag];
s.intro = intro;
}
bool
Endpoint::GetIntroFor(const ConvoTag& tag, Introduction& intro) const
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
return false;
intro = itr->second.intro;
return true;
}
void
Endpoint::PutReplyIntroFor(const ConvoTag& tag, const Introduction& intro)
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
{
return;
}
itr->second.replyIntro = intro;
}
bool
Endpoint::GetReplyIntroFor(const ConvoTag& tag, Introduction& intro) const
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
return false;
intro = itr->second.replyIntro;
return true;
}
bool
Endpoint::GetConvoTagsForService(const Address& addr, std::set<ConvoTag>& tags) const
{
return EndpointUtil::GetConvoTagsForService(Sessions(), addr, tags);
}
bool
Endpoint::GetCachedSessionKeyFor(const ConvoTag& tag, SharedSecret& secret) const
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
return false;
secret = itr->second.sharedKey;
return true;
}
void
Endpoint::PutCachedSessionKeyFor(const ConvoTag& tag, const SharedSecret& k)
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
{
itr = Sessions().emplace(tag, Session{}).first;
}
itr->second.sharedKey = k;
}
void
Endpoint::ConvoTagTX(const ConvoTag& tag)
{
if (Sessions().count(tag))
Sessions()[tag].TX();
}
void
Endpoint::ConvoTagRX(const ConvoTag& tag)
{
if (Sessions().count(tag))
Sessions()[tag].RX();
}
bool
Endpoint::LoadKeyFile()
{
const auto& keyfile = m_state->m_Keyfile;
if (!keyfile.empty())
{
m_Identity.EnsureKeys(keyfile, Router()->keyManager()->needBackup());
}
else
{
m_Identity.RegenerateKeys();
}
return true;
}
bool
Endpoint::Start()
{
// how can I tell if m_Identity isn't loaded?
if (!m_DataHandler)
{
m_DataHandler = this;
}
// this does network isolation
while (m_state->m_OnInit.size())
{
if (m_state->m_OnInit.front()())
m_state->m_OnInit.pop_front();
else
{
2019-04-21 15:40:32 +00:00
LogWarn("Can't call init of network isolation");
return false;
}
}
return true;
}
// Keep this here (rather than the header) so that we don't need to include endpoint_state.hpp
// in endpoint.hpp for the unique_ptr member destructor.
Endpoint::~Endpoint() = default;
bool
Endpoint::PublishIntroSet(const EncryptedIntroSet& introset, AbstractRouter* r)
{
const auto paths = GetManyPathsWithUniqueEndpoints(
this,
llarp::dht::IntroSetRelayRedundancy,
dht::Key_t{introset.derivedSigningKey.as_array()});
if (paths.size() != llarp::dht::IntroSetRelayRedundancy)
{
LogWarn(
"Cannot publish intro set because we only have ",
paths.size(),
" paths, but need ",
llarp::dht::IntroSetRelayRedundancy);
return false;
}
// do publishing for each path selected
size_t published = 0;
for (const auto& path : paths)
{
for (size_t i = 0; i < llarp::dht::IntroSetRequestsPerRelay; ++i)
{
r->NotifyRouterEvent<tooling::PubIntroSentEvent>(
r->pubkey(),
llarp::dht::Key_t{introset.derivedSigningKey.as_array()},
RouterID(path->hops[path->hops.size() - 1].rc.pubkey),
published);
if (PublishIntroSetVia(introset, r, path, published))
published++;
}
}
if (published != llarp::dht::IntroSetStorageRedundancy)
LogWarn(
"Publish introset failed: could only publish ",
published,
" copies but wanted ",
llarp::dht::IntroSetStorageRedundancy);
return published == llarp::dht::IntroSetStorageRedundancy;
}
struct PublishIntroSetJob : public IServiceLookup
{
EncryptedIntroSet m_IntroSet;
Endpoint* m_Endpoint;
uint64_t m_relayOrder;
PublishIntroSetJob(
Endpoint* parent,
uint64_t id,
EncryptedIntroSet introset,
uint64_t relayOrder,
llarp_time_t timeout)
: IServiceLookup(parent, id, "PublishIntroSet", timeout)
, m_IntroSet(std::move(introset))
, m_Endpoint(parent)
, m_relayOrder(relayOrder)
{}
std::shared_ptr<routing::IMessage>
BuildRequestMessage() override
{
auto msg = std::make_shared<routing::DHTMessage>();
msg->M.emplace_back(
std::make_unique<dht::PublishIntroMessage>(m_IntroSet, txid, true, m_relayOrder));
return msg;
}
bool
HandleIntrosetResponse(const std::set<EncryptedIntroSet>& response) override
{
if (not response.empty())
m_Endpoint->IntroSetPublished();
else
m_Endpoint->IntroSetPublishFail();
return true;
}
};
void
Endpoint::IntroSetPublishFail()
{
auto now = Now();
if (ShouldPublishDescriptors(now))
{
RegenAndPublishIntroSet();
}
else if (NumInStatus(path::ePathEstablished) < 3)
{
if (introSet().HasExpiredIntros(now))
ManualRebuild(1);
}
}
size_t
Endpoint::UniqueEndpoints() const
{
return m_state->m_RemoteSessions.size() + m_state->m_SNodeSessions.size();
}
constexpr auto PublishIntrosetTimeout = 20s;
bool
Endpoint::PublishIntroSetVia(
const EncryptedIntroSet& introset,
AbstractRouter* r,
path::Path_ptr path,
uint64_t relayOrder)
{
auto job =
new PublishIntroSetJob(this, GenTXID(), introset, relayOrder, PublishIntrosetTimeout);
if (job->SendRequestViaPath(path, r))
{
m_state->m_LastPublishAttempt = Now();
return true;
}
return false;
}
void
Endpoint::ResetInternalState()
{
path::Builder::ResetInternalState();
static auto resetState = [](auto& container, auto getter) {
std::for_each(container.begin(), container.end(), [getter](auto& item) {
getter(item)->ResetInternalState();
});
};
resetState(m_state->m_RemoteSessions, [](const auto& item) { return item.second; });
resetState(m_state->m_SNodeSessions, [](const auto& item) { return item.second; });
}
bool
Endpoint::ShouldPublishDescriptors(llarp_time_t now) const
{
if (not m_PublishIntroSet)
return false;
const auto lastEventAt = std::max(m_state->m_LastPublishAttempt, m_state->m_LastPublish);
const auto next_pub = lastEventAt
+ (m_state->m_IntroSet.HasStaleIntros(now, path::intro_stale_threshold)
? IntrosetPublishRetryCooldown
: IntrosetPublishInterval);
return now >= next_pub;
}
void
Endpoint::IntroSetPublished()
{
const auto now = Now();
// We usually get 4 confirmations back (one for each DHT location), which
// is noisy: suppress this log message if we already had a confirmation in
// the last second.
if (m_state->m_LastPublish < now - 1s)
LogInfo(Name(), " IntroSet publish confirmed");
else
LogDebug(Name(), " Additional IntroSet publish confirmed");
m_state->m_LastPublish = now;
}
std::optional<std::vector<RouterContact>>
Endpoint::GetHopsForBuild()
{
std::unordered_set<RouterID> exclude;
ForEachPath([&exclude](auto path) { exclude.insert(path->Endpoint()); });
const auto maybe =
m_router->nodedb()->GetRandom([exclude, r = m_router](const auto& rc) -> bool {
return exclude.count(rc.pubkey) == 0
and not r->routerProfiling().IsBadForPath(rc.pubkey);
});
if (not maybe.has_value())
return std::nullopt;
return GetHopsForBuildWithEndpoint(maybe->pubkey);
}
std::optional<std::vector<RouterContact>>
Endpoint::GetHopsForBuildWithEndpoint(RouterID endpoint)
{
return path::Builder::GetHopsAlignedToForBuild(endpoint, SnodeBlacklist());
}
void
Endpoint::PathBuildStarted(path::Path_ptr path)
{
path::Builder::PathBuildStarted(path);
}
constexpr auto MaxOutboundContextPerRemote = 1;
void
Endpoint::PutNewOutboundContext(const service::IntroSet& introset, llarp_time_t left)
{
const Address addr{introset.addressKeys.Addr()};
auto& remoteSessions = m_state->m_RemoteSessions;
if (remoteSessions.count(addr) < MaxOutboundContextPerRemote)
{
remoteSessions.emplace(addr, std::make_shared<OutboundContext>(introset, this));
LogInfo("Created new outbound context for ", addr.ToString());
}
auto sessionRange = remoteSessions.equal_range(addr);
for (auto itr = sessionRange.first; itr != sessionRange.second; ++itr)
{
itr->second->AddReadyHook(
[addr, this](auto session) { InformPathToService(addr, session); }, left);
}
}
void
Endpoint::HandleVerifyGotRouter(dht::GotRouterMessage_constptr msg, RouterID id, bool valid)
{
auto& pendingRouters = m_state->m_PendingRouters;
auto itr = pendingRouters.find(id);
if (itr != pendingRouters.end())
{
if (valid)
itr->second.InformResult(msg->foundRCs);
else
itr->second.InformResult({});
pendingRouters.erase(itr);
}
}
bool
Endpoint::HandleGotRouterMessage(dht::GotRouterMessage_constptr msg)
{
if (not msg->foundRCs.empty())
{
for (auto& rc : msg->foundRCs)
{
Router()->QueueWork([this, rc, msg]() mutable {
bool valid = rc.Verify(llarp::time_now_ms());
Router()->loop()->call([this, valid, rc = std::move(rc), msg] {
Router()->nodedb()->PutIfNewer(rc);
HandleVerifyGotRouter(msg, rc.pubkey, valid);
});
});
}
}
else
{
auto& routers = m_state->m_PendingRouters;
auto itr = routers.begin();
while (itr != routers.end())
{
if (itr->second.txid == msg->txid)
{
itr->second.InformResult({});
itr = routers.erase(itr);
}
else
++itr;
}
}
return true;
}
struct LookupNameJob : public IServiceLookup
{
std::function<void(std::optional<Address>)> handler;
ShortHash namehash;
LookupNameJob(
Endpoint* parent,
uint64_t id,
std::string lnsName,
std::function<void(std::optional<Address>)> resultHandler)
: IServiceLookup(parent, id, lnsName), handler(resultHandler)
{
CryptoManager::instance()->shorthash(
namehash, llarp_buffer_t(lnsName.c_str(), lnsName.size()));
}
std::shared_ptr<routing::IMessage>
BuildRequestMessage() override
{
auto msg = std::make_shared<routing::DHTMessage>();
msg->M.emplace_back(std::make_unique<dht::FindNameMessage>(
dht::Key_t{}, dht::Key_t{namehash.as_array()}, txid));
return msg;
}
bool
HandleNameResponse(std::optional<Address> addr) override
{
handler(addr);
return true;
}
void
HandleTimeout() override
{
HandleNameResponse(std::nullopt);
}
};
bool
Endpoint::HasExit() const
{
for (const auto& [name, info] : m_StartupLNSMappings)
{
if (info.first.has_value())
return true;
}
return not m_ExitMap.Empty();
}
path::Path::UniqueEndpointSet_t
Endpoint::GetUniqueEndpointsForLookup() const
{
path::Path::UniqueEndpointSet_t paths;
ForEachPath([&paths](auto path) {
if (path and path->IsReady())
paths.insert(path);
});
return paths;
}
bool
Endpoint::ReadyForNetwork() const
{
return IsReady() and ReadyToDoLookup(GetUniqueEndpointsForLookup().size());
}
bool
Endpoint::ReadyToDoLookup(size_t num_paths) const
{
// Currently just checks the number of paths, but could do more checks in the future.
return num_paths >= MIN_ENDPOINTS_FOR_LNS_LOOKUP;
}
void
Endpoint::LookupNameAsync(
std::string name,
std::function<void(std::optional<std::variant<Address, RouterID>>)> handler)
{
if (not NameIsValid(name))
{
handler(ParseAddress(name));
return;
}
auto& cache = m_state->nameCache;
const auto maybe = cache.Get(name);
if (maybe.has_value())
{
handler(maybe);
return;
}
LogInfo(Name(), " looking up LNS name: ", name);
auto paths = GetUniqueEndpointsForLookup();
// not enough paths
if (not ReadyToDoLookup(paths.size()))
{
LogWarn(
Name(),
" not enough paths for lns lookup, have ",
paths.size(),
" need ",
MIN_ENDPOINTS_FOR_LNS_LOOKUP);
handler(std::nullopt);
return;
}
auto maybeInvalidateCache = [handler, &cache, name](auto result) {
if (result)
{
var::visit(
[&result, &cache, name](auto&& value) {
if (value.IsZero())
{
cache.Remove(name);
result = std::nullopt;
}
},
*result);
}
if (result)
{
cache.Put(name, *result);
}
handler(result);
};
constexpr size_t max_lns_lookup_endpoints = 7;
// pick up to max_lns_lookup_endpoints random paths to do lookups from
std::vector<path::Path_ptr> chosenpaths;
chosenpaths.insert(chosenpaths.begin(), paths.begin(), paths.end());
std::shuffle(chosenpaths.begin(), chosenpaths.end(), CSRNG{});
chosenpaths.resize(std::min(paths.size(), max_lns_lookup_endpoints));
auto resultHandler =
m_state->lnsTracker.MakeResultHandler(name, chosenpaths.size(), maybeInvalidateCache);
for (const auto& path : chosenpaths)
{
LogInfo(Name(), " lookup ", name, " from ", path->Endpoint());
auto job = new LookupNameJob{this, GenTXID(), name, resultHandler};
job->SendRequestViaPath(path, m_router);
}
}
bool
Endpoint::HandleGotNameMessage(std::shared_ptr<const dht::GotNameMessage> msg)
{
auto& lookups = m_state->m_PendingLookups;
auto itr = lookups.find(msg->TxID);
if (itr == lookups.end())
return false;
// decrypt entry
const auto maybe = msg->result.Decrypt(itr->second->name);
// inform result
itr->second->HandleNameResponse(maybe);
lookups.erase(itr);
return true;
}
void
Endpoint::EnsureRouterIsKnown(const RouterID& router)
{
if (router.IsZero())
return;
if (!Router()->nodedb()->Has(router))
{
LookupRouterAnon(router, nullptr);
}
}
bool
Endpoint::LookupRouterAnon(RouterID router, RouterLookupHandler handler)
{
using llarp::dht::FindRouterMessage;
auto& routers = m_state->m_PendingRouters;
if (routers.find(router) == routers.end())
{
auto path = GetEstablishedPathClosestTo(router);
routing::DHTMessage msg;
auto txid = GenTXID();
msg.M.emplace_back(std::make_unique<FindRouterMessage>(txid, router));
if (path)
msg.S = path->NextSeqNo();
if (path && path->SendRoutingMessage(msg, Router()))
{
RouterLookupJob job{this, [handler, router, nodedb = m_router->nodedb()](auto results) {
if (results.empty())
{
LogInfo("could not find ", router, ", removing it from nodedb");
nodedb->Remove(router);
}
if (handler)
handler(results);
}};
assert(msg.M.size() == 1);
auto dhtMsg = dynamic_cast<FindRouterMessage*>(msg.M[0].get());
assert(dhtMsg != nullptr);
m_router->NotifyRouterEvent<tooling::FindRouterSentEvent>(m_router->pubkey(), *dhtMsg);
routers.emplace(router, std::move(job));
return true;
}
}
return false;
}
void
Endpoint::HandlePathBuilt(path::Path_ptr p)
{
p->SetDataHandler(util::memFn(&Endpoint::HandleHiddenServiceFrame, this));
p->SetDropHandler(util::memFn(&Endpoint::HandleDataDrop, this));
p->SetDeadChecker(util::memFn(&Endpoint::CheckPathIsDead, this));
path::Builder::HandlePathBuilt(p);
}
bool
Endpoint::HandleDataDrop(path::Path_ptr p, const PathID_t& dst, uint64_t seq)
{
LogWarn(Name(), " message ", seq, " dropped by endpoint ", p->Endpoint(), " via ", dst);
return true;
}
std::unordered_map<std::string, std::string>
Endpoint::NotifyParams() const
{
return {{"LOKINET_ADDR", m_Identity.pub.Addr().ToString()}};
}
void
Endpoint::FlushRecvData()
{
while (auto maybe = m_RecvQueue.tryPopFront())
{
auto& ev = *maybe;
ProtocolMessage::ProcessAsync(ev.fromPath, ev.pathid, ev.msg);
}
}
void
Endpoint::QueueRecvData(RecvDataEvent ev)
{
m_RecvQueue.tryPushBack(std::move(ev));
Router()->TriggerPump();
}
bool
Endpoint::HandleDataMessage(
path::Path_ptr p, const PathID_t from, std::shared_ptr<ProtocolMessage> msg)
{
PutSenderFor(msg->tag, msg->sender, true);
Introduction intro = msg->introReply;
if (HasInboundConvo(msg->sender.Addr()))
{
intro.pathID = from;
intro.router = p->Endpoint();
}
PutReplyIntroFor(msg->tag, intro);
ConvoTagRX(msg->tag);
return ProcessDataMessage(msg);
}
bool
Endpoint::HasPathToSNode(const RouterID ident) const
{
auto range = m_state->m_SNodeSessions.equal_range(ident);
auto itr = range.first;
while (itr != range.second)
{
if (itr->second->IsReady())
{
return true;
}
++itr;
}
return false;
}
EndpointBase::AddressVariant_t
Endpoint::LocalAddress() const
{
return m_Identity.pub.Addr();
}
std::optional<EndpointBase::SendStat>
Endpoint::GetStatFor(AddressVariant_t) const
{
// TODO: implement me
return std::nullopt;
}
std::unordered_set<EndpointBase::AddressVariant_t>
Endpoint::AllRemoteEndpoints() const
{
std::unordered_set<AddressVariant_t> remote;
for (const auto& item : Sessions())
{
remote.insert(item.second.remote.Addr());
}
for (const auto& item : m_state->m_SNodeSessions)
{
remote.insert(item.first);
}
return remote;
}
bool
Endpoint::ProcessDataMessage(std::shared_ptr<ProtocolMessage> msg)
{
if ((msg->proto == ProtocolType::Exit
&& (m_state->m_ExitEnabled || m_ExitMap.ContainsValue(msg->sender.Addr())))
|| msg->proto == ProtocolType::TrafficV4 || msg->proto == ProtocolType::TrafficV6
|| (msg->proto == ProtocolType::QUIC and m_quic))
{
m_InboundTrafficQueue.tryPushBack(std::move(msg));
Router()->TriggerPump();
return true;
}
if (msg->proto == ProtocolType::Control)
{
// TODO: implement me (?)
// right now it's just random noise
return true;
}
return false;
}
void
Endpoint::AsyncProcessAuthMessage(
std::shared_ptr<ProtocolMessage> msg, std::function<void(AuthResult)> hook)
{
if (m_AuthPolicy)
{
if (not m_AuthPolicy->AsyncAuthPending(msg->tag))
{
// do 1 authentication attempt and drop everything else
m_AuthPolicy->AuthenticateAsync(std::move(msg), std::move(hook));
}
}
else
{
Router()->loop()->call([h = std::move(hook)] { h({AuthResultCode::eAuthAccepted, "OK"}); });
}
}
void
Endpoint::SendAuthResult(
path::Path_ptr path, PathID_t replyPath, ConvoTag tag, AuthResult result)
{
// not applicable because we are not an exit or don't have an endpoint auth policy
if ((not m_state->m_ExitEnabled) or m_AuthPolicy == nullptr)
return;
ProtocolFrame f{};
f.R = AuthResultCodeAsInt(result.code);
f.T = tag;
f.F = path->intro.pathID;
f.N.Randomize();
if (result.code == AuthResultCode::eAuthAccepted)
{
ProtocolMessage msg;
std::vector<byte_t> reason{};
reason.resize(result.reason.size());
std::copy_n(result.reason.c_str(), reason.size(), reason.data());
msg.PutBuffer(reason);
if (m_AuthPolicy)
msg.proto = ProtocolType::Auth;
else
msg.proto = ProtocolType::Control;
if (not GetReplyIntroFor(tag, msg.introReply))
{
LogError("Failed to send auth reply: no reply intro");
return;
}
msg.sender = m_Identity.pub;
SharedSecret sessionKey{};
if (not GetCachedSessionKeyFor(tag, sessionKey))
{
LogError("failed to send auth reply: no cached session key");
return;
}
if (not f.EncryptAndSign(msg, sessionKey, m_Identity))
{
LogError("Failed to encrypt and sign auth reply");
return;
}
}
else
{
if (not f.Sign(m_Identity))
{
LogError("failed to sign auth reply result");
return;
}
}
m_SendQueue.tryPushBack(
SendEvent_t{std::make_shared<routing::PathTransferMessage>(f, replyPath), path});
}
void
Endpoint::RemoveConvoTag(const ConvoTag& t)
{
Sessions().erase(t);
}
void
Endpoint::ResetConvoTag(ConvoTag tag, path::Path_ptr p, PathID_t from)
{
// send reset convo tag message
ProtocolFrame f{};
f.R = 1;
f.T = tag;
f.F = p->intro.pathID;
f.Sign(m_Identity);
{
LogWarn("invalidating convotag T=", tag);
RemoveConvoTag(tag);
m_SendQueue.tryPushBack(
SendEvent_t{std::make_shared<routing::PathTransferMessage>(f, from), p});
}
}
bool
Endpoint::HandleHiddenServiceFrame(path::Path_ptr p, const ProtocolFrame& frame)
{
if (frame.R)
{
// handle discard
ServiceInfo si;
if (!GetSenderFor(frame.T, si))
return false;
// verify source
if (!frame.Verify(si))
return false;
// remove the convotag; the remote indicated it no longer exists
LogWarn("remove convotag T=", frame.T, " R=", frame.R, " from ", si.Addr());
RemoveConvoTag(frame.T);
return true;
}
if (not frame.AsyncDecryptAndVerify(Router()->loop(), p, m_Identity, this))
{
ResetConvoTag(frame.T, p, frame.F);
}
return true;
}
void
Endpoint::HandlePathDied(path::Path_ptr p)
{
m_router->routerProfiling().MarkPathTimeout(p.get());
ManualRebuild(1);
path::Builder::HandlePathDied(p);
RegenAndPublishIntroSet();
}
bool
Endpoint::CheckPathIsDead(path::Path_ptr, llarp_time_t dlt)
{
return dlt > path::alive_timeout;
}
bool
Endpoint::OnLookup(
const Address& addr,
std::optional<IntroSet> introset,
const RouterID& endpoint,
llarp_time_t timeLeft,
uint64_t relayOrder)
{
// tell all our existing remote sessions about this introset update
const auto now = Router()->Now();
auto& lookups = m_state->m_PendingServiceLookups;
if (introset)
{
auto& sessions = m_state->m_RemoteSessions;
auto range = sessions.equal_range(addr);
auto itr = range.first;
while (itr != range.second)
{
itr->second->OnIntroSetUpdate(addr, introset, endpoint, timeLeft, relayOrder);
// we got a successful lookup
if (itr->second->ReadyToSend() and not introset->IsExpired(now))
{
// inform all lookups
auto lookup_range = lookups.equal_range(addr);
auto i = lookup_range.first;
while (i != lookup_range.second)
{
i->second(addr, itr->second.get());
++i;
}
lookups.erase(addr);
}
++itr;
}
}
auto& fails = m_state->m_ServiceLookupFails;
if (not introset or introset->IsExpired(now))
{
LogError(
Name(),
" failed to lookup ",
addr.ToString(),
" from ",
endpoint,
" order=",
relayOrder);
fails[endpoint] += 1;
const auto pendingForAddr = std::count_if(
m_state->m_PendingLookups.begin(),
m_state->m_PendingLookups.end(),
[addr](const auto& item) -> bool { return item.second->IsFor(addr); });
// inform all if we have no more pending lookups for this address
if (pendingForAddr == 0)
{
auto range = lookups.equal_range(addr);
auto itr = range.first;
while (itr != range.second)
{
itr->second(addr, nullptr);
itr = lookups.erase(itr);
}
}
return false;
}
// check for established outbound context
if (m_state->m_RemoteSessions.count(addr) > 0)
return true;
PutNewOutboundContext(*introset, timeLeft);
return true;
}
void
Endpoint::MarkAddressOutbound(AddressVariant_t addr)
{
if (auto* ptr = std::get_if<Address>(&addr))
m_state->m_OutboundSessions.insert(*ptr);
}
bool
Endpoint::WantsOutboundSession(const Address& addr) const
{
return m_state->m_OutboundSessions.count(addr) > 0;
}
void
Endpoint::InformPathToService(const Address remote, OutboundContext* ctx)
{
auto& serviceLookups = m_state->m_PendingServiceLookups;
auto range = serviceLookups.equal_range(remote);
auto itr = range.first;
while (itr != range.second)
{
itr->second(remote, ctx);
++itr;
}
serviceLookups.erase(remote);
}
bool
Endpoint::EnsurePathToService(const Address remote, PathEnsureHook hook, llarp_time_t timeout)
{
if (not WantsOutboundSession(remote))
{
// we don't want to ensure paths to addresses that are inbound;
// inform of failure right away in that case
hook(remote, nullptr);
return false;
}
/// how many routers to use for lookups
static constexpr size_t NumParallelLookups = 2;
/// how many requests per router
static constexpr size_t RequestsPerLookup = 2;
// add response hook to list for address.
m_state->m_PendingServiceLookups.emplace(remote, hook);
auto& sessions = m_state->m_RemoteSessions;
{
auto range = sessions.equal_range(remote);
auto itr = range.first;
while (itr != range.second)
{
if (itr->second->ReadyToSend())
{
InformPathToService(remote, itr->second.get());
return true;
}
++itr;
}
}
/// check replay filter
if (not m_IntrosetLookupFilter.Insert(remote))
return true;
const auto paths = GetManyPathsWithUniqueEndpoints(this, NumParallelLookups);
using namespace std::placeholders;
const dht::Key_t location = remote.ToKey();
uint64_t order = 0;
// flag to only add callback to list of callbacks for
// address once.
bool hookAdded = false;
for (const auto& path : paths)
{
for (size_t count = 0; count < RequestsPerLookup; ++count)
{
HiddenServiceAddressLookup* job = new HiddenServiceAddressLookup(
this,
[this](auto addr, auto result, auto from, auto left, auto order) {
return OnLookup(addr, result, from, left, order);
},
location,
PubKey{remote.as_array()},
path->Endpoint(),
order,
GenTXID(),
timeout + (2 * path->intro.latency) + IntrosetLookupGraceInterval);
LogInfo(
"doing lookup for ",
remote,
" via ",
path->Endpoint(),
" at ",
location,
" order=",
order);
order++;
if (job->SendRequestViaPath(path, Router()))
{
hookAdded = true;
}
else
LogError(Name(), " send via path failed for lookup");
}
}
return hookAdded;
}
void
Endpoint::SRVRecordsChanged()
{
auto& introset = introSet();
introset.SRVs.clear();
for (const auto& srv : SRVRecords())
introset.SRVs.emplace_back(srv.toTuple());
RegenAndPublishIntroSet();
}
bool
Endpoint::EnsurePathToSNode(const RouterID snode, SNodeEnsureHook h)
{
auto& nodeSessions = m_state->m_SNodeSessions;
using namespace std::placeholders;
if (nodeSessions.count(snode) == 0)
{
const auto src = xhtonl(net::TruncateV6(GetIfAddr()));
const auto dst = xhtonl(net::TruncateV6(ObtainIPForAddr(snode)));
auto session = std::make_shared<exit::SNodeSession>(
snode,
[=](const llarp_buffer_t& buf) -> bool {
net::IPPacket pkt;
if (not pkt.Load(buf))
return false;
pkt.UpdateIPv4Address(src, dst);
/// TODO: V6
auto itr = m_state->m_SNodeSessions.find(snode);
if (itr == m_state->m_SNodeSessions.end())
return false;
if (const auto maybe = itr->second->CurrentPath())
return HandleInboundPacket(
ConvoTag{maybe->as_array()}, pkt.ConstBuffer(), ProtocolType::TrafficV4, 0);
return false;
},
Router(),
1,
numHops,
false,
this);
m_state->m_SNodeSessions[snode] = session;
}
EnsureRouterIsKnown(snode);
auto range = nodeSessions.equal_range(snode);
auto itr = range.first;
while (itr != range.second)
{
if (itr->second->IsReady())
h(snode, itr->second, ConvoTag{itr->second->CurrentPath()->as_array()});
else
{
itr->second->AddReadyHook([h, snode](auto session) {
if (session)
{
h(snode, session, ConvoTag{session->CurrentPath()->as_array()});
}
else
{
h(snode, nullptr, ConvoTag{});
}
});
if (not itr->second->BuildCooldownHit(Now()))
itr->second->BuildOne();
}
++itr;
}
return true;
}
bool
Endpoint::SendToOrQueue(ConvoTag tag, const llarp_buffer_t& pkt, ProtocolType t)
{
if (tag.IsZero())
{
LogWarn("SendToOrQueue failed: convo tag is zero");
return false;
}
LogDebug(Name(), " send ", pkt.sz, " bytes on T=", tag);
if (auto maybe = GetEndpointWithConvoTag(tag))
{
if (auto* ptr = std::get_if<Address>(&*maybe))
{
if (*ptr == m_Identity.pub.Addr())
{
ConvoTagTX(tag);
m_state->m_Router->TriggerPump();
if (not HandleInboundPacket(tag, pkt, t, 0))
return false;
ConvoTagRX(tag);
return true;
}
}
if (not SendToOrQueue(*maybe, pkt, t))
return false;
return true;
}
LogDebug("SendToOrQueue failed: no endpoint for convo tag ", tag);
return false;
}
bool
Endpoint::SendToOrQueue(const RouterID& addr, const llarp_buffer_t& buf, ProtocolType t)
{
LogTrace("SendToOrQueue: sending to snode ", addr);
auto pkt = std::make_shared<net::IPPacket>();
if (!pkt->Load(buf))
return false;
EnsurePathToSNode(
addr, [this, t, pkt = std::move(pkt)](RouterID, exit::BaseSession_ptr s, ConvoTag) {
if (s)
{
s->SendPacketToRemote(pkt->ConstBuffer(), t);
Router()->TriggerPump();
}
});
return true;
}
void
Endpoint::Pump(llarp_time_t now)
{
FlushRecvData();
// send downstream packets to user for snode
for (const auto& [router, session] : m_state->m_SNodeSessions)
session->FlushDownstream();
// handle inbound traffic in priority order
util::ascending_priority_queue<ProtocolMessage> queue;
while (not m_InboundTrafficQueue.empty())
{
// drain the inbound queue into the priority queue
queue.emplace(std::move(*m_InboundTrafficQueue.popFront()));
}
while (not queue.empty())
{
const auto& msg = queue.top();
LogDebug(
Name(),
" handle inbound packet on ",
msg.tag,
" ",
msg.payload.size(),
" bytes seqno=",
msg.seqno);
if (HandleInboundPacket(msg.tag, msg.payload, msg.proto, msg.seqno))
{
ConvoTagRX(msg.tag);
}
else
{
LogWarn("Failed to handle inbound message");
}
queue.pop();
}
auto router = Router();
// TODO: locking on this container
for (const auto& [addr, outctx] : m_state->m_RemoteSessions)
{
outctx->FlushUpstream();
outctx->Pump(now);
}
// TODO: locking on this container
for (const auto& [router, session] : m_state->m_SNodeSessions)
session->FlushUpstream();
// send queue flush
while (not m_SendQueue.empty())
{
SendEvent_t item = m_SendQueue.popFront();
item.first->S = item.second->NextSeqNo();
if (item.second->SendRoutingMessage(*item.first, router))
ConvoTagTX(item.first->T.T);
}
UpstreamFlush(router);
}
std::optional<ConvoTag>
Endpoint::GetBestConvoTagFor(std::variant<Address, RouterID> remote) const
{
// get convotag with lowest estimated RTT
if (auto ptr = std::get_if<Address>(&remote))
{
llarp_time_t rtt = 30s;
std::optional<ConvoTag> ret = std::nullopt;
for (const auto& [tag, session] : Sessions())
{
if (tag.IsZero())
continue;
if (session.remote.Addr() == *ptr)
{
if (*ptr == m_Identity.pub.Addr())
{
return tag;
}
if (session.inbound)
{
auto path = GetPathByRouter(session.replyIntro.router);
// if we have no path to the remote router, still consider this tag
// in case it is the ONLY one we have
if (path == nullptr)
{
ret = tag;
continue;
}
if (path and path->IsReady())
{
const auto rttEstimate = (session.replyIntro.latency + path->intro.latency) * 2;
if (rttEstimate < rtt)
{
ret = tag;
rtt = rttEstimate;
}
}
}
else
{
auto range = m_state->m_RemoteSessions.equal_range(*ptr);
auto itr = range.first;
while (itr != range.second)
{
if (itr->second->ReadyToSend() and itr->second->estimatedRTT > 0s)
{
if (itr->second->estimatedRTT < rtt)
{
ret = tag;
rtt = itr->second->estimatedRTT;
}
}
itr++;
}
}
}
}
return ret;
}
if (auto* ptr = std::get_if<RouterID>(&remote))
{
auto itr = m_state->m_SNodeSessions.find(*ptr);
if (itr == m_state->m_SNodeSessions.end())
return std::nullopt;
if (auto maybe = itr->second->CurrentPath())
return ConvoTag{maybe->as_array()};
}
return std::nullopt;
}
bool
Endpoint::EnsurePathTo(
std::variant<Address, RouterID> addr,
std::function<void(std::optional<ConvoTag>)> hook,
llarp_time_t timeout)
{
if (auto ptr = std::get_if<Address>(&addr))
{
if (*ptr == m_Identity.pub.Addr())
{
ConvoTag tag{};
if (auto maybe = GetBestConvoTagFor(*ptr))
tag = *maybe;
else
tag.Randomize();
PutSenderFor(tag, m_Identity.pub, true);
ConvoTagTX(tag);
Sessions()[tag].forever = true;
Loop()->call_soon([tag, hook]() { hook(tag); });
return true;
}
if (not WantsOutboundSession(*ptr))
{
// we don't want to connect back to inbound sessions
hook(std::nullopt);
return true;
}
return EnsurePathToService(
*ptr,
[hook](auto, auto* ctx) {
if (ctx)
{
hook(ctx->currentConvoTag);
}
else
{
hook(std::nullopt);
}
},
timeout);
}
if (auto ptr = std::get_if<RouterID>(&addr))
{
return EnsurePathToSNode(*ptr, [hook](auto, auto session, auto tag) {
if (session)
{
hook(tag);
}
else
{
hook(std::nullopt);
}
});
}
return false;
}
bool
Endpoint::SendToOrQueue(const Address& remote, const llarp_buffer_t& data, ProtocolType t)
{
LogTrace("SendToOrQueue: sending to address ", remote);
if (data.sz == 0)
{
LogTrace("SendToOrQueue: dropping because data.sz == 0");
return false;
}
if (HasInboundConvo(remote))
{
// inbound conversation
LogTrace("Have inbound convo");
auto transfer = std::make_shared<routing::PathTransferMessage>();
ProtocolFrame& f = transfer->T;
f.R = 0;
if (const auto maybe = GetBestConvoTagFor(remote))
{
// the remote guy's intro
Introduction replyIntro;
SharedSecret K;
const auto tag = *maybe;
if (not GetCachedSessionKeyFor(tag, K))
{
LogError(Name(), " no cached key for inbound session from ", remote, " T=", tag);
return false;
}
if (not GetReplyIntroFor(tag, replyIntro))
{
LogError(Name(), " no reply intro for inbound session from ", remote, " T=", tag);
return false;
}
// get path for intro
auto p = GetPathByRouter(replyIntro.router);
if (not p)
{
LogWarn(
Name(),
" has no path for intro router ",
RouterID{replyIntro.router},
" for inbound convo T=",
tag);
return false;
}
f.T = tag;
// TODO: check expiration of our end
auto m = std::make_shared<ProtocolMessage>(f.T);
m->PutBuffer(data);
f.N.Randomize();
f.C.Zero();
f.R = 0;
transfer->Y.Randomize();
m->proto = t;
m->introReply = p->intro;
m->sender = m_Identity.pub;
if (auto maybe = GetSeqNoForConvo(f.T))
{
m->seqno = *maybe;
}
else
{
LogWarn(Name(), " could not set sequence number, no session T=", f.T);
return false;
}
f.S = m->seqno;
f.F = p->intro.pathID;
transfer->P = replyIntro.pathID;
Router()->QueueWork([transfer, p, m, K, this]() {
if (not transfer->T.EncryptAndSign(*m, K, m_Identity))
{
LogError("failed to encrypt and sign for session T=", transfer->T.T);
return;
}
m_SendQueue.tryPushBack(SendEvent_t{transfer, p});
Router()->TriggerPump();
});
return true;
}
else
{
LogWarn(
Name(),
" SendToOrQueue on inbound convo from ",
remote,
" but GetBestConvoTagFor returned no tag; possible bug");
}
if (not WantsOutboundSession(remote))
{
LogWarn(
Name(),
" SendToOrQueue to remote we did not mark as outbound (remote=",
remote,
")");
return false;
}
// Failed to find a suitable inbound convo, look for outbound
LogTrace("Not an inbound convo");
auto& sessions = m_state->m_RemoteSessions;
auto range = sessions.equal_range(remote);
for (auto itr = range.first; itr != range.second; ++itr)
{
if (itr->second->ReadyToSend())
{
LogTrace("Found an outbound session to use to reach ", remote);
itr->second->AsyncEncryptAndSendTo(data, t);
return true;
}
}
LogTrace("Making an outbound session and queuing the data");
// add pending traffic
auto& traffic = m_state->m_PendingTraffic;
traffic[remote].emplace_back(data, t);
EnsurePathToService(
remote,
[this](Address addr, OutboundContext* ctx) {
if (ctx)
{
for (auto& pending : m_state->m_PendingTraffic[addr])
{
ctx->AsyncEncryptAndSendTo(pending.Buffer(), pending.protocol);
}
}
else
{
LogWarn("no path made to ", addr);
}
m_state->m_PendingTraffic.erase(addr);
},
PathAlignmentTimeout());
return true;
}
bool
Endpoint::SendToOrQueue(
const std::variant<Address, RouterID>& addr, const llarp_buffer_t& data, ProtocolType t)
{
return var::visit([&](auto& addr) { return SendToOrQueue(addr, data, t); }, addr);
}
bool
Endpoint::HasConvoTag(const ConvoTag& t) const
{
return Sessions().find(t) != Sessions().end();
}
std::optional<uint64_t>
Endpoint::GetSeqNoForConvo(const ConvoTag& tag)
{
auto itr = Sessions().find(tag);
if (itr == Sessions().end())
return std::nullopt;
return itr->second.seqno++;
}
bool
Endpoint::ShouldBuildMore(llarp_time_t now) const
{
if (BuildCooldownHit(now))
return false;
const auto requiredPaths = std::max(numDesiredPaths, path::min_intro_paths);
if (NumInStatus(path::ePathBuilding) >= requiredPaths)
return false;
return NumPathsExistingAt(now + (path::default_lifetime - path::intro_path_spread))
< requiredPaths;
}
AbstractRouter*
Endpoint::Router()
{
return m_state->m_Router;
}
const EventLoop_ptr&
Endpoint::Loop()
{
return Router()->loop();
}
void
Endpoint::BlacklistSNode(const RouterID snode)
{
m_state->m_SnodeBlacklist.insert(snode);
}
const std::set<RouterID>&
Endpoint::SnodeBlacklist() const
{
return m_state->m_SnodeBlacklist;
}
const IntroSet&
Endpoint::introSet() const
{
return m_state->m_IntroSet;
}
IntroSet&
Endpoint::introSet()
{
return m_state->m_IntroSet;
}
const ConvoMap&
Endpoint::Sessions() const
{
return m_state->m_Sessions;
}
ConvoMap&
Endpoint::Sessions()
{
return m_state->m_Sessions;
}
void
Endpoint::SetAuthInfoForEndpoint(Address addr, AuthInfo info)
{
m_RemoteAuthInfos[addr] = std::move(info);
}
void
Endpoint::MapExitRange(IPRange range, Address exit)
{
if (not exit.IsZero())
LogInfo(Name(), " map ", range, " to exit at ", exit);
m_ExitMap.Insert(range, exit);
}
bool
Endpoint::HasFlowToService(Address addr) const
{
return HasOutboundConvo(addr) or HasInboundConvo(addr);
}
void
Endpoint::UnmapExitRange(IPRange range)
{
// remove all existing mappings for ranges contained within the given range
m_ExitMap.RemoveIf([&](const auto& item) -> bool {
if (not range.Contains(item.first))
return false;
LogInfo(Name(), " unmap ", item.first, " exit range mapping");
return true;
});
}
std::optional<AuthInfo>
Endpoint::MaybeGetAuthInfoForEndpoint(Address remote)
{
const auto itr = m_RemoteAuthInfos.find(remote);
if (itr == m_RemoteAuthInfos.end())
return std::nullopt;
return itr->second;
}
quic::TunnelManager*
Endpoint::GetQUICTunnel()
{
return m_quic.get();
}
} // namespace service
} // namespace llarp