This PR makes both clients and servers work on macOS. Specifically, it:
- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS and Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs via `torch.mps` (see the device-selection sketch after this list)
- Increases the maximum number of open files by default (the OS default is often insufficient on Linux and very small on macOS); a sketch of the approach follows this list
- `rpc_inference`: the server now accepts an allocation timeout from the user, defaulting to no timeout (see the timeout sketch after this list)
- Bugfix: the inference timeout is now measured from the moment the request is received
  - Previously, you had to wait for your own timeout plus the time it took to get through the queue (other users' timeouts)
  - Now, you get `AllocationFailed` if you had to wait longer than `timeout` seconds, regardless of other users
  - A request for inference with no timeout now fails instantly if there is not enough memory available
- The number of bytes per dtype is now determined correctly for `int`, `bool`, and other dtypes (see the sketch after this list)
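
A minimal sketch of how a server could place a block on Apple silicon via the MPS backend; the `pick_device` helper is illustrative and not taken from this PR:

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Apple's Metal backend (MPS), then fall back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # stock PyTorch check for M1/M2 GPUs
        return torch.device("mps")
    return torch.device("cpu")

# Example: moving a stand-in "block" to the selected device.
block = torch.nn.Linear(1024, 1024).to(pick_device())
```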
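
Raising the open-file limit can be done with the standard `resource` module; the target value of 32768 below is an assumption for illustration, not the number used in this PR:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
desired = min(hard, 32768)  # assumed target; the server may pick a different value
if soft < desired:
    # Only the soft limit is raised; the hard limit stays as the OS allows.
    resource.setrlimit(resource.RLIMIT_NOFILE, (desired, hard))
```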
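
A sketch of the new timeout semantics, assuming a hypothetical `try_allocate()` callback that stands in for the server's cache allocation; the clock starts when the request is received, and a zero timeout fails immediately:

```python
import time

def wait_for_allocation(try_allocate, timeout: float) -> bool:
    # Start the clock at request arrival, not when the request reaches the front of the queue.
    deadline = time.monotonic() + timeout
    while True:
        if try_allocate():          # hypothetical: attempt to reserve cache memory
            return True
        if time.monotonic() >= deadline:
            return False            # caller raises AllocationFailed
        time.sleep(0.05)            # keep waiting behind other users, then retry
```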
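
One correct way to get the per-element size that works for integer and boolean dtypes as well as floating-point ones (shown for illustration; the PR's exact code may differ):

```python
import torch

def dtype_num_bytes(dtype: torch.dtype) -> int:
    # element_size() works uniformly for float, int, and bool dtypes,
    # unlike torch.finfo(), which only covers floating-point types.
    return torch.empty((), dtype=dtype).element_size()

assert dtype_num_bytes(torch.float16) == 2
assert dtype_num_bytes(torch.int8) == 1
assert dtype_num_bytes(torch.bool) == 1
```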
---------
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>