# langchain/docs/extras

Latest commit: Takeoff integration (#9045), 8d351bfc20, by Blake (Yung Cher Ho), 2023-08-10 10:56:06 -07:00
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.

Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware with a single command. Most
generative model architectures are supported, including Falcon, Llama 2,
GPT-2, and T5.

Read more about Titan Takeoff here:
- [Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)

#### Testing
Titan Takeoff runs locally (on port 8000 by default), so no external network
access is required; the server's responses are mocked in the tests.
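
As a rough illustration of exercising the integration without a live server,
here is a sketch that patches the LLM's internal `_call` method. The PR's
actual tests mock the server's HTTP responses, so the class name, defaults,
and mocking point below are assumptions, not a description of the real test
suite.

```python
# Illustrative only: patch the integration's _call method so no request ever
# leaves the process. The real tests mock the HTTP responses themselves; the
# class name and constructor defaults here are assumptions based on this PR.
from unittest.mock import patch

from langchain.llms import TitanTakeoff


def test_titan_takeoff_without_a_live_server() -> None:
    with patch.object(TitanTakeoff, "_call", return_value="mocked completion"):
        llm = TitanTakeoff()  # defaults assumed to target http://localhost:8000
        assert llm("Tell me about Titan Takeoff") == "mocked completion"
```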

- [x] Make Lint
- [x] Make Format
- [x] Make Test

#### Dependencies
No new dependencies are introduced. However, to use the Titan Takeoff
integration, users will need to install the `titan-iris` package in their
local environment and start the Titan Takeoff inference server.
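
For orientation, here is a minimal sketch of what using the integration might
look like once `titan-iris` is installed and the server is running. It assumes
the LLM class added by this PR is exposed as `langchain.llms.TitanTakeoff` and
that its defaults point at the local server on port 8000; check the
integration docs for the exact constructor parameters.

```python
# A minimal usage sketch, assuming the class is exposed as
# langchain.llms.TitanTakeoff and its defaults point at the local Takeoff
# server on http://localhost:8000 (assumptions based on this PR's description).
from langchain.llms import TitanTakeoff

# Requires `pip install titan-iris` and a Takeoff server already running locally.
llm = TitanTakeoff()

print(llm("Explain the benefits of running LLM inference locally."))
```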

Thanks for your help and please let me know if you have any questions.

cc: @hwchase17 @baskaryan
## Directory contents

| Directory | Latest commit | Date |
| --- | --- | --- |
| _templates | Update Integrations links (#8206) | 2023-07-24 21:20:32 -07:00 |
| additional_resources | Link to use cases from tutorials (#8371) | 2023-07-27 11:54:04 -07:00 |
| ecosystem | use top nav docs (#8090) | 2023-07-21 13:52:03 -07:00 |
| guides | document lcel fallbacks (#8942) | 2023-08-08 18:49:33 -07:00 |
| integrations | Takeoff integration (#9045) | 2023-08-10 10:56:06 -07:00 |
| modules | Add embeddings cache (#8976) | 2023-08-10 11:15:30 -04:00 |
| use_cases | API use case (#8546) | 2023-08-10 07:52:54 -07:00 |