# langchain-together

This package contains the LangChain integration for Together AI's hosted models, covering both generative and embedding models.

## Installation

```bash
pip install -U langchain-together
```
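To call the API you also need a Together API key. A minimal sketch, assuming the integration reads the key from the `TOGETHER_API_KEY` environment variable when `together_api_key` is not passed explicitly (the key can also be supplied directly, as in the examples below):

```python
import os

# Assumption: the key is picked up from this environment variable
# when together_api_key is not passed to the class directly.
os.environ["TOGETHER_API_KEY"] = "your-api-key"
```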

## Embeddings

You can use Together's embedding models through the `TogetherEmbeddings` class:

```python
from langchain_together import TogetherEmbeddings

embeddings = TogetherEmbeddings(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
)
embeddings.embed_query("What is a large language model?")
```
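To embed several texts in one call, the standard LangChain embeddings interface also provides `embed_documents`. A minimal sketch with the same model (the example texts are illustrative):

```python
from langchain_together import TogetherEmbeddings

embeddings = TogetherEmbeddings(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
)

# embed_documents returns one embedding vector per input text.
vectors = embeddings.embed_documents(
    [
        "Large language models are trained on large text corpora.",
        "Embeddings map text to dense numeric vectors.",
    ]
)
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimension
```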

## LLMs

You can use Together's generative AI models as LangChain LLMs:

```python
from langchain_together import Together
from langchain_core.prompts import PromptTemplate

llm = Together(
    model="togethercomputer/RedPajama-INCITE-7B-Base",
    temperature=0.7,
    max_tokens=64,
    top_k=1,
    # together_api_key="..."
)

template = """Question: {question}
Answer: """
prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "Who was the president in the year Justin Bieber was born?"
print(chain.invoke({"question": question}))
```
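Like other LangChain LLMs, `Together` can also be driven through the standard `stream` method to receive the completion incrementally. A minimal sketch, reusing the model and parameters above (whether chunks arrive token by token depends on the backend; `stream` otherwise yields the full completion as a single chunk):

```python
from langchain_together import Together

llm = Together(
    model="togethercomputer/RedPajama-INCITE-7B-Base",
    temperature=0.7,
    max_tokens=64,
    top_k=1,
)

# Each chunk is a piece of the generated text.
for chunk in llm.stream("Name three open-source large language models."):
    print(chunk, end="", flush=True)
print()
```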