🦜🔗 LangChain

Building applications with LLMs through composability

Quick Install

pip install langchain

🤔 What is this?

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

This library is aimed at assisting in the development of those types of applications.

📖 Documentation

Please see here for full documentation on:

  • Getting started (installation, setting up the environment, simple examples)
  • How-To examples (demos, integrations, helper functions)
  • Reference (full API docs)
  • Resources (high-level explanation of core concepts)

🚀 What can this help with?

There are four main areas that LangChain is designed to help with. These are, in increasing order of complexity:

📃 LLMs and Prompts:

This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs. For example, a prompt template can be filled in with user input and passed to any supported LLM; a minimal sketch is shown below.
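
A minimal sketch, assuming the OpenAI wrapper and an OPENAI_API_KEY set in the environment; exact import paths may differ between versions:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Assumes OPENAI_API_KEY is set in the environment.
llm = OpenAI(temperature=0.9)

# A reusable prompt with a single input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Format the prompt and call the LLM directly.
print(llm(prompt.format(product="colorful socks")))
```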

🔗 Chains:

Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
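
As an illustrative sketch, the prompt and LLM from the previous example can be combined into an LLMChain (class names and import paths reflect this version of the library and may change):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Chain the prompt and LLM together; .run fills in the prompt and returns the completion.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```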

🤖 Agents:

Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
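
A minimal sketch of this loop, assuming a zero-shot ReAct agent with the SerpAPI and llm-math tools (requires a SERPAPI_API_KEY; tool and agent names may vary by version):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools the agent can choose between; "serpapi" requires SERPAPI_API_KEY in the environment.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# A ReAct-style agent that decides which tool to call at each step.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

agent.run(
    "What was the high temperature in SF yesterday? "
    "What is that number raised to the .023 power?"
)
```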

🧠 Memory:

Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
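
A minimal sketch using ConversationChain, which buffers previous turns and feeds them back into each call (class names are taken from the documentation of this era and may differ in later versions):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)

# ConversationChain stores prior exchanges and includes them in subsequent prompts.
conversation = ConversationChain(llm=llm, verbose=True)

conversation.predict(input="Hi there!")
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
```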

For more information on these concepts, please see our full documentation.

💁 Contributing

As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.

For detailed information on how to contribute, see here.