🦜🔗 LangChain

Building applications with LLMs through composability

Quick Install

pip install langchain

🤔 What is this?

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

This library is aimed at assisting in the development of those types of applications.

📖 Documentation

Please see here for full documentation on:

  • Getting started (installation, setting up the environment, simple examples)
  • How-To examples (demos, integrations, helper functions)
  • Reference (full API docs)
  • Resources (high-level explanation of core concepts)

🚀 What can this help with?

There are four main areas that LangChain is designed to help with. These are, in increasing order of complexity:

📃 LLMs and Prompts:

This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
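
As a minimal sketch of what this looks like (assuming an OpenAI API key is set in the environment; exact import paths and class names may differ between versions), a prompt template can be combined with an LLM wrapper:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A prompt template with a single input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# A wrapper around one LLM provider; OpenAI is used here as an example.
llm = OpenAI(temperature=0.9)

# Format the prompt and pass the resulting string to the LLM.
print(llm(prompt.format(product="colorful socks")))
```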

🔗 Chains:

Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
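For example, here is a sketch of a simple chain that combines a prompt template with an LLM (assuming the same OpenAI wrapper as above and the LLMChain interface of this version of the library):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# LLMChain formats the prompt with the given input and calls the LLM.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```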

🤖 Agents:

Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
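A rough sketch of what running an agent can look like follows. The helper names used here (load_tools, initialize_agent) and the agent identifier are assumptions that have shifted between versions, and the search tool requires a SerpAPI key:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools the agent can choose between; "serpapi" needs a SerpAPI key,
# and "llm-math" uses the LLM itself for calculations.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent picks a tool at each step, observes the result, and
# repeats until it can produce a final answer.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who is the current president of France? What is their age raised to the 0.5 power?")
```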

🧠 Memory:

Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
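As an illustrative sketch (assuming a buffer-style memory class and the ConversationChain interface; names and import paths may differ between versions), a conversation that remembers earlier turns might look like:

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)

# The memory object keeps the running transcript and injects it into
# each new prompt, so the model can refer back to earlier turns.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory(), verbose=True)

conversation.predict(input="Hi there!")
conversation.predict(input="What did I just say?")
```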

For more information on these concepts, please see our full documentation.

💁 Contributing

As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.

For detailed information on how to contribute, see here.