forked from Archives/langchain
bc7e56e8df
Supporting asyncio in langchain primitives allows users to run them concurrently and enables more seamless integration with asyncio-based frameworks (FastAPI, etc.).

Summary of changes:

**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`

**Chain**
* Add `arun`, `acall`, and `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now

**Agent**
* Refactor to leverage the async chain and LLM methods
* Add the ability for `Tools` to contain an async coroutine
* Implement an async SerpAPI `arun`

Create demo notebook.

Open questions:
* Should all the async functionality go in separate classes? I've seen both patterns (keeping a single class with both sync and async methods vs. splitting into separate classes).
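To illustrate the pattern the commit describes — pairing each sync method with an `a`-prefixed coroutine on the same class, so callers can fan out chain calls with `asyncio.gather` — here is a minimal, stdlib-only sketch. The class and method names mirror the commit message, but `FakeLLM`/`FakeLLMChain` are toy stand-ins, not the real langchain implementations (which await a non-blocking HTTP client such as OpenAI's `acreate`).

```python
import asyncio


class FakeLLM:
    """Toy LLM illustrating the sync/async method pairing
    (generate alongside agenerate). Not the real langchain API."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

    async def agenerate(self, prompt: str) -> str:
        # A real implementation would await a non-blocking API call
        # (e.g. OpenAI's acreate); here we just yield control.
        await asyncio.sleep(0)
        return f"echo: {prompt}"


class FakeLLMChain:
    """Toy chain exposing run/arun, mirroring the commit's naming."""

    def __init__(self, llm: FakeLLM) -> None:
        self.llm = llm

    def run(self, prompt: str) -> str:
        return self.llm.generate(prompt)

    async def arun(self, prompt: str) -> str:
        return await self.llm.agenerate(prompt)


async def main() -> list:
    chain = FakeLLMChain(FakeLLM())
    # The payoff of the async methods: several chain calls
    # run concurrently instead of back-to-back.
    return await asyncio.gather(*(chain.arun(p) for p in ["a", "b", "c"]))


results = asyncio.run(main())
print(results)  # -> ['echo: a', 'echo: b', 'echo: c']
```

With real network-bound LLM calls, the `gather` above is where the latency win comes from; the sync `run` path is unchanged, which is one argument for keeping both method families on the same class.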
Directory listing:

* `api/`
* `chat_vector_db/`
* `combine_documents/`
* `conversation/`
* `hyde/`
* `llm_bash/`
* `llm_checker/`
* `llm_math/`
* `natbot/`
* `pal/`
* `qa_with_sources/`
* `question_answering/`
* `sql_database/`
* `summarize/`
* `vector_db_qa/`
* `__init__.py`
* `base.py`
* `llm.py`
* `llm_requests.py`
* `loading.py`
* `mapreduce.py`
* `moderation.py`
* `sequential.py`
* `transform.py`