Adds [DeepSparse](https://github.com/neuralmagic/deepsparse) as an LLM
backend. DeepSparse supports running various open-source sparsified
models hosted on [SparseZoo](https://sparsezoo.neuralmagic.com/) for
performance gains on CPUs.
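A minimal usage sketch (the SparseZoo model stub below follows the integration docs; treat exact names as assumptions):
```
from langchain.llms import DeepSparse

# any sparsified text-generation model stub from SparseZoo works here
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none"
)
print(llm("def fib(n):"))
```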
Twitter handles: @mgoin_ @neuralmagic
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Added SmartGPT workflow by providing SmartLLM wrapper around LLMs
Edit:
As @hwchase17 suggested, this should be a chain, not an LLM. I have
adapted the PR.
It is used like this:
```
from langchain.prompts import PromptTemplate
from langchain.chains import SmartLLMChain
from langchain.chat_models import ChatOpenAI
hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"
prompt = PromptTemplate.from_template(hard_question)
llm = ChatOpenAI(model_name="gpt-4")
chain = SmartLLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run({})
```
Original text:
Added SmartLLM wrapper around LLMs to allow for SmartGPT workflow (as in
https://youtu.be/wVzuvf9D9BU). SmartLLM can be used wherever an LLM can
be used. E.g.:
```
smart_llm = SmartLLM(llm=OpenAI())
smart_llm("What would be a good company name for a company that makes colorful socks?")
```
or
```
smart_llm = SmartLLM(llm=OpenAI())
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=smart_llm, prompt=prompt)
chain.run("colorful socks")
```
SmartGPT consists of 3 steps:
1. Ideate - generate n possible solutions ("ideas") to user prompt
2. Critique - find flaws in every idea & select best one
3. Resolve - improve upon best idea & return it
Fixes #4463
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
- @hwchase17
- @agola11
Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This commit adds a LangChain utility that allows for the real-time
retrieval of cryptocurrency exchange prices. With LangChain, users can
easily access up-to-date pricing information by calling
`.run(from_currency, to_currency)`. This new feature provides a
convenient way to stay informed on the latest exchange rates and make
informed decisions when trading crypto.
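A minimal usage sketch (the wrapper class name is hypothetical, since the description only specifies the `.run(from_currency, to_currency)` interface):
```
# hypothetical wrapper name for illustration; only the
# .run(from_currency, to_currency) call comes from the description
from langchain.utilities import CryptoExchangeAPIWrapper

exchange = CryptoExchangeAPIWrapper()
print(exchange.run("BTC", "USD"))  # latest BTC -> USD rate
```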
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Adds the ArcGISLoader class to
`langchain.document_loaders`
- Allows users to load data from ArcGIS Online, Portal, and similar
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
- Uses the `arcgis.features.FeatureLayer` class to retrieve the data
- Defines the most relevant keyword arguments and accepts `**kwargs`
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.
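A minimal usage sketch of the loader described above (the layer URL is a placeholder, and passing it as the first argument is an assumption):
```
from langchain.document_loaders import ArcGISLoader

# placeholder URL; any ArcGIS Online/Portal FeatureLayer URL should work
loader = ArcGISLoader("https://services.arcgis.com/<org>/arcgis/rest/services/<layer>/FeatureServer/0")
docs = loader.load()
```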
Tagging maintainers:
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a new use case category called "Web Scraping", and
a tutorial to scrape websites using OpenAI Functions Extraction chain to
the docs.
- Tag maintainer: @baskaryan @hwchase17
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm mostly
on LinkedIn)
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
This PR introduces [Label Studio](https://labelstud.io/) integration
with LangChain via `LabelStudioCallbackHandler`:
- sending data to the Label Studio instance
- labeling dataset for supervised LLM finetuning
- rating model responses
- tracking and displaying chat history
- support for custom data labeling workflow
### Example
```
# imports added so the example is self-contained
from langchain.callbacks import LabelStudioCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage

chat_llm = ChatOpenAI(callbacks=[LabelStudioCallbackHandler(mode="chat")])
chat_llm([
SystemMessage(content="Always use emojis in your responses."),
HumanMessage(content="Hey AI, how's your day going?"),
AIMessage(content="🤖 I don't have feelings, but I'm running smoothly! How can I help you today?"),
HumanMessage(content="I'm feeling a bit down. Any advice?"),
AIMessage(content="🤗 I'm sorry to hear that. Remember, it's okay to seek help or talk to someone if you need to. 💬"),
HumanMessage(content="Can you tell me a joke to lighten the mood?"),
AIMessage(content="Of course! 🎭 Why did the scarecrow win an award? Because he was outstanding in his field! 🌾"),
HumanMessage(content="Haha, that was a good one! Thanks for cheering me up."),
AIMessage(content="Always here to help! 😊 If you need anything else, just let me know."),
HumanMessage(content="Will do! By the way, can you recommend a good movie?"),
])
```
<img width="906" alt="image"
src="https://github.com/langchain-ai/langchain/assets/6087484/0a1cf559-0bd3-4250-ad96-6e71dbb1d2f3">
### Dependencies
- [label-studio](https://pypi.org/project/label-studio/)
- [label-studio-sdk](https://pypi.org/project/label-studio-sdk/)
https://twitter.com/labelstudiohq
---------
Co-authored-by: nik <nik@heartex.net>
As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and set up), the defaults should instead be `owner.apikey`
for Argilla 1.11.0 or higher, or `admin.apikey` for lower versions.
Additionally, we've removed the f-string replacements from the
docstrings.
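A short sketch of the intended default selection (the version check shown is an assumption about how it may be implemented):
```
from argilla import __version__ as argilla_version
from packaging.version import parse

# Quickstart defaults: owner.apikey from 1.11.0 on, admin.apikey before
DEFAULT_API_KEY = (
    "owner.apikey" if parse(argilla_version) >= parse("1.11.0") else "admin.apikey"
)
```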
---------
Co-authored-by: Gabriel Martin <gabriel@argilla.io>
The second section looked like a copy/paste from the first section and
didn't include the specific embedding model mentioned in the example, so
I added it for clarity.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The table creation commands in these examples do not match what the
recently updated functions in the examples expect. This change updates
the type in the table creation command.
Issue: #7446 (my report of the doc problem)
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. This PR integrates the `betabageldb` PyPI package with
LangChain, with related tests and code.
- **Issue**: Not applicable.
- **Dependencies**: `betabageldb` PyPI package.
- **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
- **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
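A minimal usage sketch (assuming the vectorstore is exposed as `Bagel`; exact names are assumptions):
```
from langchain.vectorstores import Bagel

# create a cluster from raw texts and run a similarity search
cluster = Bagel.from_texts(cluster_name="testing", texts=["hello bagel", "hello langchain"])
results = cluster.similarity_search("bagel", k=1)
```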
We ran `make format`, `make lint` and `make test` locally, and followed
the contribution guidelines thoroughly:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
---------
Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods for
improving the embeddings generation for large documents. The
implementation uses asyncio tasks and gather to achieve concurrency as
there is no bedrock async API in boto3.
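A hedged sketch of the approach (method bodies are illustrative, shown standalone rather than inside the embeddings class):
```
import asyncio
from functools import partial
from typing import List

# run the sync embed calls in the default executor and gather them,
# since boto3 exposes no async Bedrock client
async def aembed_query(self, text: str) -> List[float]:
    return await asyncio.get_running_loop().run_in_executor(
        None, partial(self.embed_query, text)
    )

async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
    return await asyncio.gather(*(self.aembed_query(t) for t in texts))
```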
### Maintainers
@agola11
@aarora79
### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
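A minimal usage sketch (class and parameter names are assumptions; the server must already be running locally):
```
from langchain.llms import TitanTakeoff

# assumes a Takeoff server already running on localhost:8000;
# base_url as the connection parameter is an assumption
llm = TitanTakeoff(base_url="http://localhost:8000")
print(llm("What is the capital of France?"))
```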
#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.
- [x] Make Lint
- [x] Make Format
- [x] Make Test
#### Dependencies
No new dependencies are introduced. However, users will need to install
the `titan-iris` package in their local environment and start the Titan
Takeoff inference server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
This PR adds the ability to temporarily cache or persistently store
embeddings.
A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
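A minimal sketch of the pattern (names follow the notebook's style but should be treated as assumptions):
```
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache/")  # persistent on-disk store
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)
# the first call computes and stores; repeated calls hit the cache
vectors = cached_embedder.embed_documents(["hello", "world"])
```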
- Description: Improvements to the Grobid loader documentation: fixed
typos and suggested using the Docker image instead of installing Grobid
locally (the documentation was also limited to Mac, while Docker allows
running on any platform)
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @whitenoise
This pull request aims to ensure that the `OpenAICallbackHandler` can
properly calculate the total cost for Azure OpenAI chat models. The
following changes have resolved this issue:
- The `model_name` has been added to the ChatResult llm_output. Without
this, the default values of `gpt-35-turbo` were applied. This was
causing the total cost for Azure OpenAI's GPT-4 to be significantly
inaccurate.
- A new parameter `model_version` has been added to `AzureChatOpenAI`.
Azure does not include the model version in the response. With the
addition of `model_name`, this is not a significant issue for GPT-4
models, but it's an issue for GPT-3.5-Turbo. Version 0301 (default) of
GPT-3.5-Turbo on Azure has a flat rate of $0.002 per 1k tokens for both
prompt and completion. However, version 0613 introduced split pricing
for prompt and completion tokens.
- The `OpenAICallbackHandler` implementation has been updated with the
proper model names, versions, and cost per 1k tokens.
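A hedged sketch of how the new parameter might be passed (the deployment name is a placeholder):
```
from langchain.chat_models import AzureChatOpenAI

# model_version lets the cost callback distinguish e.g. 0301 vs 0613 pricing
llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo",  # placeholder deployment name
    model_version="0613",
)
```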
Unit tests have been added to ensure the functionality works as
expected; the Azure ChatOpenAI notebook has been updated with examples.
Maintainers: @hwchase17, @baskaryan
Twitter handle: @jjczopek
---------
Co-authored-by: Jerzy Czopek <jerzy.czopek@avanade.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Instructions for integration with Log10: an [open
source](https://github.com/log10-io/log10) proxiless LLM data management
and application development platform that lets you log, debug and tag
your LangChain calls
- Tag maintainer: @baskaryan
- Twitter handle: @log10io @coffeephoenix
Several examples showing the integration included
[here](https://github.com/log10-io/log10/tree/main/examples/logging) and
in the PR
Description: Adds Rockset as a chat history store
Dependencies: no changes
Tag maintainer: @hwchase17
This PR passes linting and testing.
I added a test for the integration and an example notebook showing its
use.
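A hedged sketch of the integration (class and constructor names are assumptions modeled on similar chat-history stores):
```
from rockset import RocksetClient
from langchain.memory.chat_message_histories import RocksetChatMessageHistory

# constructor arguments here are assumptions for illustration
history = RocksetChatMessageHistory(
    session_id="user-123",
    client=RocksetClient(api_key="<ROCKSET_API_KEY>"),
    collection="langchain_demo",
)
history.add_user_message("hi!")
```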
This PR adds 8 new loaders:
* `AirbyteCDKLoader`: a generic loader that can wrap and run all
Python-based Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
* `AirbyteGongLoader`
* `AirbyteHubspotLoader`
* `AirbyteSalesforceLoader`
* `AirbyteShopifyLoader`
* `AirbyteStripeLoader`
* `AirbyteTypeformLoader`
* `AirbyteZendeskSupportLoader`
## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme as
these two contain all the information to configure a source correctly
(e.g. it won't suggest using OAuth if that's avoidable, even if the
connector supports it).
## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If a text is
required by the use case, the caller needs to do custom transformation
suitable for their use case.
## Incremental sync
All loaders support incremental syncs if the underlying streams support
it. By storing the `last_state` from the reader instance away and
passing it in when loading, it will only load updated records.
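A hedged sketch of the incremental-sync pattern (the `last_state`/`state` names follow the description; exact signatures are assumptions):
```
from langchain.document_loaders.airbyte import AirbyteStripeLoader

config = {
    # fill in per the Stripe source connector spec (account, secret, start date)
}
loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()

# store last_state away, then pass it in later to load only updated records
state = loader.last_state
incremental = AirbyteStripeLoader(config=config, stream_name="invoices", state=state)
new_docs = incremental.load()
```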
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876
This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes a bug with the `Bearer` prefix for the API key
Thanks again in advance for your help!
cc: @hwchase17, @baskaryan
---------
Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Minor doc fix to the awslambda tool notebook: add the missing
`initialize_agent` import to the awslambda agent example.
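For reference, the missing import is presumably the usual one (hedged, since the notebook diff isn't shown here):
```
from langchain.agents import initialize_agent
```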
Co-authored-by: Josh Hart <josharj@amazon.com>
Description:
Fixed an inaccurate import in the integrations:providers:bedrock
documentation.
The page https://python.langchain.com/docs/integrations/providers/bedrock
stated that the import is `from langchain import Bedrock`. This has been
changed to `from langchain.llms.bedrock import Bedrock`, as stated in
https://python.langchain.com/docs/integrations/llms/bedrock.
Issue:
Not applicable
Dependencies:
No dependencies required
Tag maintainer:
@baskaryan
Twitter handle:
Not applicable
Adds Ollama as an LLM. Ollama can run various open-source models locally,
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.
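A minimal usage sketch (assumes the Ollama app is already running locally with the model pulled):
```
from langchain.llms import Ollama

# "llama2" must already be pulled via the Ollama app
llm = Ollama(model="llama2")
print(llm("Why is the sky blue?"))
```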
@rlancemartin @hwchase17
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
## Description
I am excited to propose an integration with USearch, a lightweight
vector-search engine available for both Python and JavaScript, among
other languages.
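A minimal usage sketch (assuming the vectorstore is exposed as `USearch`; exact names are assumptions):
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import USearch

db = USearch.from_texts(["hello world", "goodbye"], OpenAIEmbeddings())
docs = db.similarity_search("greeting", k=1)
```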
## Dependencies
It introduces a new PyPi dependency - `usearch`. I am unsure if it must
be added to the Poetry file, as this would make the PR too clunky.
Please let me know.
## Profiles
- Maintainers: @ashvardanian @davvard
- Twitter handles: @ashvardanian @unum_cloud
---------
Co-authored-by: Davit Vardanyan <78792753+davvard@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- fix install command
- change example notebook to use Metaphor autoprompt by default