merge from upstream

pull/10242/head
olgavrou 1 year ago
commit 00d56fb0fc

@ -27,7 +27,7 @@ runs:
using: composite
steps:
- uses: actions/setup-python@v4
name: Setup python $${ inputs.python-version }}
name: Setup python ${{ inputs.python-version }}
with:
python-version: ${{ inputs.python-version }}

@ -338,6 +338,7 @@
"Neptune Open Cypher QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/neptune_cypher_qa",
"NebulaGraphQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_nebula_qa",
"KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_kuzu_qa",
"FalkorDBQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa",
"HugeGraph QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_hugegraph_qa",
"GraphSparqlQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_sparql_qa",
"ArangoDB QA chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_arangodb_qa",
@ -3174,6 +3175,12 @@
"KuzuQAChain": {
"KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_kuzu_qa"
},
"FalkorDBGraph": {
"KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa"
},
"FalkorDBQAChain": {
"FalkorDB QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa"
},
"HugeGraphQAChain": {
"HugeGraph QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_hugegraph_qa"
},

@ -5,9 +5,10 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Refresh" content="0; url={{ redirect }}" />
<meta name="Description" content="scikit-learn: machine learning in Python">
<meta name="robots" content="follow, index">
<meta name="Description" content="Python API reference for LangChain.">
<link rel="canonical" href="{{ redirect }}" />
<title>scikit-learn: machine learning in Python</title>
<title>LangChain Python API Reference Documentation.</title>
</head>
<body>
<p>You will be automatically redirected to the <a href="{{ redirect }}">new location of this page</a>.</p>

@ -0,0 +1,14 @@
---
sidebar_class_name: hidden
---
# LangChain Expression Language (LCEL)
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
Any chain constructed this way will automatically have full sync, async, and streaming support.
#### [Interface](/docs/expression_language/interface)
The base interface shared by all LCEL objects.
#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns.
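For example, a minimal sketch of composing a chain with LCEL's pipe syntax (the model choice and prompt are illustrative):
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# Compose prompt -> model -> output parser with the | operator
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"topic": "bears"})            # sync
# await chain.ainvoke({"topic": "bears"})   # async
# for chunk in chain.stream({"topic": "bears"}):  # streaming
#     print(chunk, end="")
```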

@ -51,7 +51,7 @@ Walkthroughs and best-practices for common end-to-end use cases, like:
Learn best practices for developing with LangChain.
### [Ecosystem](/docs/ecosystem/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/ecosystem/dependents).
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/additional_resources/dependents).
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

@ -59,8 +59,8 @@ LangChain provides several objects to easily distinguish between different roles
If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
LangChain exposes a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain exposes has two methods:
LangChain provides a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain provides has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
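A minimal sketch of the two methods, assuming an OpenAI key is configured:
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
chat_model = ChatOpenAI()

# predict: string in, string out
llm.predict("say hi!")

# predict_messages: list of messages in, a message out
chat_model.predict_messages([HumanMessage(content="say hi!")])
```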

@ -1,9 +0,0 @@
# LangChain Expression Language
import DocCardList from "@theme/DocCardList";
LangChain Expression Language is a declarative way to easily compose chains together.
Any chain constructed this way will automatically have full sync, async, and streaming support.
See guides below for how to interact with chains constructed this way as well as cookbook examples.
<DocCardList />

@ -2,11 +2,21 @@
import DocCardList from "@theme/DocCardList";
LangSmith helps you trace and evaluate your language model applications and intelligent agents to help you
[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you
move from prototype to production.
Check out the [interactive walkthrough](/docs/guides/langsmith/walkthrough) below to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/)
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow,
check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:
- Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
- Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
- How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
- How to fine-tune a LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb)).
<DocCardList />

@ -4,4 +4,5 @@ One of the key concerns with using LLMs is that they may generate harmful or une
- [Moderation chain](/docs/guides/safety/moderation): Explicitly check if any output text is harmful and flag it.
- [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide its behavior.
- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle PII and toxicity.

@ -0,0 +1,85 @@
# Removing logical fallacies from model output
Logical fallacies are flawed reasoning or false arguments that can undermine the validity of a model's outputs. Examples include circular reasoning, false
dichotomies, ad hominem attacks, etc. Machine learning models are optimized to perform well on specific metrics like accuracy, perplexity, or loss. However,
optimizing for metrics alone does not guarantee logically sound reasoning.
Language models can learn to exploit flaws in reasoning to generate plausible-sounding but logically invalid arguments. When models rely on fallacies, their outputs become unreliable and untrustworthy, even if they achieve high scores on metrics. Users cannot depend on such outputs. Propagating logical fallacies can spread misinformation, confuse users, and lead to harmful real-world consequences when models are deployed in products or services.
Unlike other quality issues, monitoring and testing specifically for logical flaws is challenging: it requires reasoning about arguments rather than pattern matching.
Therefore, it is crucial that model developers proactively address logical fallacies after optimizing metrics. Specialized techniques like causal modeling, robustness testing, and bias mitigation can help avoid flawed reasoning. Overall, allowing logical flaws to persist makes models less safe and ethical. Eliminating fallacies ensures model outputs remain logically valid and aligned with human reasoning. This maintains user trust and mitigates risks.
```python
# Imports
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from langchain_experimental.fallacy_removal.base import FallacyChain
```
```python
# Example of a model output being returned with a logical fallacy
misleading_prompt = PromptTemplate(
template="""You have to respond by using only logical fallacies inherent in your answer explanations.
Question: {question}
Bad answer:""",
input_variables=["question"],
)
llm = OpenAI(temperature=0)
misleading_chain = LLMChain(llm=llm, prompt=misleading_prompt)
misleading_chain.run(question="How do I know the earth is round?")
```
<CodeOutputBlock lang="python">
```
'The earth is round because my professor said it is, and everyone believes my professor'
```
</CodeOutputBlock>
```python
fallacies = FallacyChain.get_fallacies(["correction"])
fallacy_chain = FallacyChain.from_llm(
chain=misleading_chain,
logical_fallacies=fallacies,
llm=llm,
verbose=True,
)
fallacy_chain.run(question="How do I know the earth is round?")
```
<CodeOutputBlock lang="python">
```
> Entering new FallacyChain chain...
Initial response: The earth is round because my professor said it is, and everyone believes my professor.
Applying correction...
Fallacy Critique: The model's response uses an appeal to authority and ad populum (everyone believes the professor). Fallacy Critique Needed.
Updated response: You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.
> Finished chain.
'You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.'
```
</CodeOutputBlock>

@ -37,11 +37,11 @@ This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.
### [Self ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
### [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
### [ReAct document store](/docs/modules/agents/agent_types/react_docstore.html)
@ -54,4 +54,4 @@ This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).

@ -1,6 +1,6 @@
# Plan and execute
# Plan-and-execute
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
The planning is almost always done by an LLM.

@ -1,13 +1,13 @@
# Custom LLM Agent
# Custom LLM agent
This notebook goes through how to create your own custom LLM agent.
An LLM agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- LLM: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
import Example from "@snippets/modules/agents/how_to/custom_llm_agent.mdx"
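As a sketch of the `OutputParser` piece, a minimal parser that routes the LLM output to either an `AgentAction` or an `AgentFinish` (the "Final Answer:" and "Action:" markers are assumptions about the prompt format):
```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # The agent is done when the prompt's final-answer marker appears
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise parse out the tool name and its input
        match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip().strip('"'),
            log=llm_output,
        )
```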

@ -4,10 +4,10 @@ This notebook goes through how to create your own custom agent based on a chat m
An LLM chat agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- ChatModel: This is the language model that powers the agent
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- `ChatModel`: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
import Example from "@snippets/modules/agents/how_to/custom_llm_chat_agent.mdx"

@ -3,7 +3,7 @@ sidebar_position: 2
---
# Documents
These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These chains all implement a common interface:

@ -3,10 +3,10 @@ sidebar_position: 1
---
# Refine
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.
The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.
There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
![refine_diagram](/img/refine.jpg)
![refine_diagram](/img/refine.jpg)
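As an illustration, a minimal sketch of a Refine summarization chain (the model choice is an assumption; `docs` would be a list of `Document` objects):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

llm = ChatOpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="refine")
# chain.run(docs)  # loops over docs, refining the summary one document at a time
```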

@ -1,11 +1,11 @@
# LLM
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output.
## Get started
import Example from "@snippets/modules/chains/foundational/llm_chain.mdx"
<Example/>
<Example/>
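For reference, a minimal sketch of the pattern (the model and prompt are illustrative):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

# Formats the prompt with the input, calls the LLM, and returns its output
chain.run(product="colorful socks")
```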

@ -4,7 +4,7 @@
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario.. There are two types of sequential chains:
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:
- `SimpleSequentialChain`: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
- `SequentialChain`: A more general form of sequential chains, allowing for multiple inputs/outputs.
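For example, a minimal `SimpleSequentialChain` sketch where the first chain's output feeds the second (the prompts are illustrative):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0)
title_chain = LLMChain(
    llm=llm, prompt=PromptTemplate.from_template("Write a play title about {topic}.")
)
synopsis_chain = LLMChain(
    llm=llm, prompt=PromptTemplate.from_template("Write a one-line synopsis for the play: {title}")
)

# Each step has a single input/output; the title flows into the synopsis prompt
overall_chain = SimpleSequentialChain(chains=[title_chain, synopsis_chain], verbose=True)
overall_chain.run("a robot learning to paint")
```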

@ -30,4 +30,4 @@ Chains allow us to combine multiple components together to create a single, cohe
import GetStarted from "@snippets/modules/chains/get_started.mdx"
<GetStarted/>
<GetStarted/>

@ -11,7 +11,7 @@ Use document loaders to load data from a source as `Document`'s. A `Document` is
and associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text
contents of any web page, or even for loading a transcript of a YouTube video.
Document loaders expose a "load" method for loading data as documents from a configured source. They optionally
Document loaders provide a "load" method for loading data as documents from a configured source. They optionally
implement a "lazy load" as well for lazily loading data into memory.
## Get started

@ -2,8 +2,8 @@
This is the simplest method. This splits based on characters (by default "\n\n") and measures chunk length by number of characters.
1. How the text is split: by single character
2. How the chunk size is measured: by number of characters
1. How the text is split: by single character.
2. How the chunk size is measured: by number of characters.
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/character_text_splitter.mdx"

@ -1,6 +1,6 @@
# Split code
CodeTextSplitter allows you to split your code with multiple language support. Import enum `Language` and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum `Language` and specify the language.
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/code_splitter.mdx"

@ -2,8 +2,8 @@
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `["\n\n", "\n", " ", ""]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
1. How the text is split: by list of characters
2. How the chunk size is measured: by number of characters
1. How the text is split: by list of characters.
2. How the chunk size is measured: by number of characters.
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter.mdx"
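A minimal sketch of splitting a long string, with illustrative chunk settings:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "..."  # any long document text

splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,    # max characters per chunk
    chunk_overlap=20,  # characters shared between neighboring chunks
)
chunks = splitter.split_text(text)
```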

@ -18,9 +18,9 @@ This encompasses several key modules.
**[Document loaders](/docs/modules/data_connection/document_loaders/)**
Load documents from many different sources.
LangChain provides over a 100 different document loaders as well as integrations with other major providers in the space,
LangChain provides over 100 different document loaders as well as integrations with other major providers in the space,
like AirByte and Unstructured.
We provide integrations to load all types of documents (html, PDF, code) from all types of locations (private s3 buckets, public websites).
We provide integrations to load all types of documents (HTML, PDF, code) from all types of locations (private S3 buckets, public websites).
**[Document transformers](/docs/modules/data_connection/document_transformers/)**
@ -32,18 +32,18 @@ LangChain provides several different algorithms for doing this, as well as logic
**[Text embedding models](/docs/modules/data_connection/text_embedding/)**
Another key part of retrieval has become creating embeddings for documents.
Embeddings capture the semantic meaning of text, allowing you to quickly and
Embeddings capture the semantic meaning of the text, allowing you to quickly and
efficiently find other pieces of text that are similar.
LangChain provides integrations with over 25 different embedding providers and methods,
from open-source to proprietary API,
allowing you to choose the one best suited for your needs.
LangChain exposes a standard interface, allowing you to easily swap between models.
LangChain provides a standard interface, allowing you to easily swap between models.
**[Vector stores](/docs/modules/data_connection/vectorstores/)**
With the rise of embeddings, there has emerged a need for databases to support efficient storage and searching of these embeddings.
LangChain provides integrations with over 50 different vectorstores, from open-source local ones to cloud-hosted proprietary ones,
allowing you choose the one best suited for your needs.
allowing you to choose the one best suited for your needs.
LangChain exposes a standard interface, allowing you to easily swap between vector stores.
**[Retrievers](/docs/modules/data_connection/retrievers/)**
@ -55,7 +55,7 @@ However, we have also added a collection of algorithms on top of this to increas
These include:
- [Parent Document Retriever](/docs/modules/data_connection/retrievers/parent_document_retriever): This allows you to create multiple embeddings per parent document, allowing you to look up smaller chunks but return larger context.
- [Self Query Retriever](/docs/modules/data_connection/retrievers/self_query): User questions often contain reference to something that isn't just semantic, but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the *semantic* part of a query from other *metadata filters* present in the query
- [Self Query Retriever](/docs/modules/data_connection/retrievers/self_query): User questions often contain a reference to something that isn't just semantic but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the *semantic* part of a query from other *metadata filters* present in the query.
- [Ensemble Retriever](/docs/modules/data_connection/retrievers/ensemble): Sometimes you may want to retrieve documents from multiple different sources, or using multiple different algorithms. The ensemble retriever allows you to easily do this.
- And more!

@ -5,10 +5,10 @@ One challenge with retrieval is that usually you don't know the specific queries
Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.
To use the Contextual Compression Retriever, you'll need:
- a base Retriever
- a base retriever
- a Document Compressor
The Contextual Compression Retriever passes queries to the base Retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of Documents and shortens it by reducing the contents of Documents or dropping Documents altogether.
The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
![](https://drive.google.com/uc?id=1CtNgWODXZudxAWSRiWgSGEoTNrUFT98v)
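For example, a minimal sketch wrapping an existing vector store retriever with an LLM-based extractor (`vectorstore` is assumed to exist already):
```python
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# Document Compressor: uses an LLM to keep only query-relevant content
compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))

compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),  # assumes an existing vector store
)
docs = compression_retriever.get_relevant_documents("What did the president say?")
```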

@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/retrievers/) for documentation on buil
:::
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) it. Vector stores can be used
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
as the backbone of a retriever, but there are other types of retrievers as well.
## Get started

@ -1,6 +1,6 @@
# Self-querying
A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to it's underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documented, but to also extract filters from the user query on the metadata of stored documents and to execute those filters.
A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents but to also extract filters from the user query on the metadata of stored documents and to execute those filters.
![](https://drive.google.com/uc?id=1OQUN-0MJcDUxmPXofgS7MqReEs720pqS)

@ -8,7 +8,7 @@ The algorithm for scoring them is:
semantic_similarity + (1.0 - decay_rate) ^ hours_passed
```
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh."
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh".
import Example from "@snippets/modules/data_connection/retrievers/how_to/time_weighted_vectorstore.mdx"
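A minimal sketch of constructing the retriever, with an illustrative decay rate (`vectorstore` is assumed to exist):
```python
from langchain.retrievers import TimeWeightedVectorStoreRetriever

# Low decay_rate: memories stay "fresh" longer; high decay_rate: they fade quickly
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, decay_rate=0.01, k=1
)
```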

@ -1,9 +1,9 @@
# Vector store-backed retriever
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the Vector Store class to make it conform to the Retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.
Once you construct a Vector store, it's very easy to construct a retriever. Let's walk through an example.
Once you construct a vector store, it's very easy to construct a retriever. Let's walk through an example.
import Example from "@snippets/modules/data_connection/retrievers/how_to/vectorstore.mdx"
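For instance, a minimal sketch, assuming `db` is an existing vector store:
```python
# Wrap the vector store in the retriever interface
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did the president say about the economy")

# Or use maximum marginal relevance instead of plain similarity search
retriever = db.as_retriever(search_type="mmr")
```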

@ -11,7 +11,7 @@ The Embeddings class is a class designed for interfacing with text embedding mod
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
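A minimal sketch of the two methods, using the OpenAI integration as an illustrative provider:
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()

# Embed multiple documents (the texts to be searched over)
doc_vectors = embeddings_model.embed_documents(["Hi there!", "Oh, hello!"])

# Embed a single query (the search query itself)
query_vector = embeddings_model.embed_query("What was said in the conversation?")
```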
## Get started

@ -16,7 +16,7 @@ for you.
## Get started
This walkthrough showcases basic functionality related to VectorStores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model](/docs/modules/data_connection/text_embedding/) interfaces before diving into this.
This walkthrough showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model](/docs/modules/data_connection/text_embedding/) interfaces before diving into this.
import GetStarted from "@snippets/modules/data_connection/vectorstores/get_started.mdx"

@ -8,10 +8,10 @@ Head to [Integrations](/docs/integrations/memory/) for documentation on built-in
:::
One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
This is a super lightweight wrapper which provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
You may want to use this class directly if you are managing memory outside of a chain.
import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"
<GetStarted/>
<GetStarted/>
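For illustration, a minimal sketch of the convenience methods:
```python
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("what's up?")

history.messages  # [HumanMessage(content='hi!'), AIMessage(content="what's up?")]
```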

@ -32,7 +32,7 @@ Even if these are not all used directly, they need to be stored in some form.
One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages,
from in-memory lists to persistent databases.
- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered
- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered.
### Querying: Data structures and algorithms on top of chat messages
Keeping a list of chat messages is fairly straightforward.

@ -1,6 +1,6 @@
# Conversation Buffer
This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing of messages and then extracts the messages in a variable.
This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing messages and then extracting them into a variable.
We can first extract it as a string.
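A minimal sketch of that usage:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "what's up"})

memory.load_memory_variables({})  # {'history': "Human: hi\nAI: what's up"}
```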

@ -1,6 +1,6 @@
# Conversation Buffer Window
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large.
Let's first explore the basic functionality of this type of memory.

@ -1,6 +1,6 @@
# Entity
Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
Entity memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
Let's first walk through using this functionality.

@ -1,7 +1,7 @@
---
sidebar_position: 2
---
# Memory Types
# Memory types
There are many different types of memory.
Each has their own parameters, their own return types, and is useful in different scenarios.

@ -1,6 +1,6 @@
# Backed by a Vector Store
`VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called.
`VectorStoreRetrieverMemory` stores memories in a vector store and queries the top-K most "salient" docs every time it is called.
This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions.

@ -1,5 +1,5 @@
# Caching
LangChain provides an optional caching layer for Chat Models. This is useful for two reasons:
LangChain provides an optional caching layer for chat models. This is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make to the LLM provider.
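For example, a minimal in-memory cache sketch (the joke prompt is illustrative):
```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = InMemoryCache()
llm = ChatOpenAI()

llm.predict("Tell me a joke")  # first call goes to the provider
llm.predict("Tell me a joke")  # identical repeat is served from the cache
```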

@ -8,8 +8,8 @@ Head to [Integrations](/docs/integrations/chat/) for documentation on built-in i
:::
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
While chat models use language models under the hood, the interface they use is a bit different.
Rather than using a "text in, text out" API, they use an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

@ -1,6 +1,6 @@
# Prompts
Prompts for Chat models are built around messages, instead of just plain text.
Prompts for chat models are built around messages, instead of just plain text.
import Prompts from "@snippets/modules/model_io/models/chat/how_to/prompts.mdx"

@ -1,6 +1,6 @@
# Streaming
Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
Some chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
import StreamingChatModel from "@snippets/modules/model_io/models/chat/how_to/streaming.mdx"

@ -8,16 +8,16 @@ LangChain provides interfaces and integrations for two types of models:
- [LLMs](/docs/modules/model_io/models/llms/): Models that take a text string as input and return a text string
- [Chat models](/docs/modules/model_io/models/chat/): Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message
## LLMs vs Chat Models
## LLMs vs chat models
LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM.
Chat models are often backed by LLMs but tuned specifically for having conversations.
And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string,
And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string,
they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System",
"AI", and "Human"). And they return a ("AI") chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models.
"AI", and "Human"). And they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models.
To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes common
To make it possible to swap LLMs and chat models, both implement the Base Language Model interface. This includes common
methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models),
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for chat models),
but if you're creating an application that should work with different types of models the shared interface can be helpful.

@ -5,7 +5,7 @@ sidebar_position: 2
# Store and reference chat history
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.
To create one, you will need a retriever. In the below example, we will create one from a vector store, which can be created from embeddings.
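For illustration, a minimal sketch of that setup, assuming `vectorstore` already exists:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0), retriever=vectorstore.as_retriever()
)
chat_history = []
result = qa({"question": "What did the author work on?", "chat_history": chat_history})
```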

@ -6,4 +6,4 @@ sidebar_position: 3
Web scraping has historically been a challenging endeavor due to the ever-changing nature of website structures, making it tedious for developers to maintain their scraping scripts. Traditional methods often rely on specific HTML tags and patterns which, when altered, can disrupt data extraction processes.
Enter the LLM-based method for parsing HTML: By leveraging the capabilities of LLMs, and especially OpenAI Functions in LangChain's extraction chain, developers can instruct the model to extract only the desired data in a specified format. This method not only streamlines the extraction process but also significantly reduces the time spent on manual debugging and script modifications. Its adaptability means that even if websites undergo significant design changes, the extraction remains consistent and robust. This level of resilience translates to reduced maintenance efforts, cost savings, and ensures a higher quality of extracted data. Compared to its predecessors, LLM-based approach wins out the web scraping domain by transforming a historically cumbersome task into a more automated and efficient process.
Enter the LLM-based method for parsing HTML: By leveraging the capabilities of LLMs, and especially OpenAI Functions in LangChain's extraction chain, developers can instruct the model to extract only the desired data in a specified format. This method not only streamlines the extraction process but also significantly reduces the time spent on manual debugging and script modifications. Its adaptability means that even if websites undergo significant design changes, the extraction remains consistent and robust. This level of resilience translates to reduced maintenance efforts, cost savings, and ensures a higher quality of extracted data. Compared to its predecessors, the LLM-based approach wins out in the web scraping domain by transforming a historically cumbersome task into a more automated and efficient process.

@ -46,23 +46,23 @@ module.exports = {
},
{
type: "category",
label: "Guides",
label: "LangChain Expression Language",
collapsed: true,
items: [{ type: "autogenerated", dirName: "guides" }],
items: [{ type: "autogenerated", dirName: "expression_language" } ],
link: {
type: 'generated-index',
description: 'Design guides for key parts of the development process',
slug: "guides",
type: 'doc',
id: "expression_language/index"
},
},
{
type: "category",
label: "Ecosystem",
label: "Guides",
collapsed: true,
items: [{ type: "autogenerated", dirName: "ecosystem" }],
items: [{ type: "autogenerated", dirName: "guides" }],
link: {
type: 'generated-index',
slug: "ecosystem",
description: 'Design guides for key parts of the development process',
slug: "guides",
},
},
{
@ -72,7 +72,7 @@ module.exports = {
items: [{ type: "autogenerated", dirName: "additional_resources" }, { type: "link", label: "Gallery", href: "https://github.com/kyrolabs/awesome-langchain" }],
link: {
type: 'generated-index',
slug: "additional_resources",
slug: "additional_resources",
},
},
'community'

@ -2952,6 +2952,46 @@
"source": "/docs/modules/model_io/models/llms/integrations/writer",
"destination": "/docs/integrations/llms/writer"
},
{
"source": "/docs/integrations/llms/amazon_api_gateway_example",
"destination": "/docs/integrations/llms/amazon_api_gateway"
},
{
"source": "/docs/integrations/llms/azureml_endpoint_example",
"destination": "/docs/integrations/llms/azure_ml"
},
{
"source": "/docs/integrations/llms/azure_openai_example",
"destination": "/docs/integrations/llms/azure_openai"
},
{
"source": "/docs/integrations/llms/cerebriumai_example",
"destination": "/docs/integrations/llms/cerebriumai"
},
{
"source": "/docs/integrations/llms/deepinfra_example",
"destination": "/docs/integrations/llms/deepinfra"
},
{
"source": "/docs/integrations/llms/Fireworks",
"destination": "/docs/integrations/llms/fireworks"
},
{
"source": "/docs/integrations/llms/forefrontai_example",
"destination": "/docs/integrations/llms/forefrontai"
},
{
"source": "/docs/integrations/llms/gooseai_example",
"destination": "/docs/integrations/llms/gooseai"
},
{
"source": "/docs/integrations/llms/petals_example",
"destination": "/docs/integrations/llms/petals"
},
{
"source": "/docs/integrations/llms/pipelineai_example",
"destination": "/docs/integrations/llms/pipelineai"
},
{
"source": "/en/latest/modules/prompts.html",
"destination": "/docs/modules/model_io/prompts"
@ -3436,6 +3476,14 @@
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
"destination": "/docs/use_cases/more/graph/graph_kuzu_qa"
},
{
"source": "/docs/use_cases/graph/graph_falkordb_qa",
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_falkordb_qa",
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
},
{
"source": "/docs/use_cases/graph/graph_nebula_qa",
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
@ -3547,6 +3595,18 @@
{
"source": "/en/latest/integrations/:path*",
"destination": "/docs/integrations/providers/:path*"
},
{
"source": "/docs/guides/expression_language(/?)",
"destination": "/docs/expression_language/"
},
{
"source": "/docs/guides/expression_language/:path*",
"destination": "/docs/expression_language/:path*"
},
{
"source": "/docs/ecosystem/dependents",
"destination": "/docs/additional_resources/dependents"
}
]
}

@ -47,7 +47,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
```
## Chat Models
## Chat models
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)

@ -2,7 +2,7 @@
If you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.
Here's a few different tools and functionalities to aid in debugging.
Here are a few different tools and functionalities to aid in debugging.
@ -18,9 +18,9 @@ For anyone building production-grade LLM applications, we highly recommend using
If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run.
There's a number of ways to enable printing at varying degrees of verbosity.
There are a number of ways to enable printing at varying degrees of verbosity.
Let's suppose we have a simple agent and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
Let's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
```python

@ -14,7 +14,7 @@ It also contains instructions for how to deploy this app on the Streamlit platfo
## [Gradio (on Hugging Face)](https://github.com/hwchase17/langchain-gradio-template)
This repo serves as a template for how deploy a LangChain with Gradio.
This repo serves as a template for how to deploy a LangChain app with Gradio.
It implements a chatbot interface, with a "Bring-Your-Own-Token" approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver's [excellent examples](https://huggingface.co/JavaFXpert).
@ -27,7 +27,7 @@ Chainlit [doc](https://docs.chainlit.io/langchain) on the integration with LangC
## [Beam](https://github.com/slai-labs/get-beam/tree/main/examples/langchain-question-answering)
This repo serves as a template for how deploy a LangChain with [Beam](https://beam.cloud).
This repo serves as a template for how to deploy a LangChain app with [Beam](https://beam.cloud).
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
@ -49,7 +49,7 @@ A minimal example of how to deploy LangChain to [Fly.io](https://fly.io/) using
## [Digitalocean App Platform](https://github.com/homanp/digitalocean-langchain)
A minimal example on how to deploy LangChain to DigitalOcean App Platform.
A minimal example of how to deploy LangChain to DigitalOcean App Platform.
## [CI/CD Google Cloud Build + Dockerfile + Serverless Google Cloud Run](https://github.com/g-emarco/github-assistant)
@ -57,7 +57,7 @@ Boilerplate LangChain project on how to deploy to Google Cloud Run using Docker
## [Google Cloud Run](https://github.com/homanp/gcp-langchain)
A minimal example on how to deploy LangChain to Google Cloud Run.
A minimal example of how to deploy LangChain to Google Cloud Run.
## [SteamShip](https://github.com/steamship-core/steamship-langchain/)
@ -82,4 +82,4 @@ These templates serve as examples of how to build, deploy, and share LangChain a
## [AzureML Online Endpoint](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/llm/langchain/1_langchain_basic_deploy.ipynb)
A minimal example of how to deploy LangChain to an Azure Machine Learning Online Endpoint.
A minimal example of how to deploy LangChain to an Azure Machine Learning Online Endpoint.

@ -146,7 +146,7 @@
"source": [
"## Environment\n",
"\n",
"Inference speed is a chllenge when running models locally (see above).\n",
"Inference speed is a challenge when running models locally (see above).\n",
"\n",
"To minimize latency, it is desiable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
"\n",
@ -264,88 +264,19 @@
"metadata": {},
"outputs": [],
"source": [
"pip install llama-cpp-python"
"CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dirclear"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "9d5f94b5",
"execution_count": null,
"id": "a88bf0c8-e989-4bcd-bcb7-4d7757e684f2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"objc[10142]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x2a0c4c208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c28bc208). One of the two will be used. Which one is undefined.\n",
"llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\n",
"llama_model_load_internal: format = ggjt v3 (latest)\n",
"llama_model_load_internal: n_vocab = 32000\n",
"llama_model_load_internal: n_ctx = 2048\n",
"llama_model_load_internal: n_embd = 5120\n",
"llama_model_load_internal: n_mult = 256\n",
"llama_model_load_internal: n_head = 40\n",
"llama_model_load_internal: n_layer = 40\n",
"llama_model_load_internal: n_rot = 128\n",
"llama_model_load_internal: freq_base = 10000.0\n",
"llama_model_load_internal: freq_scale = 1\n",
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
"llama_model_load_internal: n_ff = 13824\n",
"llama_model_load_internal: model size = 13B\n",
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
"llama_model_load_internal: mem required = 8953.71 MB (+ 1608.00 MB per state)\n",
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal'\n",
"ggml_metal_init: loaded kernel_add 0x47774af60\n",
"ggml_metal_init: loaded kernel_mul 0x47774bc00\n",
"ggml_metal_init: loaded kernel_mul_row 0x47774c230\n",
"ggml_metal_init: loaded kernel_scale 0x47774c890\n",
"ggml_metal_init: loaded kernel_silu 0x47774cef0\n",
"ggml_metal_init: loaded kernel_relu 0x10e33e500\n",
"ggml_metal_init: loaded kernel_gelu 0x47774b2f0\n",
"ggml_metal_init: loaded kernel_soft_max 0x47771a580\n",
"ggml_metal_init: loaded kernel_diag_mask_inf 0x47774dab0\n",
"ggml_metal_init: loaded kernel_get_rows_f16 0x47774e110\n",
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x47774e7d0\n",
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x13efd7170\n",
"ggml_metal_init: loaded kernel_get_rows_q2_K 0x13efd73d0\n",
"ggml_metal_init: loaded kernel_get_rows_q3_K 0x13efd7630\n",
"ggml_metal_init: loaded kernel_get_rows_q4_K 0x13efd7890\n",
"ggml_metal_init: loaded kernel_get_rows_q5_K 0x4744c9740\n",
"ggml_metal_init: loaded kernel_get_rows_q6_K 0x4744ca6b0\n",
"ggml_metal_init: loaded kernel_rms_norm 0x4744cb250\n",
"ggml_metal_init: loaded kernel_norm 0x4744cb970\n",
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x10e33f700\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x10e33fcd0\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x4744cc2d0\n",
"ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x4744cc6f0\n",
"ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x4744cd6b0\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x4744cde20\n",
"ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x10e33ff30\n",
"ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x10e340190\n",
"ggml_metal_init: loaded kernel_rope 0x10e3403f0\n",
"ggml_metal_init: loaded kernel_alibi_f32 0x10e340de0\n",
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x10e3416d0\n",
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x10e342080\n",
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x10e342ca0\n",
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
"ggml_metal_init: hasUnifiedMemory = true\n",
"ggml_metal_init: maxTransferRate = built-in GPU\n",
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6986.19 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1032.00 MB, ( 8018.19 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, ( 9620.19 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 426.00 MB, (10046.19 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (10558.19 / 21845.34)\n",
"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | \n"
]
}
],
"outputs": [],
"source": [
"from langchain.llms import LlamaCpp\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\",\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
" n_gpu_layers=1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
@ -448,87 +379,10 @@
},
{
"cell_type": "code",
"execution_count": 46,
"id": "b55a2147",
"execution_count": null,
"id": "915ecd4c-8f6b-4de3-a787-b64cb7c682b4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found model file at /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\n",
"llama_new_context_with_model: max tensor size = 87.89 MB\n",
"llama_new_context_with_model: max tensor size = 87.89 MB\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"llama.cpp: using Metal\n",
"llama.cpp: loading model from /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\n",
"llama_model_load_internal: format = ggjt v3 (latest)\n",
"llama_model_load_internal: n_vocab = 32001\n",
"llama_model_load_internal: n_ctx = 2048\n",
"llama_model_load_internal: n_embd = 5120\n",
"llama_model_load_internal: n_mult = 256\n",
"llama_model_load_internal: n_head = 40\n",
"llama_model_load_internal: n_layer = 40\n",
"llama_model_load_internal: n_rot = 128\n",
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
"llama_model_load_internal: n_ff = 13824\n",
"llama_model_load_internal: n_parts = 1\n",
"llama_model_load_internal: model size = 13B\n",
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
"llama_model_load_internal: mem required = 9031.71 MB (+ 1608.00 MB per state)\n",
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/ggml-metal.metal'\n",
"ggml_metal_init: loaded kernel_add 0x37944d850\n",
"ggml_metal_init: loaded kernel_mul 0x37944f350\n",
"ggml_metal_init: loaded kernel_mul_row 0x37944fdd0\n",
"ggml_metal_init: loaded kernel_scale 0x3794505a0\n",
"ggml_metal_init: loaded kernel_silu 0x379450800\n",
"ggml_metal_init: loaded kernel_relu 0x379450a60\n",
"ggml_metal_init: loaded kernel_gelu 0x379450cc0\n",
"ggml_metal_init: loaded kernel_soft_max 0x379450ff0\n",
"ggml_metal_init: loaded kernel_diag_mask_inf 0x379451250\n",
"ggml_metal_init: loaded kernel_get_rows_f16 0x3794514b0\n",
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x379451710\n",
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x379451970\n",
"ggml_metal_init: loaded kernel_get_rows_q2_k 0x379451bd0\n",
"ggml_metal_init: loaded kernel_get_rows_q3_k 0x379451e30\n",
"ggml_metal_init: loaded kernel_get_rows_q4_k 0x379452090\n",
"ggml_metal_init: loaded kernel_get_rows_q5_k 0x3794522f0\n",
"ggml_metal_init: loaded kernel_get_rows_q6_k 0x379452550\n",
"ggml_metal_init: loaded kernel_rms_norm 0x3794527b0\n",
"ggml_metal_init: loaded kernel_norm 0x379452a10\n",
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x379452c70\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x379452ed0\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x379453130\n",
"ggml_metal_init: loaded kernel_mul_mat_q2_k_f32 0x379453390\n",
"ggml_metal_init: loaded kernel_mul_mat_q3_k_f32 0x3794535f0\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_k_f32 0x379453850\n",
"ggml_metal_init: loaded kernel_mul_mat_q5_k_f32 0x379453ab0\n",
"ggml_metal_init: loaded kernel_mul_mat_q6_k_f32 0x379453d10\n",
"ggml_metal_init: loaded kernel_rope 0x379453f70\n",
"ggml_metal_init: loaded kernel_alibi_f32 0x3794541d0\n",
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x379454430\n",
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x379454690\n",
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x3794548f0\n",
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
"ggml_metal_init: hasUnifiedMemory = true\n",
"ggml_metal_init: maxTransferRate = built-in GPU\n",
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, (17542.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1024.00 MB, (18566.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, (20168.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 512.00 MB, (20680.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (21192.94 / 21845.34)\n",
"ggml_metal_free: deallocating\n"
]
}
],
"outputs": [],
"source": [
"from langchain.llms import GPT4All\n",
"llm = GPT4All(model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\")"
@ -564,89 +418,21 @@
"\n",
"Some LLMs will benefit from specific prompts.\n",
"\n",
"For example, llama2 can use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n",
"For example, LLaMA will use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n",
"\n",
"We can use `ConditionalPromptSelector` to set prompt based on the model type."
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "d082b10a",
"execution_count": null,
"id": "16759b7c-7903-4269-b7b4-f83b313d8091",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\n",
"llama_model_load_internal: format = ggjt v3 (latest)\n",
"llama_model_load_internal: n_vocab = 32000\n",
"llama_model_load_internal: n_ctx = 2048\n",
"llama_model_load_internal: n_embd = 5120\n",
"llama_model_load_internal: n_mult = 256\n",
"llama_model_load_internal: n_head = 40\n",
"llama_model_load_internal: n_layer = 40\n",
"llama_model_load_internal: n_rot = 128\n",
"llama_model_load_internal: freq_base = 10000.0\n",
"llama_model_load_internal: freq_scale = 1\n",
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
"llama_model_load_internal: n_ff = 13824\n",
"llama_model_load_internal: model size = 13B\n",
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
"llama_model_load_internal: mem required = 8953.71 MB (+ 1608.00 MB per state)\n",
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal'\n",
"ggml_metal_init: loaded kernel_add 0x4744d09d0\n",
"ggml_metal_init: loaded kernel_mul 0x3781cb3d0\n",
"ggml_metal_init: loaded kernel_mul_row 0x37813bb60\n",
"ggml_metal_init: loaded kernel_scale 0x474481080\n",
"ggml_metal_init: loaded kernel_silu 0x4744d29f0\n",
"ggml_metal_init: loaded kernel_relu 0x3781254c0\n",
"ggml_metal_init: loaded kernel_gelu 0x47447f280\n",
"ggml_metal_init: loaded kernel_soft_max 0x4744cf470\n",
"ggml_metal_init: loaded kernel_diag_mask_inf 0x4744cf6d0\n",
"ggml_metal_init: loaded kernel_get_rows_f16 0x4744cf930\n",
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x4744cfb90\n",
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x4744cfdf0\n",
"ggml_metal_init: loaded kernel_get_rows_q2_K 0x4744d0050\n",
"ggml_metal_init: loaded kernel_get_rows_q3_K 0x4744ce980\n",
"ggml_metal_init: loaded kernel_get_rows_q4_K 0x4744cebe0\n",
"ggml_metal_init: loaded kernel_get_rows_q5_K 0x4744cee40\n",
"ggml_metal_init: loaded kernel_get_rows_q6_K 0x4744cf0a0\n",
"ggml_metal_init: loaded kernel_rms_norm 0x474482450\n",
"ggml_metal_init: loaded kernel_norm 0x4744826b0\n",
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x474482910\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x474482b70\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x474482dd0\n",
"ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x474483030\n",
"ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x474483290\n",
"ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x4744834f0\n",
"ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x474483750\n",
"ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x4744839b0\n",
"ggml_metal_init: loaded kernel_rope 0x474483c10\n",
"ggml_metal_init: loaded kernel_alibi_f32 0x474483e70\n",
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x4744840d0\n",
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x474484330\n",
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x474484590\n",
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
"ggml_metal_init: hasUnifiedMemory = true\n",
"ggml_metal_init: maxTransferRate = built-in GPU\n",
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6986.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1032.00 MB, ( 8018.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, ( 9620.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 426.00 MB, (10046.94 / 21845.34)\n",
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (10558.94 / 21845.34)\n",
"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | \n"
]
}
],
"outputs": [],
"source": [
"# Set our LLM\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\",\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
" n_gpu_layers=1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
@ -661,7 +447,7 @@
"id": "66656084",
"metadata": {},
"source": [
"Set the associated prompt."
"Set the associated prompt based upon the model version."
]
},
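The selector cells themselves are elided in this hunk; a minimal sketch of the pattern, assuming `LlamaCpp` (imported above) as the local model and with illustrative prompt strings, might look like this:

```python
from langchain.chains.prompt_selector import ConditionalPromptSelector
from langchain.prompts import PromptTemplate

# Plain prompt for models without special tokens.
DEFAULT_PROMPT = PromptTemplate.from_template("Answer the question: {question}")

# LLaMA-style prompt wrapped in its instruction tokens.
LLAMA_PROMPT = PromptTemplate.from_template(
    "<<SYS>> You are a helpful assistant. <</SYS>>\n[INST] {question} [/INST]"
)

QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(
    default_prompt=DEFAULT_PROMPT,
    conditionals=[(lambda llm: isinstance(llm, LlamaCpp), LLAMA_PROMPT)],
)

# Picks LLAMA_PROMPT here, because llm is a LlamaCpp instance.
prompt = QUESTION_PROMPT_SELECTOR.get_prompt(llm)
```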
{
@ -759,6 +545,18 @@
"llm_chain.run({\"question\":question})"
]
},
{
"cell_type": "markdown",
"id": "6e0d37e7-f1d9-4848-bf2c-c22392ee141f",
"metadata": {},
"source": [
"We also can use the LangChain Prompt Hub to fetch and / or store prompts that are model specific.\n",
"\n",
"This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n",
"\n",
"For example, [here](https://smith.langchain.com/hub/rlm/rag-prompt-llama) is a prompt for RAG with LLaMA-specific tokens."
]
},
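A minimal sketch of fetching that prompt, assuming the `langchainhub` package is installed and a LangSmith API key is configured in the environment:

```python
from langchain import hub

# Pull the LLaMA-specific RAG prompt referenced above from the Hub.
rag_prompt_llama = hub.pull("rlm/rag-prompt-llama")
print(rag_prompt_llama)
```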
{
"cell_type": "markdown",
"id": "6ba66260",
@ -770,16 +568,12 @@
"\n",
"For example, here is a guide to [RAG](docs/use_cases/question_answering/how_to/local_retrieval_qa) with local LLMs.\n",
"\n",
"In general, use cases for local model can be driven by at least two factors:\n",
"In general, use cases for local LLMs can be driven by at least two factors:\n",
"\n",
"* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n",
"* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
"\n",
"There are a few approach to support specific use-cases: \n",
"\n",
"* Fine-tuning (e.g., [gpt-llm-trainer](https://github.com/mshumer/gpt-llm-trainer), [Anyscale](https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications)) \n",
"* [Function-calling](https://github.com/MeetKai/functionary/tree/main) for use-cases like extraction or tagging\n",
"\n"
"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open source LLMs."
]
}
],

@ -93,7 +93,7 @@
"metadata": {},
"source": [
"## Usage\n",
"### Using the Context callback within a Chat Model\n",
"### Using the Context callback within a chat model\n",
"\n",
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
"\n",

File diff suppressed because one or more lines are too long

@ -0,0 +1,106 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Bedrock Chat\n",
"\n",
"[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d51edc81",
"metadata": {},
"outputs": [],
"source": [
"%pip install boto3"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import BedrockChat\n",
"from langchain.schema import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = BedrockChat(model_id=\"anthropic.claude-v2\", model_kwargs={\"temperature\":0.1})"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Voici la traduction en français : J'adore programmer.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -102,13 +102,34 @@
"loader.load()"
]
},
{
"cell_type": "markdown",
"source": [
"## Configuring the AWS Boto3 client\n",
"You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing\n",
"named arguments when creating the S3DirectoryLoader.\n",
"This is useful for instance when AWS credentials can't be set as environment variables.\n",
"See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured."
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"loader = S3DirectoryLoader(\"testing-hwc\", aws_access_key_id=\"xxxx\", aws_secret_access_key=\"yyyy\")"
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"id": "885dc280",
"metadata": {},
"outputs": [],
"source": []
"source": [
"loader.load()"
],
"metadata": {}
}
],
"metadata": {

@ -66,12 +66,34 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"id": "93689594",
"metadata": {},
"source": [
"## Configuring the AWS Boto3 client\n",
"You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing\n",
"named arguments when creating the S3DirectoryLoader.\n",
"This is useful for instance when AWS credentials can't be set as environment variables.\n",
"See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured."
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": []
"source": [
"loader = S3FileLoader(\"testing-hwc\", \"fake.docx\", aws_access_key_id=\"xxxx\", aws_secret_access_key=\"yyyy\")"
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"loader.load()"
],
"metadata": {}
}
],
"metadata": {

@ -0,0 +1,138 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Document Intelligence"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure Document Intelligence (formerly known as Azure Forms Recognizer) is machine-learning \n",
"based service that extracts text (including handwriting), tables or key-value-pairs from\n",
"scanned documents or images.\n",
"\n",
"This current implementation of a loader using Document Intelligence is able to incorporate content page-wise and turn it into LangChain documents.\n",
"\n",
"Document Intelligence supports PDF, JPEG, PNG, BMP, or TIFF.\n",
"\n",
"Further documentation is available at https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/?view=doc-intel-3.1.0.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install langchain azure-ai-formrecognizer -q"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 1"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The first example uses a local file which will be sent to Azure Document Intelligence.\n",
"\n",
"First, an instance of a DocumentAnalysisClient is created with endpoint and key for the Azure service. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.ai.formrecognizer import DocumentAnalysisClient\n",
"from azure.core.credentials import AzureKeyCredential\n",
"\n",
"document_analysis_client = DocumentAnalysisClient(\n",
" endpoint=\"<service_endpoint>\", credential=AzureKeyCredential(\"<service_key>\")\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"With the initialized document analysis client, we can proceed to create an instance of the DocumentIntelligenceLoader:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.pdf import DocumentIntelligenceLoader\n",
"loader = DocumentIntelligenceLoader(\n",
" \"<Local_filename>\",\n",
" client=document_analysis_client,\n",
" model=\"<model_name>\") # e.g. prebuilt-document\n",
"\n",
"documents = loader.load()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The output contains each page of the source document as a LangChain document: "
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='...', metadata={'source': '...', 'page': 1})]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"documents"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.5"
},
"vscode": {
"interpreter": {
"hash": "f9f85f796d01129d0dd105a088854619f454435301f6ffec2fea96ecbd9be4ac"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

File diff suppressed because one or more lines are too long

@ -210,7 +210,7 @@
"id": "83ac576b-48c9-4aad-a35e-e978ea32f746",
"metadata": {},
"source": [
"# Extended usage\n",
"## Extended usage\n",
"An external component can manage the complexity of Google Drive : `langchain-googledrive`\n",
"It's compatible with the ̀`langchain.document_loaders.GoogleDriveLoader` and can be used\n",
"in its place.\n",
@ -319,7 +319,7 @@
"id": "cd13d7d1-db7a-498d-ac98-76ccd9ad9019",
"metadata": {},
"source": [
"## Customize the search pattern\n",
"### Customize the search pattern\n",
"\n",
"All parameter compatible with Google [`list()`](https://developers.google.com/drive/api/v3/reference/files/list)\n",
"API can be set.\n",
@ -398,7 +398,7 @@
"id": "375bb465-8f69-407b-94bd-ffa3718ef500",
"metadata": {},
"source": [
"### Modes for GSlide and GSheet\n",
"#### Modes for GSlide and GSheet\n",
"The parameter mode accepts different values:\n",
"\n",
"- \"document\": return the body of each document\n",
@ -469,7 +469,7 @@
"id": "09acb864-e919-4add-9e06-deba6f7f0cd8",
"metadata": {},
"source": [
"## Advanced usage\n",
"### Advanced usage\n",
"All Google File have a 'description' in the metadata. This field can be used to memorize a summary of the document or others indexed tags (See method `lazy_update_description_with_summary()`).\n",
"\n",
"If you use the `mode=\"snippet\"`, only the description will be used for the body. Else, the `metadata['summary']` has the field.\n",
@ -525,7 +525,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

@ -221,9 +221,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}

@ -4,9 +4,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# AzureML Online Endpoint\n",
"# Azure ML\n",
"\n",
"[AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
"[Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
"\n",
"This notebook goes over how to use an LLM hosted on an `AzureML online endpoint`"
]
@ -236,9 +236,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -216,7 +216,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -4,11 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# NIBittensorLLM\n",
"# Bittensor\n",
"\n",
"NIBittensorLLM is developed by [Neural Internet](https://neuralinternet.ai/), powered by [Bittensor](https://bittensor.com/).\n",
">[Bittensor](https://bittensor.com/) is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.\n",
">\n",
">`NIBittensorLLM` is developed by [Neural Internet](https://neuralinternet.ai/), powered by `Bittensor`.\n",
"\n",
"This LLM showcases true potential of decentralized AI by giving you the best response(s) from the Bittensor protocol, which consist of various AI models such as OpenAI, LLaMA2 etc.\n",
">This LLM showcases true potential of decentralized AI by giving you the best response(s) from the `Bittensor protocol`, which consist of various AI models such as `OpenAI`, `LLaMA2` etc.\n",
"\n",
"Users can view their logs, requests, and API keys on the [Validator Endpoint Frontend](https://api.neuralinternet.ai/). However, changes to the configuration are currently prohibited; otherwise, the user's queries will be blocked.\n",
"\n",
@ -157,11 +159,24 @@
}
],
"metadata": {
"language_info": {
"name": "python"
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"orig_nbformat": 4
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

File diff suppressed because one or more lines are too long

@ -206,13 +206,86 @@
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using models deployed on Vertex Model Garden"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = VertexAIModelGarden(\n",
" project=\"YOUR PROJECT\",\n",
" endpoint_id=\"YOUR ENDPOINT_ID\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_oss_chain = prompt | llm\n",
"\n",
"llm_oss_chain.invoke({\"thing\": \"life\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv",
"language": "python",
"name": "python3"
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
@ -224,7 +297,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

@ -1,13 +1,16 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# OctoAI Compute Service\n",
"# OctoAI\n",
"\n",
">[OctoML](https://docs.octoai.cloud/docs) is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The `OctoAI` compute service helps you run, tune, and scale AI applications.\n",
"\n",
"This example goes over how to use LangChain to interact with `OctoAI` [LLM endpoints](https://octoai.cloud/templates)\n",
"## Environment setup\n",
"\n",
"## Setup\n",
"\n",
"To run our example app, there are four simple steps to take:\n",
"\n",
@ -43,6 +46,13 @@
"from langchain import PromptTemplate, LLMChain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "code",
"execution_count": 15,
@ -98,7 +108,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -112,9 +122,8 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "97697b63fdcee0a640856f91cb41326ad601964008c341809e43189d1cab1047"
@ -122,5 +131,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -186,7 +186,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -1,23 +1,25 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# PipelineAI\n",
"\n",
"PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to [several LLM models](https://pipeline.ai).\n",
">[PipelineAI](https://pipeline.ai) allows you to run your ML models at scale in the cloud. It also provides API access to [several LLM models](https://pipeline.ai).\n",
"\n",
"This notebook goes over how to use Langchain with [PipelineAI](https://docs.pipeline.ai/docs)."
"This notebook goes over how to use Langchain with [PipelineAI](https://docs.pipeline.ai/docs).\n",
"\n",
"## PipelineAI example\n",
"\n",
"[This example shows how PipelineAI integrated with LangChain](https://docs.pipeline.ai/docs/langchain) and it is created by PipelineAI."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install pipeline-ai\n",
"## Setup\n",
"The `pipeline-ai` library is required to use the `PipelineAI` API, AKA `Pipeline Cloud`. Install `pipeline-ai` using `pip install pipeline-ai`."
]
},
@ -35,7 +37,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
"## Example\n",
"\n",
"### Imports"
]
},
{
@ -50,11 +54,10 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set the Environment API Key\n",
"### Set the Environment API Key\n",
"Make sure to get your API key from PipelineAI. Check out the [cloud quickstart guide](https://docs.pipeline.ai/docs/cloud-quickstart). You'll be given a 30 day free trial with 10 hours of serverless GPU compute to test different models."
]
},
@ -68,7 +71,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -89,7 +91,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Prompt Template\n",
"### Create a Prompt Template\n",
"We will create a prompt template for Question and Answer."
]
},
@ -110,7 +112,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initiate the LLMChain"
"### Initiate the LLMChain"
]
},
{
@ -126,7 +128,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run the LLMChain\n",
"### Run the LLMChain\n",
"Provide a question and run the LLMChain."
]
},
@ -158,7 +160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

File diff suppressed because one or more lines are too long

@ -1,19 +1,17 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Titan Takeoff\n",
"\n",
"TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. \n",
">`TitanML` helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. \n",
"\n",
"Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/titan-takeoff/getting-started) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more."
">Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/titan-takeoff/getting-started) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -40,12 +38,11 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Choose a Model\n",
"Iris Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n",
"Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n",
"\n",
"Going forward in this demo we will be using the falcon 7B instruct model. This is a good open source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.\n",
"\n",
@ -67,18 +64,37 @@
"source": [
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU required\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)\n",
"```"
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You will then be directed to a login page, where you will need to create an account to proceed.\n",
"After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration\n",
"After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration.\n",
"\n",
"To shutdown the server, run the following command. You will be presented with options on which Takeoff server to shut down, in case you have multiple running servers.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"iris takeoff --shutdown # shutdown the server"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inferencing your model\n",
"To access your LLM, use the TitanTakeoff LLM wrapper:"
]
@ -92,7 +108,7 @@
"from langchain.llms import TitanTakeoff\n",
"\n",
"llm = TitanTakeoff(\n",
" port=8000,\n",
" baseURL=\"http://localhost:8000\",\n",
" generate_max_length=128,\n",
" temperature=1.0\n",
")\n",
@ -103,11 +119,10 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"No parameters are needed by default, but a port can be specified and [generation parameters](https://docs.titanml.co/docs/titan-takeoff/Advanced/generation-parameters) can be supplied.\n",
"No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified and [generation parameters](https://docs.titanml.co/docs/titan-takeoff/Advanced/generation-parameters) can be supplied.\n",
"\n",
"### Streaming\n",
"Streaming is also supported via the streaming flag:"
@ -122,7 +137,7 @@
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.callbacks.manager import CallbackManager\n",
"\n",
"llm = TitanTakeoff(port=8000, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True)\n",
"llm = TitanTakeoff(callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True)\n",
"\n",
"prompt = \"What is the capital of France?\"\n",
"\n",
@ -130,7 +145,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -159,11 +173,24 @@
}
],
"metadata": {
"language_info": {
"name": "python"
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"orig_nbformat": 4
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -28,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 10,
"id": "93ce1811",
"metadata": {},
"outputs": [
@ -71,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 11,
"id": "d15e3302",
"metadata": {},
"outputs": [],
@ -87,18 +87,15 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 12,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
]
"text/plain": "[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n AIMessage(content='whats up?', additional_kwargs={}, example=False),\n HumanMessage(content='hi!', additional_kwargs={}, example=False),\n AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
},
"execution_count": 3,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -119,7 +116,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"id": "225713c8",
"metadata": {},
"outputs": [],
@ -133,6 +130,81 @@
")"
]
},
{
"cell_type": "markdown",
"source": [
"## DynamoDBChatMessageHistory With Different Keys Composite Keys\n",
"The default key for DynamoDBChatMessageHistory is ```{\"SessionId\": self.session_id}```, but you can modify this to match your table design.\n",
"\n",
"### Primary Key Name\n",
"You may modify the primary key by passing in a primary_key_name value in the constructor, resulting in the following:\n",
"```{self.primary_key_name: self.session_id}```\n",
"\n",
"### Composite Keys\n",
"When using an existing DynamoDB table, you may need to modify the key structure from the default of to something including a Sort Key. To do this you may use the ```key``` parameter.\n",
"\n",
"Passing a value for key will override the primary_key parameter, and the resulting key structure will be the passed value.\n"
],
"metadata": {
"collapsed": false
}
},
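Before the composite-key example, here is a minimal sketch of overriding just the primary key name, assuming a table keyed by `UserId` already exists (the table and session values are illustrative):

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# The key becomes {"UserId": "0"} instead of the default {"SessionId": "0"}.
renamed_key_history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="0",
    primary_key_name="UserId",
)
renamed_key_history.add_user_message("hello, renamed primary key!")
```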
{
"cell_type": "code",
"execution_count": 14,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0\n"
]
},
{
"data": {
"text/plain": "[HumanMessage(content='hello, composite dynamodb table!', additional_kwargs={}, example=False)]"
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory\n",
"\n",
"composite_table = dynamodb.create_table(\n",
" TableName=\"CompositeTable\",\n",
" KeySchema=[{\"AttributeName\": \"PK\", \"KeyType\": \"HASH\"}, {\"AttributeName\": \"SK\", \"KeyType\": \"RANGE\"}],\n",
" AttributeDefinitions=[{\"AttributeName\": \"PK\", \"AttributeType\": \"S\"}, {\"AttributeName\": \"SK\", \"AttributeType\": \"S\"}],\n",
" BillingMode=\"PAY_PER_REQUEST\",\n",
")\n",
"\n",
"# Wait until the table exists.\n",
"composite_table.meta.client.get_waiter(\"table_exists\").wait(TableName=\"CompositeTable\")\n",
"\n",
"# Print out some data about the table.\n",
"print(composite_table.item_count)\n",
"\n",
"my_key = {\n",
" \"PK\": \"session_id::0\",\n",
" \"SK\": \"langchain_history\",\n",
"}\n",
"\n",
"composite_key_history = DynamoDBChatMessageHistory(\n",
" table_name=\"CompositeTable\",\n",
" session_id=\"0\",\n",
" endpoint_url=\"http://localhost.localstack.cloud:4566\",\n",
" key=my_key,\n",
")\n",
"\n",
"composite_key_history.add_user_message(\"hello, composite dynamodb table!\")\n",
"\n",
"composite_key_history.messages"
],
"metadata": {
"collapsed": false
}
},
{
"attachments": {},
"cell_type": "markdown",
@ -144,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 15,
"id": "f92d9499",
"metadata": {},
"outputs": [],
@ -165,7 +237,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 16,
"id": "1167eeba",
"metadata": {},
"outputs": [],
@ -184,10 +256,24 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 17,
"id": "fce085c5",
"metadata": {},
"outputs": [],
"outputs": [
{
"ename": "ValidationError",
"evalue": "1 validation error for ChatOpenAI\n__root__\n Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)",
"output_type": "error",
"traceback": [
"\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
"\u001B[0;31mValidationError\u001B[0m Traceback (most recent call last)",
"Cell \u001B[0;32mIn[17], line 1\u001B[0m\n\u001B[0;32m----> 1\u001B[0m llm \u001B[38;5;241m=\u001B[39m \u001B[43mChatOpenAI\u001B[49m\u001B[43m(\u001B[49m\u001B[43mtemperature\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;241;43m0\u001B[39;49m\u001B[43m)\u001B[49m\n\u001B[1;32m 2\u001B[0m agent_chain \u001B[38;5;241m=\u001B[39m initialize_agent(\n\u001B[1;32m 3\u001B[0m tools,\n\u001B[1;32m 4\u001B[0m llm,\n\u001B[0;32m (...)\u001B[0m\n\u001B[1;32m 7\u001B[0m memory\u001B[38;5;241m=\u001B[39mmemory,\n\u001B[1;32m 8\u001B[0m )\n",
"File \u001B[0;32m~/Documents/projects/langchain/libs/langchain/langchain/load/serializable.py:74\u001B[0m, in \u001B[0;36mSerializable.__init__\u001B[0;34m(self, **kwargs)\u001B[0m\n\u001B[1;32m 73\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m__init__\u001B[39m(\u001B[38;5;28mself\u001B[39m, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m---> 74\u001B[0m \u001B[38;5;28;43msuper\u001B[39;49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[38;5;21;43m__init__\u001B[39;49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 75\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_lc_kwargs \u001B[38;5;241m=\u001B[39m kwargs\n",
"File \u001B[0;32m~/Documents/projects/langchain/.venv/lib/python3.9/site-packages/pydantic/main.py:341\u001B[0m, in \u001B[0;36mpydantic.main.BaseModel.__init__\u001B[0;34m()\u001B[0m\n",
"\u001B[0;31mValidationError\u001B[0m: 1 validation error for ChatOpenAI\n__root__\n Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)"
]
}
],
"source": [
"llm = ChatOpenAI(temperature=0)\n",
"agent_chain = initialize_agent(\n",
@ -201,152 +287,42 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "952a3103",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Hello! How can I assist you today?\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello! How can I assist you today?'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"agent_chain.run(input=\"Hello!\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "54c4aaf4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"python_repl\",\n",
" \"action_input\": \"import requests\\nfrom bs4 import BeautifulSoup\\n\\nurl = 'https://en.wikipedia.org/wiki/Twitter'\\nresponse = requests.get(url)\\nsoup = BeautifulSoup(response.content, 'html.parser')\\nowner = soup.find('th', text='Owner').find_next_sibling('td').text.strip()\\nprint(owner)\"\n",
"}\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mX Corp. (2023present)Twitter, Inc. (20062023)\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"X Corp. (2023present)Twitter, Inc. (20062023)\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'X Corp. (2023present)Twitter, Inc. (20062023)'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"agent_chain.run(input=\"Who owns Twitter?\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "f9013118",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Hello Bob! How can I assist you today?\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello Bob! How can I assist you today?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"agent_chain.run(input=\"My name is Bob.\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "405e5315",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Your name is Bob.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Your name is Bob.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"agent_chain.run(input=\"Who am I?\")"
"agent_chain.run(input=\"Who am I?\")\n"
]
}
],

@ -0,0 +1,235 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# SQL Chat Message History\n",
"\n",
"This notebook goes over a **SQLChatMessageHistory** class that allows to store chat history in any database supported by SQLAlchemy.\n",
"\n",
"Please note that to use it with databases other than SQLite, you will need to install the corresponding database driver."
],
"metadata": {
"collapsed": false
},
"id": "f22eab3f84cbeb37"
},
{
"cell_type": "markdown",
"source": [
"### Basic Usage\n",
"\n",
"To use the storage you need to provide only 2 things:\n",
"\n",
"1. Session Id - a unique identifier of the session, like user name, email, chat id etc.\n",
"2. Connection string - a string that specifies the database connection. It will be passed to SQLAlchemy create_engine function."
],
"metadata": {
"collapsed": false
},
"id": "f8f2830ee9ca1e01"
},
{
"cell_type": "code",
"execution_count": 1,
"outputs": [],
"source": [
"from langchain.memory.chat_message_histories import SQLChatMessageHistory\n",
"\n",
"chat_message_history = SQLChatMessageHistory(\n",
"\tsession_id='test_session',\n",
"\tconnection_string='sqlite:///sqlite.db'\n",
")\n",
"\n",
"chat_message_history.add_user_message('Hello')\n",
"chat_message_history.add_ai_message('Hi')"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
}
},
"id": "4576e914a866fb40"
},
{
"cell_type": "code",
"execution_count": 2,
"outputs": [
{
"data": {
"text/plain": "[HumanMessage(content='Hello', additional_kwargs={}, example=False),\n AIMessage(content='Hi', additional_kwargs={}, example=False)]"
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_message_history.messages"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.929396Z",
"start_time": "2023-08-28T10:04:38.915727Z"
}
},
"id": "b476688cbb32ba90"
},
{
"cell_type": "markdown",
"source": [
"### Custom Storage Format\n",
"\n",
"By default, only the session id and message dictionary are stored in the table.\n",
"\n",
"However, sometimes you might want to store some additional information, like message date, author, language etc.\n",
"\n",
"To do that, you can create a custom message converter, by implementing **BaseMessageConverter** interface."
],
"metadata": {
"collapsed": false
},
"id": "2e5337719d5614fd"
},
{
"cell_type": "code",
"execution_count": 3,
"outputs": [],
"source": [
"from datetime import datetime\n",
"from langchain.schema import BaseMessage, HumanMessage, AIMessage, SystemMessage\n",
"from typing import Any\n",
"from sqlalchemy import Column, Integer, Text, DateTime\n",
"from sqlalchemy.orm import declarative_base\n",
"from langchain.memory.chat_message_histories.sql import BaseMessageConverter\n",
"\n",
"\n",
"Base = declarative_base()\n",
"\n",
"\n",
"class CustomMessage(Base):\n",
"\t__tablename__ = 'custom_message_store'\n",
"\n",
"\tid = Column(Integer, primary_key=True)\n",
"\tsession_id = Column(Text)\n",
"\ttype = Column(Text)\n",
"\tcontent = Column(Text)\n",
"\tcreated_at = Column(DateTime)\n",
"\tauthor_email = Column(Text)\n",
"\n",
"\n",
"class CustomMessageConverter(BaseMessageConverter):\n",
"\tdef __init__(self, author_email: str):\n",
"\t\tself.author_email = author_email\n",
"\t\n",
"\tdef from_sql_model(self, sql_message: Any) -> BaseMessage:\n",
"\t\tif sql_message.type == 'human':\n",
"\t\t\treturn HumanMessage(\n",
"\t\t\t\tcontent=sql_message.content,\n",
"\t\t\t)\n",
"\t\telif sql_message.type == 'ai':\n",
"\t\t\treturn AIMessage(\n",
"\t\t\t\tcontent=sql_message.content,\n",
"\t\t\t)\n",
"\t\telif sql_message.type == 'system':\n",
"\t\t\treturn SystemMessage(\n",
"\t\t\t\tcontent=sql_message.content,\n",
"\t\t\t)\n",
"\t\telse:\n",
"\t\t\traise ValueError(f'Unknown message type: {sql_message.type}')\n",
"\t\n",
"\tdef to_sql_model(self, message: BaseMessage, session_id: str) -> Any:\n",
"\t\tnow = datetime.now()\n",
"\t\treturn CustomMessage(\n",
"\t\t\tsession_id=session_id,\n",
"\t\t\ttype=message.type,\n",
"\t\t\tcontent=message.content,\n",
"\t\t\tcreated_at=now,\n",
"\t\t\tauthor_email=self.author_email\n",
"\t\t)\n",
"\t\n",
"\tdef get_sql_model_class(self) -> Any:\n",
"\t\treturn CustomMessage\n",
"\n",
"\n",
"chat_message_history = SQLChatMessageHistory(\n",
"\tsession_id='test_session',\n",
"\tconnection_string='sqlite:///sqlite.db',\n",
"\tcustom_message_converter=CustomMessageConverter(\n",
"\t\tauthor_email='test@example.com'\n",
" )\n",
")\n",
"\n",
"chat_message_history.add_user_message('Hello')\n",
"chat_message_history.add_ai_message('Hi')"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-28T10:04:41.510498Z",
"start_time": "2023-08-28T10:04:41.494912Z"
}
},
"id": "fdfde84c07d071bb"
},
{
"cell_type": "code",
"execution_count": 4,
"outputs": [
{
"data": {
"text/plain": "[HumanMessage(content='Hello', additional_kwargs={}, example=False),\n AIMessage(content='Hi', additional_kwargs={}, example=False)]"
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_message_history.messages"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-28T10:04:43.497990Z",
"start_time": "2023-08-28T10:04:43.492517Z"
}
},
"id": "4a6a54d8a9e2856f"
},
{
"cell_type": "markdown",
"source": [
"You also might want to change the name of session_id column. In this case you'll need to specify `session_id_field_name` parameter."
],
"metadata": {
"collapsed": false
},
"id": "622aded629a1adeb"
}
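A minimal sketch of that rename, assuming the same SQLite setup as above (the column name `conversation_id` is illustrative):

```python
from langchain.memory.chat_message_histories import SQLChatMessageHistory

chat_message_history = SQLChatMessageHistory(
    session_id='test_session',
    connection_string='sqlite:///sqlite.db',
    session_id_field_name='conversation_id',  # column name instead of the default 'session_id'
)
```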
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -11,7 +11,7 @@ pip install python-arango
## Graph QA Chain
Connect your ArangoDB Database with a Chat Model to get insights on your data.
Connect your ArangoDB Database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/more/graph/graph_arangodb_qa.html).

@ -20,7 +20,7 @@ from langchain.llms import NIBittensorLLM
It provides a unified interface for all models:
```python
llm = NIBittensorLLM(system_prompt="Your task is to provide consice and accurate response based on user prompt")
llm = NIBittensorLLM(system_prompt="Your task is to provide concise and accurate response based on user prompt")
print(llm('Write a fibonacci function in python with golden ratio'))
```

@ -4,12 +4,12 @@
Key features of the ddtrace integration for LangChain:
- Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations.
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and Chat Models).
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models).
- Logs: Store prompt completion data for each LangChain operation.
- Dashboard: Combine metrics, logs, and trace data into a single plane to monitor LangChain requests.
- Monitors: Provide alerts in response to spikes in LangChain request latency or error rate.
Note: The ddtrace LangChain integration currently provides tracing for LLMs, Chat Models, Text Embedding Models, Chains, and Vectorstores.
Note: The ddtrace LangChain integration currently provides tracing for LLMs, chat models, Text Embedding Models, Chains, and Vectorstores.
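A minimal sketch of enabling the integration in code, assuming `ddtrace` is installed and a Datadog agent is reachable (`ddtrace-run` is the zero-code alternative):

```python
from ddtrace import patch

# Enable the LangChain integration before using langchain.
patch(langchain=True)

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
llm("What is the capital of France?")  # emitted as a traced LangChain request
```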
## Installation and Setup

@ -1,6 +1,9 @@
# MLflow AI Gateway
>`The MLflow AI Gateway` service is a powerful tool designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
>[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index.html) service is a powerful tool designed to streamline the usage and management of various large
> language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
> that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
> See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
## Installation and Setup
@ -43,6 +46,16 @@ Start the Gateway server:
mlflow gateway start --config-path /path/to/config.yaml
```
## Example provided by `MLflow`
>The `mlflow.langchain` module provides an API for logging and loading `LangChain` models.
> This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
> models in the pyfunc flavor.
See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html).
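A hedged sketch of that logging/loading round trip, assuming `mlflow` with the `langchain` flavor is installed (the chain and prompt are illustrative):

```python
import mlflow
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Translate to French: {text}"),
)

# Log the chain in the langchain flavor, then reload it as a generic pyfunc model.
with mlflow.start_run():
    model_info = mlflow.langchain.log_model(chain, "langchain_model")

loaded_model = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded_model.predict([{"text": "Hello"}]))
```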
## Completions Example
```python

File diff suppressed because one or more lines are too long

@ -59,7 +59,7 @@
},
"outputs": [],
"source": [
"from langchain.retrievers import GoogleDriveRetriever"
"from langchain_googledrive.retrievers import GoogleDriveRetriever"
]
},
{

@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Eden AI is an AI consulting company that was founded to use its resources to empower people and create impactful products that use AI to improve the quality of life for individuals, businesses and societies at large."
"Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)"
]
},
{

@ -0,0 +1,118 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Airbyte Question Answering\n",
"This notebook shows how to do question answering over structured data, in this case using the `AirbyteStripeLoader`.\n",
"\n",
"Vectorstores often have a hard time answering questions that requires computing, grouping and filtering structured data so the high level idea is to use a `pandas` dataframe to help with these types of questions. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Load data from Stripe using Airbyte. user the `record_handler` paramater to return a JSON from the data loader."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import pandas as pd\n",
"\n",
"from langchain.document_loaders.airbyte import AirbyteStripeLoader\n",
"from langchain.chat_models.openai import ChatOpenAI\n",
"from langchain.agents import AgentType, create_pandas_dataframe_agent\n",
"\n",
"stream_name = \"customers\"\n",
"config = {\n",
" \"client_secret\": os.getenv(\"STRIPE_CLIENT_SECRET\"),\n",
" \"account_id\": os.getenv(\"STRIPE_ACCOUNT_D\"),\n",
" \"start_date\": \"2023-01-20T00:00:00Z\",\n",
"}\n",
"\n",
"def handle_record(record: dict, _id: str):\n",
" return record.data\n",
"\n",
"loader = AirbyteStripeLoader(\n",
" config=config,\n",
" record_handler=handle_record,\n",
" stream_name=stream_name,\n",
")\n",
"data = loader.load()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"2. Pass the data to `pandas` dataframe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"3. Pass the dataframe `df` to the `create_pandas_dataframe_agent` and invoke\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent = create_pandas_dataframe_agent(\n",
" ChatOpenAI(temperature=0, model=\"gpt-4\"),\n",
" df,\n",
" verbose=True,\n",
" agent_type=AgentType.OPENAI_FUNCTIONS,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"4. Run the agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"output = agent.run(\"How many rows are there?\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -98,8 +98,8 @@
},
"outputs": [],
"source": [
"from langchain.utilities.google_drive import GoogleDriveAPIWrapper\n",
"from langchain.tools.google_drive.tool import GoogleDriveSearchTool\n",
"from langchain_googledrive.utilities.google_drive import GoogleDriveAPIWrapper\n",
"from langchain_googledrive.tools.google_drive.tool import GoogleDriveSearchTool\n",
"\n",
"# By default, search only in the filename.\n",
"tool = GoogleDriveSearchTool(\n",

@ -0,0 +1,513 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Eden AI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This Jupyter Notebook demonstrates how to use Eden AI tools with an Agent.\n",
"\n",
"Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/ )\n",
"\n",
"\n",
"By including an Edenai tool in the list of tools provided to an Agent, you can grant your Agent the ability to do multiple tasks, such as:\n",
"\n",
"- speech to text\n",
"- text to speech\n",
"- text explicit content detection \n",
"- image explicit content detection\n",
"- object detection\n",
"- OCR invoice parsing\n",
"- OCR ID parsing\n",
"\n",
"\n",
"In this example, we will go through the process of utilizing the Edenai tools to create an Agent that can perform some of the tasks listed above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---------------------------------------------------------------------------\n",
"Accessing the EDENAI's API requires an API key, \n",
"\n",
"which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settings\n",
"\n",
"Once we have a key we'll want to set it as the environment variable ``EDENAI_API_KEY`` or you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI tools, e.g. ``EdenAiTextModerationTool(edenai_api_key=\"...\")``"
]
},
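{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming you prefer to keep the key out of the notebook itself), you can load the key into the environment interactively with the standard library's `getpass`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"# Prompt for the Eden AI API key only if it is not already set in the environment.\n",
"if \"EDENAI_API_KEY\" not in os.environ:\n",
"    os.environ[\"EDENAI_API_KEY\"] = getpass(\"Eden AI API key: \")"
]
},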
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools.edenai import (\n",
" EdenAiSpeechToTextTool,\n",
" EdenAiTextToSpeechTool,\n",
" EdenAiExplicitImageTool,\n",
" EdenAiObjectDetectionTool,\n",
" EdenAiParsingIDTool,\n",
" EdenAiParsingInvoiceTool,\n",
" EdenAiTextModerationTool,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import EdenAI\n",
"from langchain.agents import initialize_agent, AgentType\n",
"\n",
"llm=EdenAI(feature=\"text\",provider=\"openai\", params={\"temperature\" : 0.2,\"max_tokens\" : 250})\n",
"\n",
"tools = [\n",
" EdenAiTextModerationTool(providers=[\"openai\"],language=\"en\"),\n",
" EdenAiObjectDetectionTool(providers=[\"google\",\"api4ai\"]),\n",
" EdenAiTextToSpeechTool(providers=[\"amazon\"],language=\"en\",voice=\"MALE\"),\n",
" EdenAiExplicitImageTool(providers=[\"amazon\",\"google\"]),\n",
" EdenAiSpeechToTextTool(providers=[\"amazon\"]),\n",
" EdenAiParsingIDTool(providers=[\"amazon\",\"klippa\"],language=\"en\"),\n",
" EdenAiParsingInvoiceTool(providers=[\"amazon\",\"google\"],language=\"en\"),\n",
"]\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" return_intermediate_steps=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with text"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to scan the text for explicit content and then convert it to speech\n",
"Action: edenai_explicit_content_detection_text\n",
"Action Input: 'i want to slap you'\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mnsfw_likelihood: 3\n",
"\"sexual\": 1\n",
"\"hate\": 1\n",
"\"harassment\": 1\n",
"\"self-harm\": 1\n",
"\"sexual/minors\": 1\n",
"\"hate/threatening\": 1\n",
"\"violence/graphic\": 1\n",
"\"self-harm/intent\": 1\n",
"\"self-harm/instructions\": 1\n",
"\"harassment/threatening\": 1\n",
"\"violence\": 3\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to convert the text to speech\n",
"Action: edenai_text_to_speech\n",
"Action Input: 'i want to slap you'\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mhttps://d14uq1pz7dzsdq.cloudfront.net/0c825002-b4ef-4165-afa3-a140a5b25c82_.mp3?Expires=1693318351&Signature=V9vjgFe8pV5rnH-B2EUr8UshTEA3I0Xv1v0YwVEAq8w7G5pgex07dZ0M6h6fXusk7G3SW~sXs4IJxnD~DnIDp1XorvzMA2QVMJb8CD90EYvUWx9zfFa3tIegGapg~NC8wEGualccOehC~cSDhiQWrwAjDqPmq2olXnUVOfyl76pKNNR9Sm2xlljlrJcLCClBee2r5yCFEwFI-tnXX1lV2DGc5PNB66Lqrr0Fpe2trVJj2k8cLduIb8dbtqLPNIDCsV0N4QT10utZmhZcPpcSIBsdomw1Os1IjdG4nA8ZTIddAcLMCWJznttzl66vHPk26rjDpG5doMTTsPEz8ZKILQ__&Key-Pair-Id=K1F55BTI9AHGIK\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The text contains explicit content of violence with a likelihood of 3. The audio file of the text can be found at https://d14uq1pz7dzsdq.cloudfront.net/0c825002-b4ef-4165-afa3-a140a5b25c82_.mp3?Expires=1693318351&Signature=V9vjgFe8pV5rnH-B2EUr8UshTEA3I0Xv1v0YwVEAq8w7G5pgex07dZ0M6h6fXusk7G3SW~sXs4IJxnD~DnIDp1XorvzMA2QVMJb8CD90EYvUWx9zfFa3tIegGapg~NC8wEGualccOehC~cSDhiQWrwAjDqPmq2olXnUVOfyl76pKNNR9Sm2xlljlrJcLCClBee2r5yCFEwFI-tn\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"input_ = \"\"\"i have this text : 'i want to slap you' \n",
"first : i want to know if this text contains explicit content or not .\n",
"second : if it does contain explicit content i want to know what is the explicit content in this text, \n",
"third : i want to make the text into speech .\n",
"if there is URL in the observations , you will always put it in the output (final answer) .\n",
"\"\"\"\n",
"result = agent_chain(input_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"you can have more details of the execution by printing the result "
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The text contains explicit content of violence with a likelihood of 3. The audio file of the text can be found at https://d14uq1pz7dzsdq.cloudfront.net/0c825002-b4ef-4165-afa3-a140a5b25c82_.mp3?Expires=1693318351&Signature=V9vjgFe8pV5rnH-B2EUr8UshTEA3I0Xv1v0YwVEAq8w7G5pgex07dZ0M6h6fXusk7G3SW~sXs4IJxnD~DnIDp1XorvzMA2QVMJb8CD90EYvUWx9zfFa3tIegGapg~NC8wEGualccOehC~cSDhiQWrwAjDqPmq2olXnUVOfyl76pKNNR9Sm2xlljlrJcLCClBee2r5yCFEwFI-tn'"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['output']"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": true
},
"outputs": [
{
"data": {
"text/plain": [
"{'input': \" i have this text : 'i want to slap you' \\n first : i want to know if this text contains explicit content or not .\\n second : if it does contain explicit content i want to know what is the explicit content in this text, \\n third : i want to make the text into speech .\\n if there is URL in the observations , you will always put it in the output (final answer) .\\n\\n \",\n",
" 'output': 'The text contains explicit content of violence with a likelihood of 3. The audio file of the text can be found at https://d14uq1pz7dzsdq.cloudfront.net/0c825002-b4ef-4165-afa3-a140a5b25c82_.mp3?Expires=1693318351&Signature=V9vjgFe8pV5rnH-B2EUr8UshTEA3I0Xv1v0YwVEAq8w7G5pgex07dZ0M6h6fXusk7G3SW~sXs4IJxnD~DnIDp1XorvzMA2QVMJb8CD90EYvUWx9zfFa3tIegGapg~NC8wEGualccOehC~cSDhiQWrwAjDqPmq2olXnUVOfyl76pKNNR9Sm2xlljlrJcLCClBee2r5yCFEwFI-tn',\n",
" 'intermediate_steps': [(AgentAction(tool='edenai_explicit_content_detection_text', tool_input=\"'i want to slap you'\", log=\" I need to scan the text for explicit content and then convert it to speech\\nAction: edenai_explicit_content_detection_text\\nAction Input: 'i want to slap you'\"),\n",
" 'nsfw_likelihood: 3\\n\"sexual\": 1\\n\"hate\": 1\\n\"harassment\": 1\\n\"self-harm\": 1\\n\"sexual/minors\": 1\\n\"hate/threatening\": 1\\n\"violence/graphic\": 1\\n\"self-harm/intent\": 1\\n\"self-harm/instructions\": 1\\n\"harassment/threatening\": 1\\n\"violence\": 3'),\n",
" (AgentAction(tool='edenai_text_to_speech', tool_input=\"'i want to slap you'\", log=\" I now need to convert the text to speech\\nAction: edenai_text_to_speech\\nAction Input: 'i want to slap you'\"),\n",
" 'https://d14uq1pz7dzsdq.cloudfront.net/0c825002-b4ef-4165-afa3-a140a5b25c82_.mp3?Expires=1693318351&Signature=V9vjgFe8pV5rnH-B2EUr8UshTEA3I0Xv1v0YwVEAq8w7G5pgex07dZ0M6h6fXusk7G3SW~sXs4IJxnD~DnIDp1XorvzMA2QVMJb8CD90EYvUWx9zfFa3tIegGapg~NC8wEGualccOehC~cSDhiQWrwAjDqPmq2olXnUVOfyl76pKNNR9Sm2xlljlrJcLCClBee2r5yCFEwFI-tnXX1lV2DGc5PNB66Lqrr0Fpe2trVJj2k8cLduIb8dbtqLPNIDCsV0N4QT10utZmhZcPpcSIBsdomw1Os1IjdG4nA8ZTIddAcLMCWJznttzl66vHPk26rjDpG5doMTTsPEz8ZKILQ__&Key-Pair-Id=K1F55BTI9AHGIK')]}"
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with images"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to determine if the image contains objects, if any of them are harmful, and then convert the text to speech.\n",
"Action: edenai_object_detection\n",
"Action Input: https://static.javatpoint.com/images/objects.jpg\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mApple - Confidence 0.94003654\n",
"Apple - Confidence 0.94003654\n",
"Apple - Confidence 0.94003654\n",
"Backpack - Confidence 0.7481894\n",
"Backpack - Confidence 0.7481894\n",
"Backpack - Confidence 0.7481894\n",
"Luggage & bags - Confidence 0.70691586\n",
"Luggage & bags - Confidence 0.70691586\n",
"Luggage & bags - Confidence 0.70691586\n",
"Container - Confidence 0.654727\n",
"Container - Confidence 0.654727\n",
"Container - Confidence 0.654727\n",
"Luggage & bags - Confidence 0.5871518\n",
"Luggage & bags - Confidence 0.5871518\n",
"Luggage & bags - Confidence 0.5871518\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to check if any of the objects are harmful.\n",
"Action: edenai_explicit_content_detection_text\n",
"Action Input: Apple, Backpack, Luggage & bags, Container\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mnsfw_likelihood: 2\n",
"\"sexually explicit\": 1\n",
"\"sexually suggestive\": 2\n",
"\"offensive\": 1\n",
"nsfw_likelihood: 1\n",
"\"sexual\": 1\n",
"\"hate\": 1\n",
"\"harassment\": 1\n",
"\"self-harm\": 1\n",
"\"sexual/minors\": 1\n",
"\"hate/threatening\": 1\n",
"\"violence/graphic\": 1\n",
"\"self-harm/intent\": 1\n",
"\"self-harm/instructions\": 1\n",
"\"harassment/threatening\": 1\n",
"\"violence\": 1\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m None of the objects are harmful.\n",
"Action: edenai_text_to_speech\n",
"Action Input: 'this item is safe'\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mhttps://d14uq1pz7dzsdq.cloudfront.net/0546db8b-528e-4b63-9a69-d14d43ad1566_.mp3?Expires=1693316753&Signature=N0KZeK9I-1s7wTgiQOAwH7LFlltwyonSJcDnkdnr8JIJmbgSw6fo6RTxWl~VvD2Hg6igJqxtJFFWyrBmmx-f9wWLw3bZSnuMxkhTRqLX9aUA9N-vPJGiRZV5BFredaOm8pwfo8TcXhVjw08iSxv8GSuyZEIwZkiq4PzdiyVTnKKji6eytV0CrnHrTs~eXZkSnOdD2Fu0ECaKvFHlsF4IDLI8efRvituSk0X3ygdec4HQojl5vmBXJzi1TuhKWOX8UxeQle8pdjjqUPSJ9thTHpucdPy6UbhZOH0C9rbtLrCfvK5rzrT4D~gKy9woICzG34tKRxNxHYVVUPqx2BiInA__&Key-Pair-Id=K1F55BTI9AHGIK\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: The image contains objects such as Apple, Backpack, Luggage & bags, and Container. None of them are harmful. The text 'this item is safe' can be found in the audio file at https://d14uq1pz7dzsdq.cloudfront.net/0546db8b-528e-4b63-9a69-d14d43ad1566_.mp3?Expires=1693316753&Signature=N0KZeK9I-1s7wTgiQOAwH7LFlltwyonSJcDnkdnr8JIJmbgSw6fo6RTxWl~VvD2Hg6igJqxtJFFWyrBmmx-f9wWLw3bZSnuMxkhTRqLX9aUA9N-vPJGiRZV5BFredaOm8pwfo8TcXhVjw08iSxv8GSuyZEIwZkiq4PzdiyVTnKKji6eyt\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"input_ = \"\"\"i have this url of an image : \"https://static.javatpoint.com/images/objects.jpg\"\n",
"first : i want to know if the image contain objects .\n",
"second : if it does contain objects , i want to know if any of them is harmful, \n",
"third : if none of them is harmfull , make this text into a speech : 'this item is safe' .\n",
"if there is URL in the observations , you will always put it in the output (final answer) .\n",
"\"\"\"\n",
"result = agent_chain(input_)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"The image contains objects such as Apple, Backpack, Luggage & bags, and Container. None of them are harmful. The text 'this item is safe' can be found in the audio file at https://d14uq1pz7dzsdq.cloudfront.net/0546db8b-528e-4b63-9a69-d14d43ad1566_.mp3?Expires=1693316753&Signature=N0KZeK9I-1s7wTgiQOAwH7LFlltwyonSJcDnkdnr8JIJmbgSw6fo6RTxWl~VvD2Hg6igJqxtJFFWyrBmmx-f9wWLw3bZSnuMxkhTRqLX9aUA9N-vPJGiRZV5BFredaOm8pwfo8TcXhVjw08iSxv8GSuyZEIwZkiq4PzdiyVTnKKji6eyt\""
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['output']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"you can have more details of the execution by printing the result "
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': ' i have this url of an image : \"https://static.javatpoint.com/images/objects.jpg\"\\n first : i want to know if the image contain objects .\\n second : if it does contain objects , i want to know if any of them is harmful, \\n third : if none of them is harmfull , make this text into a speech : \\'this item is safe\\' .\\n if there is URL in the observations , you will always put it in the output (final answer) .\\n ',\n",
" 'output': \"The image contains objects such as Apple, Backpack, Luggage & bags, and Container. None of them are harmful. The text 'this item is safe' can be found in the audio file at https://d14uq1pz7dzsdq.cloudfront.net/0546db8b-528e-4b63-9a69-d14d43ad1566_.mp3?Expires=1693316753&Signature=N0KZeK9I-1s7wTgiQOAwH7LFlltwyonSJcDnkdnr8JIJmbgSw6fo6RTxWl~VvD2Hg6igJqxtJFFWyrBmmx-f9wWLw3bZSnuMxkhTRqLX9aUA9N-vPJGiRZV5BFredaOm8pwfo8TcXhVjw08iSxv8GSuyZEIwZkiq4PzdiyVTnKKji6eyt\",\n",
" 'intermediate_steps': [(AgentAction(tool='edenai_object_detection', tool_input='https://static.javatpoint.com/images/objects.jpg', log=' I need to determine if the image contains objects, if any of them are harmful, and then convert the text to speech.\\nAction: edenai_object_detection\\nAction Input: https://static.javatpoint.com/images/objects.jpg'),\n",
" 'Apple - Confidence 0.94003654\\nApple - Confidence 0.94003654\\nApple - Confidence 0.94003654\\nBackpack - Confidence 0.7481894\\nBackpack - Confidence 0.7481894\\nBackpack - Confidence 0.7481894\\nLuggage & bags - Confidence 0.70691586\\nLuggage & bags - Confidence 0.70691586\\nLuggage & bags - Confidence 0.70691586\\nContainer - Confidence 0.654727\\nContainer - Confidence 0.654727\\nContainer - Confidence 0.654727\\nLuggage & bags - Confidence 0.5871518\\nLuggage & bags - Confidence 0.5871518\\nLuggage & bags - Confidence 0.5871518'),\n",
" (AgentAction(tool='edenai_explicit_content_detection_text', tool_input='Apple, Backpack, Luggage & bags, Container', log=' I need to check if any of the objects are harmful.\\nAction: edenai_explicit_content_detection_text\\nAction Input: Apple, Backpack, Luggage & bags, Container'),\n",
" 'nsfw_likelihood: 2\\n\"sexually explicit\": 1\\n\"sexually suggestive\": 2\\n\"offensive\": 1\\nnsfw_likelihood: 1\\n\"sexual\": 1\\n\"hate\": 1\\n\"harassment\": 1\\n\"self-harm\": 1\\n\"sexual/minors\": 1\\n\"hate/threatening\": 1\\n\"violence/graphic\": 1\\n\"self-harm/intent\": 1\\n\"self-harm/instructions\": 1\\n\"harassment/threatening\": 1\\n\"violence\": 1'),\n",
" (AgentAction(tool='edenai_text_to_speech', tool_input=\"'this item is safe'\", log=\" None of the objects are harmful.\\nAction: edenai_text_to_speech\\nAction Input: 'this item is safe'\"),\n",
" 'https://d14uq1pz7dzsdq.cloudfront.net/0546db8b-528e-4b63-9a69-d14d43ad1566_.mp3?Expires=1693316753&Signature=N0KZeK9I-1s7wTgiQOAwH7LFlltwyonSJcDnkdnr8JIJmbgSw6fo6RTxWl~VvD2Hg6igJqxtJFFWyrBmmx-f9wWLw3bZSnuMxkhTRqLX9aUA9N-vPJGiRZV5BFredaOm8pwfo8TcXhVjw08iSxv8GSuyZEIwZkiq4PzdiyVTnKKji6eytV0CrnHrTs~eXZkSnOdD2Fu0ECaKvFHlsF4IDLI8efRvituSk0X3ygdec4HQojl5vmBXJzi1TuhKWOX8UxeQle8pdjjqUPSJ9thTHpucdPy6UbhZOH0C9rbtLrCfvK5rzrT4D~gKy9woICzG34tKRxNxHYVVUPqx2BiInA__&Key-Pair-Id=K1F55BTI9AHGIK')]}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with OCR images"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to extract the information from the ID and then convert it to text and then to speech\n",
"Action: edenai_identity_parsing\n",
"Action Input: \"https://www.citizencard.com/images/citizencard-uk-id-card-2023.jpg\"\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mlast_name : \n",
" value : ANGELA\n",
"given_names : \n",
" value : GREENE\n",
"birth_place : \n",
"birth_date : \n",
" value : 2000-11-09\n",
"issuance_date : \n",
"expire_date : \n",
"document_id : \n",
"issuing_state : \n",
"address : \n",
"age : \n",
"country : \n",
"document_type : \n",
" value : DRIVER LICENSE FRONT\n",
"gender : \n",
"image_id : \n",
"image_signature : \n",
"mrz : \n",
"nationality : \u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to convert the information to text and then to speech\n",
"Action: edenai_text_to_speech\n",
"Action Input: \"Welcome Angela Greene!\"\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mhttps://d14uq1pz7dzsdq.cloudfront.net/0c494819-0bbc-4433-bfa4-6e99bd9747ea_.mp3?Expires=1693316851&Signature=YcMoVQgPuIMEOuSpFuvhkFM8JoBMSoGMcZb7MVWdqw7JEf5~67q9dEI90o5todE5mYXB5zSYoib6rGrmfBl4Rn5~yqDwZ~Tmc24K75zpQZIEyt5~ZSnHuXy4IFWGmlIVuGYVGMGKxTGNeCRNUXDhT6TXGZlr4mwa79Ei1YT7KcNyc1dsTrYB96LphnsqOERx4X9J9XriSwxn70X8oUPFfQmLcitr-syDhiwd9Wdpg6J5yHAJjf657u7Z1lFTBMoXGBuw1VYmyno-3TAiPeUcVlQXPueJ-ymZXmwaITmGOfH7HipZngZBziofRAFdhMYbIjYhegu5jS7TxHwRuox32A__&Key-Pair-Id=K1F55BTI9AHGIK\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: https://d14uq1pz7dzsdq.cloudfront.net/0c494819-0bbc-4433-bfa4-6e99bd9747ea_.mp3?Expires=1693316851&Signature=YcMoVQgPuIMEOuSpFuvhkFM8JoBMSoGMcZb7MVWdqw7JEf5~67q9dEI90o5todE5mYXB5zSYoib6rGrmfBl4Rn5~yqDwZ~Tmc24K75zpQZIEyt5~ZSnHuXy4IFWGmlIVuGYVGMGKxTGNeCRNUXDhT6TXGZlr4mwa79Ei1YT7KcNyc1dsTrYB96LphnsqOERx4X9J9XriSwxn70X8oUPFfQmLcitr-syDhiwd9Wdpg6J5y\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"input_ = \"\"\"i have this url of an id: \"https://www.citizencard.com/images/citizencard-uk-id-card-2023.jpg\"\n",
"i want to extract the information in it.\n",
"create a text welcoming the person by his name and make it into speech .\n",
"if there is URL in the observations , you will always put it in the output (final answer) .\n",
"\"\"\"\n",
"result = agent_chain(input_)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'https://d14uq1pz7dzsdq.cloudfront.net/0c494819-0bbc-4433-bfa4-6e99bd9747ea_.mp3?Expires=1693316851&Signature=YcMoVQgPuIMEOuSpFuvhkFM8JoBMSoGMcZb7MVWdqw7JEf5~67q9dEI90o5todE5mYXB5zSYoib6rGrmfBl4Rn5~yqDwZ~Tmc24K75zpQZIEyt5~ZSnHuXy4IFWGmlIVuGYVGMGKxTGNeCRNUXDhT6TXGZlr4mwa79Ei1YT7KcNyc1dsTrYB96LphnsqOERx4X9J9XriSwxn70X8oUPFfQmLcitr-syDhiwd9Wdpg6J5y'"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['output']"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to extract information from the invoice document\n",
"Action: edenai_invoice_parsing\n",
"Action Input: \"https://app.edenai.run/assets/img/data_1.72e3bdcc.png\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mcustomer_information : \n",
" customer_name : Damita J Goldsmith\n",
" customer_address : 201 Stan Fey Dr,Upper Marlboro, MD 20774\n",
" customer_shipping_address : 201 Stan Fey Drive,Upper Marlboro\n",
"merchant_information : \n",
" merchant_name : SNG Engineering Inc\n",
" merchant_address : 344 Main St #200 Gaithersburg, MD 20878 USA\n",
" merchant_phone : +1 301 548 0055\n",
"invoice_number : 014-03\n",
"taxes : \n",
"payment_term : on receipt of service\n",
"date : 2003-01-20\n",
"po_number : \n",
"locale : \n",
"bank_informations : \n",
"item_lines : \n",
" description : Field inspection of construction on 1/19/2003 deficiencies in house,construction, Garage drive way & legal support to Attorney to\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the answer to the questions\n",
"Final Answer: The customer is Damita J Goldsmith and the company name is SNG Engineering Inc.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"input_ = \"\"\"i have this url of an invoice document: \"https://app.edenai.run/assets/img/data_1.72e3bdcc.png\"\n",
"i want to extract the information in it.\n",
"and answer these questions :\n",
"who is the customer ?\n",
"what is the company name ? \n",
"\"\"\"\n",
"result=agent_chain()"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The customer is Damita J Goldsmith and the company name is SNG Engineering Inc.'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['output']"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -0,0 +1,215 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Drive\n",
"\n",
"This notebook walks through connecting a LangChain to the `Google Drive API`.\n",
"\n",
"## Prerequisites\n",
"\n",
"1. Create a Google Cloud project or use an existing project\n",
"1. Enable the [Google Drive API](https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com)\n",
"1. [Authorize credentials for desktop app](https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application)\n",
"1. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`\n",
"\n",
"## Instructions for retrieving your Google Docs data\n",
"By default, the `GoogleDriveTools` and `GoogleDriveWrapper` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `GOOGLE_ACCOUNT_FILE` environment variable. \n",
"The location of `token.json` use the same directory (or use the parameter `token_path`). Note that `token.json` will be created automatically the first time you use the tool.\n",
"\n",
"`GoogleDriveSearchTool` can retrieve a selection of files with some requests. \n",
"\n",
"By default, If you use a `folder_id`, all the files inside this folder can be retrieved to `Document`, if the name match the query.\n"
]
},
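{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch (the path below is purely illustrative) of pointing the tools at a credentials file in a non-default location via the `GOOGLE_ACCOUNT_FILE` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Illustrative path: override the default ~/.credentials/credentials.json location.\n",
"os.environ[\"GOOGLE_ACCOUNT_FILE\"] = os.path.expanduser(\"~/my-project/credentials.json\")"
]
},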
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can obtain your folder and document id from the URL:\n",
"* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",
"* Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is `\"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"`\n",
"\n",
"The special value `root` is for your personal home."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"folder_id=\"root\"\n",
"#folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, all files with these mime-type can be converted to `Document`.\n",
"- text/text\n",
"- text/plain\n",
"- text/html\n",
"- text/csv\n",
"- text/markdown\n",
"- image/png\n",
"- image/jpeg\n",
"- application/epub+zip\n",
"- application/pdf\n",
"- application/rtf\n",
"- application/vnd.google-apps.document (GDoc)\n",
"- application/vnd.google-apps.presentation (GSlide)\n",
"- application/vnd.google-apps.spreadsheet (GSheet)\n",
"- application/vnd.google.colaboratory (Notebook colab)\n",
"- application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)\n",
"- application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)\n",
"\n",
"It's possible to update or customize this. See the documentation of `GoogleDriveAPIWrapper`.\n",
"\n",
"But, the corresponding packages must installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install unstructured"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.utilities.google_drive import GoogleDriveAPIWrapper\n",
"from langchain.tools.google_drive.tool import GoogleDriveSearchTool\n",
"\n",
"# By default, search only in the filename.\n",
"tool = GoogleDriveSearchTool(\n",
" api_wrapper=GoogleDriveAPIWrapper(\n",
" folder_id=folder_id,\n",
" num_results=2,\n",
" template=\"gdrive-query-in-folder\", # Search in the body of documents\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tool.run(\"machine learning\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tool.description"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"tools = load_tools([\"google-drive-search\"],\n",
" folder_id=folder_id,\n",
" template=\"gdrive-query-in-folder\",\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an Agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain import OpenAI\n",
"from langchain.agents import initialize_agent, AgentType\n",
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(\n",
" tools=tools,\n",
" llm=llm,\n",
" agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent.run(\n",
" \"Search in google drive, who is 'Yann LeCun' ?\"\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "e6860c2d",
"metadata": {
"pycharm": {
@ -29,7 +29,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "dadbcfcd",
"metadata": {},
"outputs": [],
@ -119,7 +119,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 5,
"id": "e1c39a0f",
"metadata": {},
"outputs": [],
@ -129,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 6,
"id": "900dd6cb",
"metadata": {},
"outputs": [],
@ -141,7 +141,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 7,
"id": "342ee8ec",
"metadata": {},
"outputs": [
@ -155,9 +155,9 @@
"\u001b[32;1m\u001b[1;3m I need to find out what the current weather is in Pomfret.\n",
"Action: Search\n",
"Action Input: \"weather in Pomfret\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mPartly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 ...\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m{'type': 'weather_result', 'temperature': '69', 'unit': 'Fahrenheit', 'precipitation': '2%', 'humidity': '90%', 'wind': '1 mph', 'location': 'Pomfret, CT', 'date': 'Sunday 9:00 PM', 'weather': 'Clear'}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the current weather in Pomfret.\n",
"Final Answer: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.\u001b[0m\n",
"Final Answer: The current weather in Pomfret is 69 degrees Fahrenheit, 2% precipitation, 90% humidity, and 1 mph wind. It is currently clear.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@ -165,10 +165,10 @@
{
"data": {
"text/plain": [
"'Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.'"
"'The current weather in Pomfret is 69 degrees Fahrenheit, 2% precipitation, 90% humidity, and 1 mph wind. It is currently clear.'"
]
},
"execution_count": 11,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@ -351,7 +351,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.11"
"version": "3.11.3"
},
"vscode": {
"interpreter": {

@ -0,0 +1,248 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "71406a01-e5e5-4fe0-a1fa-49216871779e",
"metadata": {},
"source": [
"# Yahoo Finance News\n",
"\n",
"This notebook goes over how to use the `yahoo_finance_news` tool with an agent. \n",
"\n",
"\n",
"## Setting up\n",
"\n",
"First, you need to install `yfinance` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38717a85-2c3c-4452-a1c7-1ed4dea3da86",
"metadata": {},
"outputs": [],
"source": [
"!pip install yfinance"
]
},
{
"cell_type": "markdown",
"id": "4527b5f9-b496-45d8-8147-7a4ebb89734b",
"metadata": {},
"source": [
"## Example with Chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d137dd6c-d3d3-4813-af65-59eaaa6b3d76",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"...\""
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "fc42f766-9ce6-4ba3-be6c-5ba8a345b0d3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import initialize_agent, AgentType\n",
"from langchain.tools.yahoo_finance_news import YahooFinanceNewsTool\n",
" \n",
"\n",
"llm = ChatOpenAI(temperature=0.0)\n",
"tools = [YahooFinanceNewsTool()]\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "3d1614b4-508e-4689-84b1-2a387f80aeb1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI should check the latest financial news about Microsoft stocks.\n",
"Action: yahoo_finance_news\n",
"Action Input: MSFT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMicrosoft (MSFT) Gains But Lags Market: What You Should Know\n",
"In the latest trading session, Microsoft (MSFT) closed at $328.79, marking a +0.12% move from the previous day.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI have the latest information on Microsoft stocks.\n",
"Final Answer: Microsoft (MSFT) closed at $328.79, with a +0.12% move from the previous day.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Microsoft (MSFT) closed at $328.79, with a +0.12% move from the previous day.'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(\n",
" \"What happens today with Microsoft stocks?\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "c899b64d-86a5-452c-b576-e94f485c27ea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI should compare the current sentiment of Microsoft and Nvidia.\n",
"Action: yahoo_finance_news\n",
"Action Input: MSFT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMicrosoft (MSFT) Gains But Lags Market: What You Should Know\n",
"In the latest trading session, Microsoft (MSFT) closed at $328.79, marking a +0.12% move from the previous day.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to find the current sentiment of Nvidia as well.\n",
"Action: yahoo_finance_news\n",
"Action Input: NVDA\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the current sentiment of both Microsoft and Nvidia.\n",
"Final Answer: I cannot compare the sentiment of Microsoft and Nvidia as I only have information about Microsoft.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'I cannot compare the sentiment of Microsoft and Nvidia as I only have information about Microsoft.'"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(\n",
" \"How does Microsoft feels today comparing with Nvidia?\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "366b57dc-5292-4011-9b99-7b7c86237def",
"metadata": {},
"source": [
"# How YahooFinanceNewsTool works?"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "7879b79c-b5c7-4a5d-8338-edda53ff41a6",
"metadata": {},
"outputs": [],
"source": [
"tool = YahooFinanceNewsTool()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "ac989456-33bc-4478-874e-98b9cb24d113",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'No news found for company that searched with NVDA ticker.'"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.run(\"NVDA\")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "46c697aa-102e-48d4-9834-081671aad40a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Top Research Reports for Apple, Broadcom & Caterpillar\n",
"Today's Research Daily features new research reports on 16 major stocks, including Apple Inc. (AAPL), Broadcom Inc. (AVGO) and Caterpillar Inc. (CAT).\n",
"\n",
"Apple Stock on Pace for Worst Month of the Year\n",
"Apple (AAPL) shares are on pace for their worst month of the year, according to Dow Jones Market Data. The stock is down 4.8% so far in August, putting it on pace for its worst month since December 2022, when it fell 12%.\n"
]
}
],
"source": [
"res = tool.run(\"AAPL\")\n",
"print(res)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -5,9 +5,13 @@
"id": "acb64858",
"metadata": {},
"source": [
"# YouTube (youtube_search)\n",
"# YouTube\n",
"\n",
"This notebook shows how to use a tool to search `YouTube` using `youtube_search` package.\n",
[YouTube Search](">
">The [YouTube Search](https://github.com/joetats/youtube_search) package searches `YouTube` videos while avoiding their heavily rate-limited API.\n",
">\n",
">It uses the form on the `YouTube` homepage and scrapes the resulting page.\n",
"\n",
"This notebook shows how to use a tool to search YouTube.\n",
"\n",
"Adapted from [https://github.com/venuv/langchain_yt_tools](https://github.com/venuv/langchain_yt_tools)"
]

@ -167,7 +167,7 @@
"Tables necessary to determine the places of the planets are not less\r\n",
"necessary than those for the sun, moon, and stars. Some notion of the\r\n",
"number and complexity of these tables may be formed, when we state that\r\n",
"the positions of the two principal planets, (and these the most\r\n",
"the positions of the two principal planets, (and these are the most\r\n",
"necessary for the navigator,) Jupiter and Saturn, require each not less\r\n",
"than one hundred and sixteen tables. Yet it is not only necessary to\r\n",
"predict the position of these bodies, but it is likewise expedient to -> 0.8998482592744614 \n",

@ -7,11 +7,13 @@
"source": [
"# OpenAI Multi Functions Agent\n",
"\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model.\n",
"\n",
"Install openai,google-search-results packages which are required as the langchain packages call them internally\n",
"Install `openai`, `google-search-results` packages which are required as the LangChain packages call them internally.\n",
"\n",
">pip install openai google-search-results\n"
"```bash\n",
"pip install openai google-search-results\n",
"```\n"
]
},
{
@ -32,10 +34,10 @@
"id": "86198d9c",
"metadata": {},
"source": [
"The agent is given ability to perform search functionalities with the respective tool\n",
"The agent is given the ability to perform search functionalities with the respective tool\n",
"\n",
"SerpAPIWrapper:\n",
">This initializes the SerpAPIWrapper for search functionality (search).\n"
"`SerpAPIWrapper`:\n",
">This initializes the `SerpAPIWrapper` for search functionality (search).\n"
]
},
{
@ -228,7 +230,7 @@
"source": [
"## Configuring max iteration behavior\n",
"\n",
"To make sure that our agent doesn't get stuck in excessively long loops, we can set max_iterations. We can also set an early stopping method, which will determine our agent's behavior once the number of max iterations is hit. By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
"To make sure that our agent doesn't get stuck in excessively long loops, we can set `max_iterations`. We can also set an early stopping method, which will determine our agent's behavior once the number of max iterations is hit. By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
]
},
{
@ -428,7 +430,7 @@
"id": "067a8d3e",
"metadata": {},
"source": [
"Notice that we never get around to looking up the weather the day before yesterday, due to hitting our max_iterations limit."
"Notice that we never get around to looking up the weather the day before yesterday, due to hitting our `max_iterations` limit."
]
},
{
