Batch update of alt text and title attributes for images in md/mdx files across repo (#15357)

**Description:** Batch update of alt text and title attributes for
images in `md` & `mdx` files across the repo using
[alttexter](https://github.com/jonathanalgar/alttexter)/[alttexter-ghclient](https://github.com/jonathanalgar/alttexter-ghclient)
(built using LangChain/LangSmith).

**Limitation:** cannot update `ipynb` files because of [this
issue](https://github.com/langchain-ai/langchain/pull/15357#issuecomment-1885037250).
Can revisit when Docusaurus is bumped to v3.

I checked all the generated alt texts and titles and didn't find any
technical inaccuracies. That's not to say they're _perfect_, but they're a
lot better than what's there currently.
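
For reference, the change applied throughout is plain Markdown image syntax: the bare or filename-style alt text is replaced with a descriptive alt text, and a title attribute is added. A representative before/after, taken from the README diff in this PR:

```markdown
<!-- Before -->
![LangChain Stack](docs/static/img/langchain_stack.png)

<!-- After: descriptive alt text plus a title attribute -->
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")
```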


[Deployed](https://langchain-819yf1tbk-langchain.vercel.app/docs/modules/model_io/)
image example:


![chrome_yZQ7BF2GTj](https://github.com/langchain-ai/langchain/assets/93204286/43a9a4d4-70fd-41c4-8978-b6240ff63ffa)

You can see LangSmith traces for all the calls out to the LLM in the PRs
merged into this one:

* https://github.com/jonathanalgar/langchain/pull/6
* https://github.com/jonathanalgar/langchain/pull/4
* https://github.com/jonathanalgar/langchain/pull/3

I didn't add the following files to the PR as the images already have OK
alt texts:

* 27dca2d92f/docs/docs/integrations/providers/argilla.mdx (L3)
* 27dca2d92f/docs/docs/integrations/providers/apify.mdx (L11)

---------

Co-authored-by: github-actions <github-actions@github.com>

@@ -49,7 +49,7 @@ The LangChain libraries themselves are made up of several different packages.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
![LangChain Stack](docs/static/img/langchain_stack.png)
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**

@@ -14,7 +14,7 @@ This framework consists of several parts.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
![LangChain Diagram](/svg/langchain_stack.svg)
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/svg/langchain_stack.svg "LangChain Framework Overview")
Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.

@@ -12,7 +12,7 @@ Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [Wand
For anyone building production-grade LLM applications, we highly recommend using a platform like this.
![LangSmith run](../../static/img/run_details.png)
![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../static/img/run_details.png "LangSmith Debugging Interface")
## `set_debug` and `set_verbose`

@@ -6,7 +6,7 @@ This page covers how to use the [Remembrall](https://remembrall.dev) ecosystem w
Remembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code.
![Remembrall Dashboard](/img/RemembrallDashboard.png)
![Screenshot of the Remembrall dashboard showing request statistics and model interactions.](/img/RemembrallDashboard.png "Remembrall Dashboard Interface")
It works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected.

@@ -150,4 +150,4 @@ This command will initiate the execution of the `langchain_llm` task on the Flyt
The metrics will be displayed on the Flyte UI as follows:
![LangChain LLM](https://ik.imagekit.io/c8zl7irwkdda/Screenshot_2023-06-20_at_1.23.29_PM_MZYeG0dKa.png?updatedAt=1687247642993)
![Screenshot of Flyte Deck showing LangChain metrics and a dependency tree visualization.](https://ik.imagekit.io/c8zl7irwkdda/Screenshot_2023-06-20_at_1.23.29_PM_MZYeG0dKa.png?updatedAt=1687247642993 "Flyte Deck Metrics Display")

@@ -6,7 +6,7 @@ This page covers how to use the [Helicone](https://helicone.ai) ecosystem within
Helicone is an [open-source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.
![Helicone](/img/HeliconeDashboard.png)
![Screenshot of the Helicone dashboard showing average requests per day, response time, tokens per response, total cost, and a graph of requests over time.](/img/HeliconeDashboard.png "Helicone Dashboard")
## Quick start
@@ -18,7 +18,7 @@ export OPENAI_API_BASE="https://oai.hconeai.com/v1"
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within our dashboard to view your logs.
![Helicone](/img/HeliconeKeys.png)
![Interface for entering and managing OpenAI API keys in the Helicone dashboard.](/img/HeliconeKeys.png "Helicone API Key Input")
## How to enable Helicone caching

@@ -6,7 +6,7 @@ This page covers how to use [Metal](https://getmetal.io) within LangChain.
Metal is a managed retrieval & memory platform built for production. Easily index your data into `Metal` and run semantic search and retrieval on it.
![Metal](/img/MetalDash.png)
![Screenshot of the Metal dashboard showing the Browse Index feature with sample data.](/img/MetalDash.png "Metal Dashboard Interface")
## Quick start

@@ -14,7 +14,7 @@ This section of the documentation covers everything related to the *retrieval* s
Although this sounds simple, it can be subtly complex.
This encompasses several key modules.
![data_connection_diagram](/img/data_connection.jpg)
![Illustrative diagram showing the data connection process with steps: Source, Load, Transform, Embed, Store, and Retrieve.](/img/data_connection.jpg "Data Connection Process Diagram")
**[Document loaders](/docs/modules/data_connection/document_loaders/)**

@@ -12,7 +12,7 @@ vectors, and then at query time to embed the unstructured query and retrieve the
'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
for you.
![vector store diagram](/img/vector_stores.jpg)
![Diagram illustrating the process of vector stores: 1. Load source data, 2. Query vector store, 3. Retrieve 'most similar' results.](/img/vector_stores.jpg "Vector Store Process Diagram")
## Get started

@@ -36,7 +36,7 @@ A chain will interact with its memory system twice in a given run.
1. AFTER receiving the initial user inputs but BEFORE executing the core logic, a chain will READ from its memory system and augment the user inputs.
2. AFTER executing the core logic but BEFORE returning the answer, a chain will WRITE the inputs and outputs of the current run to memory, so that they can be referred to in future runs.
![memory-diagram](/img/memory_diagram.png)
![Diagram illustrating the READ and WRITE operations of a memory system in a conversational interface.](/img/memory_diagram.png "Memory System Diagram")
## Building memory into a system

@@ -9,7 +9,7 @@ sidebar_class_name: hidden
The core element of any language model application is...the model. LangChain gives you the building blocks to interface with any language model.
![model_io_diagram](/img/model_io.jpg)
![Flowchart illustrating the Model I/O process with steps Format, Predict, and Parse, showing the transformation from input variables to structured output.](/img/model_io.jpg "Model Input/Output Process Diagram")
## [Conceptual Guide](/docs/modules/model_io/concepts)

@@ -15,7 +15,7 @@ LangChain Community contains third-party integrations that implement the base in
For full documentation see the [API reference](https://api.python.langchain.com/en/stable/community_api_reference.html).
![LangChain Stack](../../docs/static/img/langchain_stack.png)
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png "LangChain Framework Overview")
## 📕 Releases & Versioning

@@ -32,7 +32,7 @@ Rather than having to write multiple implementations for all of those, LCEL allo
For more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/).
![LangChain Stack](../../docs/static/img/langchain_stack.png)
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png "LangChain Framework Overview")
## 📕 Releases & Versioning

@@ -102,11 +102,11 @@ langchain serve
This now gives a fully deployed LangServe application.
For example, you get a playground out-of-the-box at [http://127.0.0.1:8000/pirate-speak/playground/](http://127.0.0.1:8000/pirate-speak/playground/):
![playground.png](docs/playground.png)
![Screenshot of the LangServe Playground interface with input and output fields demonstrating pirate speak conversion.](docs/playground.png "LangServe Playground Interface")
Access API documentation at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
![docs.png](docs/docs.png)
![Screenshot of the API documentation interface showing available endpoints for the pirate-speak application.](docs/docs.png "API Documentation Interface")
Use the LangServe python or js SDK to interact with the API as if it were a regular [Runnable](https://python.langchain.com/docs/expression_language/).

@@ -6,7 +6,7 @@ LangChain template that uses [Anthropic's Claude on Amazon Bedrock](https://aws.
> I am the Fred Astaire of Chatbots! 🕺
'![JCVD](https://media.tenor.com/CVp9l7g3axwAAAAj/jean-claude-van-damme-jcvd.gif)
'![Animated GIF of Jean-Claude Van Damme dancing.](https://media.tenor.com/CVp9l7g3axwAAAAj/jean-claude-van-damme-jcvd.gif "Jean-Claude Van Damme Dancing")
## Environment Setup
@@ -78,4 +78,4 @@ We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/d
We can also access the playground at [http://127.0.0.1:8000/bedrock-jcvd/playground](http://127.0.0.1:8000/bedrock-jcvd/playground)
![JCVD Playground](jcvd_langserve.png)
![Screenshot of the LangServe Playground interface with an example input and output demonstrating a Jean-Claude Van Damme voice imitation.](jcvd_langserve.png "JCVD Playground")

@@ -9,11 +9,11 @@ Taking [Chat Langchain](https://chat.langchain.com/) as a case study, only about
This template helps solve this "feedback scarcity" problem. Below is an example invocation of this chat bot:
[![Chat Interaction](./static/chat_interaction.png)](https://smith.langchain.com/public/3378daea-133c-4fe8-b4da-0a3044c5dbe8/r?runtab=1)
[![Screenshot of a chat bot interaction where the AI responds in a pirate accent to a user asking where their keys are.](./static/chat_interaction.png "Chat Bot Interaction Example")](https://smith.langchain.com/public/3378daea-133c-4fe8-b4da-0a3044c5dbe8/r?runtab=1)
When the user responds to this ([link](https://smith.langchain.com/public/a7e2df54-4194-455d-9978-cecd8be0df1e/r)), the response evaluator is invoked, resulting in the following evaluation run:
[![Evaluator Run](./static/evaluator.png)](https://smith.langchain.com/public/534184ee-db8f-4831-a386-3f578145114c/r)
[![Screenshot of an evaluator run showing the AI's response effectiveness score based on the user's follow-up message expressing frustration.](./static/evaluator.png "Chat Bot Evaluator Run")](https://smith.langchain.com/public/534184ee-db8f-4831-a386-3f578145114c/r)
As shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective

@@ -38,4 +38,4 @@ langchain template serve
This will spin up endpoints, documentation, and playground for this chain.
For example, you can access the playground at [http://127.0.0.1:8000/playground/](http://127.0.0.1:8000/playground/)
![playground.png](playground.png)
![Screenshot of the LangServe Playground web interface with input and output fields.](playground.png "LangServe Playground Interface")

@@ -99,15 +99,15 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
This can be done by going to the deployment overview page and connecting to your database
![connect.png](_images/connect.png)
![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
We then look at the drivers available
![driver.png](_images/driver.png)
![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png "MongoDB Atlas Drivers Section")
Among which we will see our URI listed
![uri.png](_images/uri.png)
![Screenshot displaying the MongoDB Atlas URI in the connection instructions.](_images/uri.png "MongoDB Atlas URI Display")
Let's then set that as an environment variable locally:

@@ -9,7 +9,7 @@ The package utilizes a full-text index for efficient mapping of text values to d
In the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.
![Workflow diagram](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-ft/static/workflow.png)
![Workflow diagram showing the process from a user asking a question to generating an answer using the Neo4j knowledge graph and full-text index.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-ft/static/workflow.png "Neo4j Cypher Workflow Diagram")
## Environment Setup

@@ -7,7 +7,7 @@ Additionally, it features a conversational memory module that stores the dialogu
The conversation memory is uniquely maintained for each user session, ensuring personalized interactions.
To facilitate this, please supply both the `user_id` and `session_id` when using the conversation chain.
![Workflow diagram](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-memory/static/workflow.png)
![Workflow diagram illustrating the process of a user asking a question, generating a Cypher query, retrieving conversational history, executing the query on a Neo4j database, generating an answer, and storing conversational memory.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-memory/static/workflow.png "Neo4j Cypher Memory Workflow Diagram")
## Environment Setup

@@ -5,7 +5,7 @@ This template allows you to interact with a Neo4j graph database in natural lang
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
[![Workflow diagram](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher/static/workflow.png)](https://medium.com/neo4j/langchain-cypher-search-tips-tricks-f7c9e9abca4d)
[![Diagram showing the workflow of a user asking a question, which is processed by a Cypher generating chain, resulting in a Cypher query to the Neo4j Knowledge Graph, and then an answer generating chain that provides a generated answer based on the information from the graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher/static/workflow.png "Neo4j Cypher Workflow Diagram")](https://medium.com/neo4j/langchain-cypher-search-tips-tricks-f7c9e9abca4d)
## Environment Setup

@@ -3,7 +3,7 @@
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.
![Workflow diagram](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-layer/static/workflow.png)
![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-layer/static/workflow.png "Neo4j Semantic Layer Workflow Diagram")
## Tools

@@ -4,7 +4,7 @@ Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.
![Gmail Agent Playground](./static/gmail-agent-playground.gif)
![Animated GIF showing the interface of the Gmail Agent Playground with a cursor interacting with the input field.](./static/gmail-agent-playground.gif "Gmail Agent Playground Interface")
## The details

@@ -2,7 +2,7 @@
This template demonstrates the multi-vector indexing strategy proposed in Chen et al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), directs an LLM to generate de-contextualized "propositions" which can be vectorized to increase the retrieval accuracy. You can see the full definition in `proposal_chain.py`.
![Retriever Diagram](https://github.com/langchain-ai/langchain/raw/master/templates/propositional-retrieval/_images/retriever_diagram.png)
![Diagram illustrating the multi-vector indexing strategy for information retrieval, showing the process from Wikipedia data through a Proposition-izer to FactoidWiki, and the retrieval of information units for a QA model.](https://github.com/langchain-ai/langchain/raw/master/templates/propositional-retrieval/_images/retriever_diagram.png "Retriever Diagram")
## Storage

@@ -9,7 +9,7 @@ It uses GPT-4V to create image summaries for each slide, embeds the summaries, a
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![mm-captioning](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503)
![Diagram illustrating the multi-modal LLM process with a slide deck, captioning, storage, question input, and answer synthesis with year-over-year growth percentages.](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503 "Multi-modal LLM Process Diagram")
## Input

@@ -9,7 +9,7 @@ It uses OpenCLIP embeddings to embed all of the slide images and stores them in
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![mm-mmembd](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200)
![Diagram illustrating the workflow of a multi-modal LLM visual assistant using OpenCLIP embeddings and GPT-4V for question-answering based on slide deck images.](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200 "Workflow Diagram for Multi-modal LLM Visual Assistant")
## Input

@@ -9,7 +9,7 @@ It uses OpenCLIP embeddings to embed all of the slide images and stores them in
Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
![mm-mmembd](https://github.com/langchain-ai/langchain/assets/122662504/b9e69bef-d687-4ecf-a599-937e559d5184)
![Diagram illustrating the process of a visual assistant using multi-modal LLM, from slide deck images to OpenCLIP embedding, retrieval, and synthesis with Google Gemini, resulting in an answer.](https://github.com/langchain-ai/langchain/assets/122662504/b9e69bef-d687-4ecf-a599-937e559d5184 "Workflow Diagram for Visual Assistant Using Multi-modal LLM")
## Input

@@ -97,15 +97,15 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
This can be done by going to the deployment overview page and connecting to your database
![connect.png](_images/connect.png)
![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
We then look at the drivers available
![driver.png](_images/driver.png)
![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png "MongoDB Atlas Drivers Section")
Among which we will see our URI listed
![uri.png](_images/uri.png)
![Screenshot displaying an example of a MongoDB URI in the connection instructions.](_images/uri.png "MongoDB URI Example")
Let's then set that as an environment variable locally:
@@ -131,23 +131,23 @@ Note that you can (and should!) change this to ingest data of your choice
We can first connect to the cluster where our database lives
![cluster.png](_images%2Fcluster.png)
![Screenshot of the MongoDB Atlas interface showing the cluster overview with a 'Connect' button.](_images/cluster.png "MongoDB Atlas Cluster Overview")
We can then navigate to where all our collections are listed
![collections.png](_images%2Fcollections.png)
![Screenshot of the MongoDB Atlas interface showing the collections overview within a database.](_images/collections.png "MongoDB Atlas Collections Overview")
We can then find the collection we want and look at the search indexes for that collection
![search-indexes.png](_images%2Fsearch-indexes.png)
![Screenshot showing the search indexes section in MongoDB Atlas for a specific collection.](_images/search-indexes.png "MongoDB Atlas Search Indexes")
That should likely be empty, and we want to create a new one:
![create.png](_images%2Fcreate.png)
![Screenshot highlighting the 'Create Index' button in MongoDB Atlas.](_images/create.png "MongoDB Atlas Create Index Button")
We will use the JSON editor to create it
![json_editor.png](_images%2Fjson_editor.png)
![Screenshot showing the JSON Editor option for creating a search index in MongoDB Atlas.](_images/json_editor.png "MongoDB Atlas JSON Editor Option")
And we will paste the following JSON in:
@@ -165,7 +165,6 @@ And we will paste the following JSON in:
}
}
```
![json.png](_images%2Fjson.png)
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data!
![Screenshot of the JSON configuration for a search index in MongoDB Atlas.](_images/json.png "MongoDB Atlas Search Index JSON Configuration")
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data!

@@ -11,7 +11,7 @@ It uses OpenCLIP embeddings to embed all of the photos and stores them in Chroma
Given a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.
![mm-local](https://github.com/langchain-ai/langchain/assets/122662504/da543b21-052c-4c43-939e-d4f882a45d75)
![Diagram illustrating the visual search process with OpenCLIP embeddings and multi-modal LLM for question-answering, featuring example food pictures and a matcha soft serve answer trace.](https://github.com/langchain-ai/langchain/assets/122662504/da543b21-052c-4c43-939e-d4f882a45d75 "Visual Search Process Diagram")
## Input

@@ -11,7 +11,7 @@ It uses an open source multi-modal LLM of your choice to create image summaries
Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
![mm-caption-local](https://github.com/langchain-ai/langchain/assets/122662504/cd9b3d82-9b06-4a39-8490-7482466baf43)
![Diagram illustrating the visual search process with food pictures, captioning, a database, a question input, and the synthesis of an answer using a multi-modal LLM.](https://github.com/langchain-ai/langchain/assets/122662504/cd9b3d82-9b06-4a39-8490-7482466baf43 "Visual Search Process Diagram")
## Input
