diff --git a/templates/mongo-parent-document-retrieval/README.md b/templates/mongo-parent-document-retrieval/README.md
index 5ba805f624..3d9adaf16e 100644
--- a/templates/mongo-parent-document-retrieval/README.md
+++ b/templates/mongo-parent-document-retrieval/README.md
@@ -97,7 +97,7 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
 2. Create a new project (if not already done)
 3. Locate your MongoDB URI.
 
-This can be done by going to the deployement overview page and connecting to you database
+This can be done by going to the deployment overview page and connecting to your database.
 
 ![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
 
diff --git a/templates/rag-chroma-multi-modal-multi-vector/README.md b/templates/rag-chroma-multi-modal-multi-vector/README.md
index 9f3b6acd9f..36a5ae0c28 100644
--- a/templates/rag-chroma-multi-modal-multi-vector/README.md
+++ b/templates/rag-chroma-multi-modal-multi-vector/README.md
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to GPT-4V for answer synthesis.
+Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
 
 ![Diagram illustrating the multi-modal LLM process with a slide deck, captioning, storage, question input, and answer synthesis with year-over-year growth percentages.](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503 "Multi-modal LLM Process Diagram")
 
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to GPT-4V for answer s
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
diff --git a/templates/rag-chroma-multi-modal/README.md b/templates/rag-chroma-multi-modal/README.md
index 12e11a8cd0..8373bd6439 100644
--- a/templates/rag-chroma-multi-modal/README.md
+++ b/templates/rag-chroma-multi-modal/README.md
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to GPT-4V for answer synthesis.
+Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
 
 ![Diagram illustrating the workflow of a multi-modal LLM visual assistant using OpenCLIP embeddings and GPT-4V for question-answering based on slide deck images.](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200 "Workflow Diagram for Multi-modal LLM Visual Assistant")
 
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to GPT-4V for answer s
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
@@ -37,7 +37,7 @@ You can select different embedding model options (see results [here](https://git
 
 The first time you run the app, it will automatically download the multimodal embedding model.
 
-By default, LangChain will use an embedding model with moderate performance but lower memory requirments, `ViT-H-14`.
+By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
 
 You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
 ```
diff --git a/templates/rag-gemini-multi-modal/README.md b/templates/rag-gemini-multi-modal/README.md
index 6437e08d97..fb9b0bc8bd 100644
--- a/templates/rag-gemini-multi-modal/README.md
+++ b/templates/rag-gemini-multi-modal/README.md
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
+Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
 
 ![Diagram illustrating the process of a visual assistant using multi-modal LLM, from slide deck images to OpenCLIP embedding, retrieval, and synthesis with Google Gemini, resulting in an answer.](https://github.com/langchain-ai/langchain/assets/122662504/b9e69bef-d687-4ecf-a599-937e559d5184 "Workflow Diagram for Visual Assistant Using Multi-modal LLM")
 
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to [Google Gemini](htt
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
@@ -37,7 +37,7 @@ You can select different embedding model options (see results [here](https://git
 
 The first time you run the app, it will automatically download the multimodal embedding model.
 
-By default, LangChain will use an embedding model with moderate performance but lower memory requirments, `ViT-H-14`.
+By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
 
 You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
 ```
diff --git a/templates/rag-mongo/README.md b/templates/rag-mongo/README.md
index 4b7bf598c6..5642440735 100644
--- a/templates/rag-mongo/README.md
+++ b/templates/rag-mongo/README.md
@@ -95,7 +95,7 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
 2. Create a new project (if not already done)
 3. Locate your MongoDB URI.
 
-This can be done by going to the deployement overview page and connecting to you database
+This can be done by going to the deployment overview page and connecting to your database.
 
 ![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png "MongoDB Atlas Connect Button")
 
diff --git a/templates/rag-multi-modal-mv-local/README.md b/templates/rag-multi-modal-mv-local/README.md
index 23311ba4e6..cf3f0791c4 100644
--- a/templates/rag-multi-modal-mv-local/README.md
+++ b/templates/rag-multi-modal-mv-local/README.md
@@ -9,7 +9,7 @@ This template demonstrates how to perform private visual search and question-ans
 
 It uses an open source multi-modal LLM of your choice to create image summaries for each photos, embeds the summaries, and stores them in Chroma.
 
-Given a question, relevat photos are retrieved and passed to the multi-modal LLM for answer synthesis.
+Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
 
 ![Diagram illustrating the visual search process with food pictures, captioning, a database, a question input, and the synthesis of an answer using a multi-modal LLM.](https://github.com/langchain-ai/langchain/assets/122662504/cd9b3d82-9b06-4a39-8490-7482466baf43 "Visual Search Process Diagram")
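For the MongoDB-backed templates touched above (`mongo-parent-document-retrieval`, `rag-mongo`), the URI copied from the Atlas "Connect" dialog can be sanity-checked before wiring it into the template. A minimal sketch, assuming the connection string has been exported as `MONGO_URI` (the actual variable name the template code reads may differ):

```python
import os

from pymongo import MongoClient

# Assumed environment variable name for illustration; use whichever variable
# the template's ingest/chain code actually reads for the Atlas URI.
uri = os.environ["MONGO_URI"]

client = MongoClient(uri)
client.admin.command("ping")  # raises if the URI, credentials, or network access rules are wrong
print("Connected to MongoDB Atlas")
```

If the ping fails, a common cause is that your IP address has not been added to the cluster's network access list in Atlas.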