Minor edits to QA docs (#7507)

Small clean-ups
Lance Martin 2023-07-10 22:15:05 -07:00 committed by GitHub
parent 5171c3bcca
commit 4a94f56258


@@ -65,7 +65,7 @@ index.query(question)
-Of course, some users do not wnat this level of abstraction.
+Of course, some users do not want this level of abstraction.
Below, we will discuss each stage in more detail.
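A minimal sketch of the high-level abstraction this hunk refers to (the loader, URL, and question are illustrative; assumes an `OPENAI_API_KEY` and the default vectorstore dependencies are available):

```python
# VectorstoreIndexCreator wraps loading, splitting, embedding, storage, and
# retrieval behind a single object; `index.query()` runs the full QA pipeline.
from langchain.document_loaders import WebBaseLoader
from langchain.indexes import VectorstoreIndexCreator

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("What is task decomposition?")
```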
@@ -113,13 +113,13 @@ Here are the three pieces together:
#### 1.2.1 Integrations
-`Data Loaders`
+`Document Loaders`
* Browse the > 120 data loader integrations [here](https://integrations.langchain.com/).
* See further documentation on loaders [here](https://python.langchain.com/docs/modules/data_connection/document_loaders/).
-`Data Transformers`
+`Document Transformers`
* All can ingest loaded `Documents` and process them (e.g., split).
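A minimal sketch of the load-then-split flow these integrations cover (the URL and chunk sizes are illustrative):

```python
# Load a web page into `Document` objects, then split them into chunks for indexing.
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
splits = splitter.split_documents(docs)
```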
@@ -133,7 +133,7 @@ Here are the three pieces together:
#### 1.2.2 Retaining metadata
-`Context-aware splitters` keep the location or "context" of each split in the origional `Document`:
+`Context-aware splitters` keep the location ("context") of each split in the original `Document`:
* [Markdown files](https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA)
* [Code (py or js)](https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/source_code)
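For example, a context-aware Markdown splitter keeps each chunk's header hierarchy in its metadata (a sketch; the sample text and header labels are illustrative):

```python
# Split on Markdown headers; each resulting Document records which headers it sits under.
from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Guide\n\n## Setup\n\nInstall the package.\n\n## Usage\n\nCall the API."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]

splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
splits = splitter.split_text(md)
# e.g., splits[0].metadata == {"Header 1": "Guide", "Header 2": "Setup"}
```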
@@ -171,7 +171,7 @@ For example, SVMs (see thread [here](https://twitter.com/karpathy/status/1647025
LangChain [has many retrievers](https://python.langchain.com/docs/modules/data_connection/retrievers/) including, but not limited to, vectorstores.
-All retrievers implement some common, useful methods, such as `get_relevant_documents()`.
+All retrievers implement some common methods, such as `get_relevant_documents()`.
```python
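# Illustrative sketch of the shared retriever interface: any retriever exposes
# `get_relevant_documents()`; here a Chroma vectorstore retriever is used.
# (Reuses `splits` from the splitting sketch above; assumes `chromadb` is
# installed and an OPENAI_API_KEY is set.)
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
relevant_docs = retriever.get_relevant_documents("What is task decomposition?")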
@@ -222,7 +222,7 @@ len(unique_docs)
### 3.1 Getting started
-Distill the retried documents into an answer using an LLM (e.g., `gpt-3.5-turbo`) with `RetrievalQA` chain.
+Distill the retrieved documents into an answer using an LLM (e.g., `gpt-3.5-turbo`) with the `RetrievalQA` chain.
```python
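# Illustrative sketch of answer distillation with RetrievalQA.
# (Reuses `retriever` from the sketch above; assumes an OPENAI_API_KEY for gpt-3.5-turbo.)
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)
result = qa_chain({"query": "What is task decomposition?"})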
@@ -247,9 +247,9 @@ qa_chain({"query": question})
`LLMs`
-* Browse the > 55 model integrations [here](https://integrations.langchain.com/).
+* Browse the > 55 LLM integrations [here](https://integrations.langchain.com/).
-* See further documentation on vectorstores [here](https://python.langchain.com/docs/modules/model_io/models/).
+* See further documentation on LLMs [here](https://python.langchain.com/docs/modules/model_io/models/).
#### 3.2.2 Running LLMs locally
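A sketch of what swapping in a locally running LLM can look like (the GPT4All model path is a placeholder; requires the `gpt4all` package and reuses `retriever` from the earlier sketch):

```python
# Replace the hosted LLM with a local one and reuse the same chain construction.
from langchain.chains import RetrievalQA
from langchain.llms import GPT4All

local_llm = GPT4All(model="/path/to/ggml-gpt4all-model.bin")
local_qa_chain = RetrievalQA.from_chain_type(local_llm, retriever=retriever)
```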
@@ -355,7 +355,7 @@ result
-#### 3.2.5 Customizing how pass retrieved documents to the LLM
+#### 3.2.5 Customizing retrieved docs in the LLM prompt
Retrieved documents can be fed to an LLM for answer distillation in a few different ways.
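For instance, the `chain_type` argument selects how the retrieved documents are packed into the prompt (a sketch; reuses `llm` and `retriever` from the sketches above):

```python
# "stuff" concatenates all retrieved documents into one prompt;
# "map_reduce" answers per document, then combines the partial answers.
from langchain.chains import RetrievalQA

qa_map_reduce = RetrievalQA.from_chain_type(
    llm, retriever=retriever, chain_type="map_reduce"
)
result = qa_map_reduce({"query": "What is task decomposition?"})
```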