docs `ollama` pages (#14561)

added provider page; fixed broken links.
pull/13850/head
Leonid Ganeline 7 months ago committed by GitHub
parent a4992ffada
commit 1bf84c3056

@@ -69,7 +69,7 @@
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
" \n",
"The instructions [here](docs/integrations/llms/ollama) provide details, which we summarize:\n",
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n",
@@ -197,10 +197,10 @@
"\n",
"### Ollama\n",
"\n",
"With [Ollama](docs/integrations/llms/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama), e.g., `ollama pull llama2:13b`\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html)"
]
},
@@ -608,7 +608,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.12"
}
},
"nbformat": 4,

@@ -533,7 +533,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,

@@ -440,7 +440,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,

@@ -0,0 +1,55 @@
# Ollama
>[Ollama](https://ollama.ai/) lets you run open-source large language models,
> such as LLaMA2, locally.
>
>`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).

See [this guide](https://python.langchain.com/docs/guides/local_llms#quickstart) for more details
on how to use `Ollama` with LangChain.
## Installation and Setup
Follow [these instructions](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama)
to set up and run a local Ollama instance.

`Ollama` runs locally, so no API keys are required. By default, the integrations
connect to `http://localhost:11434`; set the `base_url` parameter to point at a
different Ollama instance.
## LLM
```python
from langchain.llms import Ollama
```
See the notebook example [here](/docs/integrations/llms/ollama).
## Chat Models
### Chat Ollama
```python
from langchain.chat_models import ChatOllama
```
See the notebook example [here](/docs/integrations/chat/ollama).
### Ollama functions
```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions
```
See the notebook example [here](/docs/integrations/chat/ollama_functions).
## Embedding models
```python
from langchain.embeddings import OllamaEmbeddings
```
See the notebook example [here](/docs/integrations/text_embedding/ollama).

@@ -215,7 +215,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
