Adds Ollama as an LLM. Ollama runs various open-source models locally, e.g. Llama 2 and Vicuna, and handles model configuration and GPU optimization automatically.

@rlancemartin @hwchase17

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
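A minimal usage sketch of the new integration, assuming a local Ollama server is running on its default port (`http://localhost:11434`) with the `llama2` model pulled:

```python
from langchain.llms import Ollama

# Point the wrapper at the locally running Ollama server.
# model and base_url shown here are assumptions for illustration;
# base_url defaults to the standard local Ollama endpoint.
llm = Ollama(model="llama2", base_url="http://localhost:11434")

# Invoke like any other LangChain LLM; the prompt is sent to the
# local model and the generated text is returned as a string.
print(llm("Why is the sky blue?"))
```

Because the model runs locally, no API key is needed; latency and output quality depend on the model chosen and available hardware.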