Mirror of https://github.com/hwchase17/langchain
Synced 2024-10-29 17:07:25 +00:00
Commit: 491089754d
This PR follows the **Eden AI (LLM + embeddings) integration** (#8633). We added an optional parameter to choose different AI models for providers (e.g. 'text-bison' for provider 'google', 'text-davinci-003' for provider 'openai').

Usage:

```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "model": "text-bison",  # new
        "temperature": 0.2,
        "max_tokens": 250,
    },
)
```

You can also change the provider and model after initialization:

```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "temperature": 0.2,
        "max_tokens": 250,
    },
)

prompt = """
hi
"""

llm(prompt, providers='openai', model='text-davinci-003')  # change provider & model
```

The Jupyter notebook has been updated with an example as well.

Ping: @hwchase17, @baskaryan

---------

Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
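The override semantics described above (init-time `params` plus call-time keyword arguments that take precedence) can be sketched as plain dict merging. This is a hypothetical illustration of the behavior, not the library's actual implementation; `merge_params` is an invented helper name:

```python
# Hypothetical sketch (not LangChain's actual code) of how a per-call
# override like model='text-davinci-003' could conceptually merge with
# the params given at initialization, assuming simple dict semantics.

def merge_params(init_params: dict, **call_overrides) -> dict:
    """Return init-time params with call-time overrides applied on top."""
    merged = dict(init_params)     # copy so the init-time params stay untouched
    merged.update(call_overrides)  # call-time values win on key collisions
    return merged

init_params = {"model": "text-bison", "temperature": 0.2, "max_tokens": 250}
merged = merge_params(init_params, model="text-davinci-003")
print(merged["model"])        # the call-time model replaces the init-time one
print(merged["temperature"])  # untouched keys keep their init-time values
```

Under this reading, omitting a call-time keyword simply falls back to whatever was set at construction, which matches the second usage example above.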
Files in this directory:

- aleph_alpha.ipynb
- Awa.ipynb
- azureopenai.ipynb
- bedrock.ipynb
- bge_huggingface.ipynb
- clarifai.ipynb
- cohere.ipynb
- dashscope.ipynb
- deepinfra.ipynb
- edenai.ipynb
- elasticsearch.ipynb
- embaas.ipynb
- ernie.ipynb
- fake.ipynb
- google_vertex_ai_palm.ipynb
- gpt4all.ipynb
- huggingfacehub.ipynb
- index.mdx
- instruct_embeddings.ipynb
- jina.ipynb
- llamacpp.ipynb
- localai.ipynb
- minimax.ipynb
- modelscope_hub.ipynb
- mosaicml.ipynb
- nlp_cloud.ipynb
- openai.ipynb
- sagemaker-endpoint.ipynb
- self-hosted.ipynb
- sentence_transformers.ipynb
- spacy_embedding.ipynb
- tensorflowhub.ipynb
- xinference.ipynb