mirror of
https://github.com/hwchase17/langchain
synced 2024-11-08 07:10:35 +00:00
852722ea45
- Description: Added improvements in Nebula LLM to perform auto-retry; more generation parameters supported. Conversation is no longer required to be passed in the LLM object. Examples are updated.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @baskaryan
- Twitter handle: symbldotai

Co-authored-by: toshishjawale <toshish@symbl.ai>
19 lines
717 B
Plaintext
# Nebula
This page covers how to use [Nebula](https://symbl.ai/nebula), [Symbl.ai](https://symbl.ai/)'s LLM, within the LangChain ecosystem.
It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.
## Installation and Setup
- Get a [Nebula API key](https://info.symbl.ai/Nebula_Private_Beta.html) and set it as the environment variable `NEBULA_API_KEY`.
- Please see the [Nebula documentation](https://docs.symbl.ai/docs/nebula-llm) for more details.
- No time? Visit the [Nebula Quickstart Guide](https://docs.symbl.ai/docs/nebula-quickstart).
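
If you prefer not to export the variable in your shell, a minimal sketch of setting it from Python before constructing the wrapper (the key value below is a placeholder, not a real credential):

```python
import os

# Set the Symbl.ai Nebula API key for the current process.
# Replace the placeholder string with your actual key.
os.environ["NEBULA_API_KEY"] = "your-nebula-api-key"
```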
### LLM
There exists a Nebula LLM wrapper, which you can access with:
```python
from langchain.llms import Nebula

# Reads the API key from the NEBULA_API_KEY environment variable by default
llm = Nebula()
```