# Nebula
This page covers how to use [Nebula](https://symbl.ai/nebula), [Symbl.ai](https://symbl.ai/)'s LLM, within the LangChain ecosystem.
It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.
## Installation and Setup
- Get a [Nebula API Key](https://info.symbl.ai/Nebula_Private_Beta.html) and set it as the environment variable `NEBULA_API_KEY`.
- Please see the [Nebula documentation](https://docs.symbl.ai/docs/nebula-llm) for more details.
- No time? Visit the [Nebula Quickstart Guide](https://docs.symbl.ai/docs/nebula-quickstart).
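After obtaining a key, it can be exported in your shell before starting your application so the wrapper can pick it up (the placeholder value below is illustrative):

```shell
# Make the key available to the Nebula LLM wrapper.
# Replace the placeholder with the key obtained from Symbl.ai.
export NEBULA_API_KEY="<your-nebula-api-key>"
```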
### LLM
There exists a Nebula LLM wrapper, which you can access with:
```python
from langchain.llms import Nebula
llm = Nebula()
```