
# Banana

This page covers how to use the Banana ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Banana wrappers.

## Installation and Setup

- Install with `pip install banana-dev`
- Get a Banana API key and set it as an environment variable (`BANANA_API_KEY`); see the sketch below
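
As a minimal sketch, the environment variable can be set from Python before the wrapper is used (the key value is a placeholder):

```python
import os

# Placeholder value; use the real API key from your Banana dashboard.
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
```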

## Define your Banana Template

If you want to use an available language model template, you can find one here. This template uses the Palmyra-Base model by Writer. You can check out an example Banana repository here.

## Build the Banana app

Banana apps must include an `output` key in the returned JSON; the response structure is rigid.

```python
# Return the results as a dictionary
result = {'output': result}
```

An example inference function would be:

```python
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]],
    )

    result = tokenizer.decode(output[0], skip_special_tokens=True)
    # Return the results as a dictionary
    result = {'output': result}
    return result
```
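
The handler above relies on global `model` and `tokenizer` objects. As a hedged sketch (the model name and loading code here are assumptions modeled on typical Banana templates, not taken from a specific repository), these would be loaded once in an `init()` function:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def init():
    # Load the model and tokenizer once at startup so that
    # inference() can reuse them across requests.
    global model
    global tokenizer
    # "Writer/palmyra-base" is an assumed model name for illustration.
    model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-base").cuda()
    tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-base")
```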

You can find a full example of a Banana app here.

## Wrappers

### LLM

There exists a Banana LLM wrapper, which you can access with

```python
from langchain.llms import Banana
```

You need to provide the model key found in your Banana dashboard:

```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```
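
Once constructed, the wrapper behaves like any other LangChain LLM. A minimal usage sketch (the model key and prompt are placeholders):

```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import Banana

llm = Banana(model_key="YOUR_MODEL_KEY")

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run("colorful socks"))
```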