Running the Cohere embeddings example from the docs:
```python
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```
I get the error:
```
CohereError(message=res['message'], http_status=response.status_code, headers=response.headers)
cohere.error.CohereError: embed is not an available endpoint on this model
```
This is because the `model` string is set to `medium`, which is not
currently available.
From the Cohere docs:
> Currently available models are small and large (default)
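Until the default changes, a workaround is to request an available model
explicitly (a minimal sketch, assuming `CohereEmbeddings` exposes a
`model` parameter):
```python
from langchain.embeddings import CohereEmbeddings

# Request "large" instead of the unavailable "medium" default.
embeddings = CohereEmbeddings(model="large", cohere_api_key=cohere_api_key)
```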
Adds a release workflow that (1) creates a GitHub release and (2)
publishes built artifacts to PyPI.
**Release Workflow**
1. Check out `master` locally and cut a new branch
1. Run `poetry version <rule>` to bump the version (e.g., `poetry version patch`)
1. Commit the changes and push to the remote branch
1. Ensure all quality-check workflows pass
1. Explicitly tag the PR with the `release` label
1. Merge to mainline
At this point, a release workflow should be triggered because:
* The PR is closed, targeting `master`, and merged
* `pyproject.toml` has been detected as modified
* The PR had a `release` label
The workflow will then proceed to build the artifacts, create a GitHub
release with release notes and uploaded artifacts, and publish to PyPI.
Example Workflow run:
https://github.com/shoelsch/langchain/actions/runs/3711037455/jobs/6291076898
Example Releases: https://github.com/shoelsch/langchain/releases
---
Note: this workflow looks for a `PYPI_API_TOKEN` secret, so that will
need to be added to the repository secrets. I tested publishing as far
as hitting a permissions issue due to project ownership on Test PyPI.
I had originally modified only `from_llm` to include the prompt, but I
realized that if the keys used in the custom prompt didn't match the
default prompt's keys, it wouldn't work because of how `apply` works.
So I made some changes to the `evaluate` method: it checks whether the
prompt is the default, and if not, it checks whether the input keys
match the prompt's keys and updates the inputs appropriately.
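A minimal sketch of that remapping idea (the helper name and the
positional pairing here are assumptions for illustration, not the exact
code in this PR):
```python
def _map_inputs(inputs, prompt_keys):
    """Rename the default eval keys to a custom prompt's input
    variables so that `apply` can fill the template correctly."""
    default_keys = ["query", "answer", "result"]
    if list(prompt_keys) == default_keys:
        return inputs  # default prompt: nothing to remap
    # pair each default key with the custom prompt's key positionally
    return {new: inputs[old] for old, new in zip(default_keys, prompt_keys)}
```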
Let me know if there is a better way to do this.
Also added the custom prompt to the QA eval notebook.
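For reference, a hedged sketch of passing a custom prompt through
`from_llm` (the template text is illustrative; its input variables must
match what `evaluate` supplies):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.evaluation.qa import QAEvalChain

template = """You are grading the following answer.
Question: {query}
True answer: {answer}
Student answer: {result}
Grade (CORRECT or INCORRECT):"""
custom_prompt = PromptTemplate(
    input_variables=["query", "answer", "result"], template=template
)
eval_chain = QAEvalChain.from_llm(llm=OpenAI(temperature=0), prompt=custom_prompt)
```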
Add a chain that applies a prompt to all inputs and then returns not
only an answer but also a score for it.
Add examples for question answering and question answering with
sources.
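A short usage sketch (assuming the chain is exposed as `QAEvalChain` in
langchain's evaluation module; the example data is invented):
```python
from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain

examples = [{"query": "What is 2 + 2?", "answer": "4"}]
predictions = [{"result": "2 + 2 equals 4."}]  # e.g. output of a QA chain

eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(examples, predictions)  # scores each answer
```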
Small quick fix:
Suggest making the order of the menu match the order on the page
(Getting Started -> Key Concepts). Previously, the menu order did not
match the page order. Not sure if this is the only place where the menu
is affected.
The mismatch can be seen here:
https://langchain.readthedocs.io/en/latest/modules/llms.html
Add the
[`logit_bias`](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias)
parameter to OpenAI.
See [here](https://beta.openai.com/tokenizer) for the tokenizer.
NB: I see that others (like Cohere) have the same parameter, but since I
don't have access to it, I don't want to make a mistake.
---
Just to make sure the default `{}` works for OpenAI:
```python
from langchain.llms import OpenAI

OPENAI_API_KEY = "XXX"

# Default: no logit bias applied.
llm = OpenAI(openai_api_key=OPENAI_API_KEY)
llm.generate(['Write "test":'])

# Bias strongly against the token IDs for "test" variants (see the
# tokenizer link above), so the completion should avoid emitting them.
llm = OpenAI(
    openai_api_key=OPENAI_API_KEY,
    logit_bias={"9288": -100, "1332": -100, "14402": -100, "6208": -100},
)
llm.generate(['Write "test":'])
```