Love the project, a ton of fun!
I think the PR is pretty self-explanatory, happy to make any changes! I
am working on using it in an `LLMBashChain` and may update as that
progresses.
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Add support for calling HuggingFace embedding models
using the HuggingFaceHub Inference API. New class mirrors
the existing HuggingFaceHub LLM implementation. Currently
only supports 'sentence-transformers' models.
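A minimal usage sketch, assuming the new class is exposed as `HuggingFaceHubEmbeddings` and picks up `HUGGINGFACEHUB_API_TOKEN` from the environment (the `repo_id` shown is just an example):

```python
from langchain.embeddings import HuggingFaceHubEmbeddings

# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
# Only 'sentence-transformers' models are supported for now.
embeddings = HuggingFaceHubEmbeddings(
    repo_id="sentence-transformers/all-mpnet-base-v2",
)
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
query_vector = embeddings.embed_query("hello world")
```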
Closes #86
Add MemoryChain and ConversationChain as chains that take a docstore in
addition to the prompt, and use the docstore to stuff context into the
prompt. This can be used to have an ongoing conversation with a chatbot.
This probably needs a bit of refactoring for code quality.
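A rough sketch of the intended usage (the constructor arguments here are illustrative and may change with that refactoring):

```python
from langchain.chains import ConversationChain
from langchain.docstore import InMemoryDocstore
from langchain.llms import OpenAI

# Illustrative only: the chain takes a docstore in addition to the LLM/prompt.
chain = ConversationChain(llm=OpenAI(temperature=0), docstore=InMemoryDocstore({}))

chain.run("Hi there! My name is Alice.")
# Earlier turns are stuffed back into the prompt from the docstore,
# so the model can answer follow-ups about them.
chain.run("What is my name?")
```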
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Also updated the docs, and noticed an issue with the add_texts method on
VectorStores that I had missed before -- the metadatas arg should be
required so that it matches the classmethod which initializes the
VectorStores (otherwise the add_example methods in the ExampleSelectors break).
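For concreteness, a simplified sketch of the aligned signatures (the real methods take additional arguments):

```python
from typing import Any, Iterable, List, Optional


class VectorStore:
    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
    ) -> List[str]:
        """Add texts, with metadatas lining up one-to-one with texts."""
        ...

    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Any,
        metadatas: Optional[List[dict]] = None,
    ) -> "VectorStore":
        """Construct a store; add_texts must accept the same metadatas arg."""
        ...
```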
This will break at the moment, but I wanted to get thoughts on the implementation:
1. Should add() be on the Docstore interface?
2. Should InMemoryDocstore change to take a list of documents in its init?
(This makes it slightly easier to implement in FAISS -- if we think it is
less clean, we could instead expose a method to get the number of documents
currently in the dict, and perform the logic of creating the necessary
dictionary in the FAISS.add_texts method.) A rough sketch of add() is below.
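Something like this, on the current dict-backed InMemoryDocstore (names are illustrative, not final):

```python
from typing import Dict

from langchain.docstore.document import Document


class InMemoryDocstore:
    """Sketch: simple docstore backed by a dict of id -> Document."""

    def __init__(self, _dict: Dict[str, Document]):
        self._dict = _dict

    def add(self, texts: Dict[str, Document]) -> None:
        """Add new documents, refusing to overwrite existing ids."""
        overlapping = set(texts).intersection(self._dict)
        if overlapping:
            raise ValueError(f"Tried to add ids that already exist: {overlapping}")
        self._dict = {**self._dict, **texts}
```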
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
`SQLDatabase` now accepts two `__init__` arguments:
1. `ignore_tables` to pass in a list of tables to not search over
2. `include_tables` to restrict to a list of tables to consider
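A quick sketch of the two options (the sqlite URL and table names are made up):

```python
from sqlalchemy import create_engine

from langchain.sql_database import SQLDatabase

engine = create_engine("sqlite:///example.db")

# Restrict the chain to a known subset of tables...
db = SQLDatabase(engine, include_tables=["users", "orders"])

# ...or consider everything except the listed tables.
db = SQLDatabase(engine, ignore_tables=["alembic_version"])
```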
This is a simple proof of concept of using external files as templates.
I'm still feeling my way around the codebase.
As a user, I want to use files as prompts, so it will be easier to
manage and test prompts.
The future direction is to use a template engine, most likely Mako.
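A minimal sketch of the idea, assuming a plain-text template file and the existing `PromptTemplate` class (the file path and contents are made up):

```python
from langchain.prompts import PromptTemplate

# prompts/summarize.txt might contain:
#   Summarize the following text in one sentence:
#   {text}
with open("prompts/summarize.txt") as f:
    template = f.read()

prompt = PromptTemplate(input_variables=["text"], template=template)
print(prompt.format(text="LangChain composes LLM calls into chains."))
```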
This fixes Issue #104.
The tests for HF Embeddings are skipped because of the segfault issue
mentioned there. Perhaps a new issue should be created for that?
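The skip looks roughly like this (the test name and reason string are illustrative):

```python
import pytest


@pytest.mark.skip(reason="Segfaults locally, see Issue #104")
def test_huggingface_embedding_documents() -> None:
    """Skipped until the segfault in the HF embeddings tests is resolved."""
    ...
```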
Lots of kwargs! Generation docs here:
https://docs.nlpcloud.com/#generation
This somewhat breaks the paradigm introduced in the LLM base class, as the
stop sequence isn't a list and should rightfully be introduced at
initialization of the class, along with the other kwargs that
depend on its presence (e.g. remove_end_sequence, etc.). Curious if you'd
want to refactor the LLM base class to take stop out as a specific named
kwarg?
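A sketch of what construction looks like with stop handled at init time (the wrapper name `NLPCloud`, the model name, and the exact kwarg names are assumptions based on the generation docs):

```python
from langchain.llms import NLPCloud

# end_sequence is a single string, not a list, and remove_end_sequence
# only makes sense when end_sequence is set -- hence configuring stop
# at initialization rather than per-call.
llm = NLPCloud(
    model_name="gpt-j",
    max_length=256,
    end_sequence="\n###",
    remove_end_sequence=True,
)
print(llm("Write a one-line product tagline:"))
```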
Add support for HuggingFace Hub
I could not find a good way to enforce stop tokens over the HuggingFace
Hub API -- that will hopefully be cleaned up in the future.
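In the meantime, stop tokens can be enforced client-side by truncating the completion, along these lines (a sketch; the helper name is illustrative):

```python
import re
from typing import List


def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    """Cut the generated text at the first occurrence of any stop sequence."""
    return re.split("|".join(re.escape(s) for s in stop), text)[0]


# e.g. strip everything from the first "\n\n" onwards
print(enforce_stop_tokens("Paris.\n\nQuestion: ...", ["\n\n"]))  # -> "Paris."
```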