forked from Archives/langchain
91d7fd20ae
I originally modified only `from_llm` to include the prompt, but I realized that if the input keys used by the custom prompt didn't match the default prompt's keys, it wouldn't work because of how `apply` works. So I changed the `evaluate` method to check whether the prompt is the default; if it isn't, it checks whether the input keys match the prompt's keys and updates the inputs accordingly. Let me know if there is a better way to do this. I also added the custom prompt to the QA eval notebook.
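A minimal sketch of the key-remapping idea described in the commit message, assuming the default evaluation keys are `query`/`answer`/`result`; the helper name and signature are hypothetical, not the actual LangChain API:

```python
def remap_inputs(examples, prompt_input_keys,
                 default_keys=("query", "answer", "result")):
    """Remap example dicts so their keys match a custom prompt's input keys.

    Hypothetical helper: if the custom prompt uses the default keys, the
    examples pass through unchanged; otherwise each default key is renamed
    positionally to the corresponding custom-prompt key so that `apply`
    receives inputs under the names the prompt expects.
    """
    if set(prompt_input_keys) == set(default_keys):
        return examples  # default prompt: nothing to remap
    mapping = dict(zip(default_keys, prompt_input_keys))
    return [{mapping.get(k, k): v for k, v in ex.items()} for ex in examples]


examples = [{"query": "What is 2+2?", "answer": "4", "result": "4"}]
remapped = remap_inputs(examples, ("question", "gold", "prediction"))
# → [{"question": "What is 2+2?", "gold": "4", "prediction": "4"}]
```

The positional mapping is one simple design choice; the actual patch may match keys differently.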
Committed 2 years ago.

| Name | Last modified |
|---|---|
| evaluation | 2 years ago |
| agents.md | 2 years ago |
| chatbots.md | 2 years ago |
| combine_docs.md | 2 years ago |
| evaluation.rst | 2 years ago |
| generate_examples.ipynb | 2 years ago |
| model_laboratory.ipynb | 2 years ago |
| question_answering.md | 2 years ago |
| summarization.md | 2 years ago |