Mirror of https://github.com/hwchase17/langchain, synced 2024-11-08 07:10:35 +00:00.
Commit 91d7fd20ae
I originally modified only `from_llm` to include the prompt, but I realized that if the input keys used by the custom prompt didn't match the default prompt's keys, it wouldn't work because of how `apply` works. So I made some changes to the `evaluate` method: it checks whether the prompt is the default, and if not, it checks whether the input keys match the prompt's keys and updates the inputs appropriately. Let me know if there is a better way to do this. I also added the custom prompt to the QA eval notebook.
evaluation
agents.md
chatbots.md
combine_docs.md
evaluation.rst
generate_examples.ipynb
model_laboratory.ipynb
question_answering.md
summarization.md
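The key-handling change described above can be sketched as follows. This is a minimal, self-contained illustration, not LangChain's actual implementation: the function name `remap_inputs` and the default key names (`query`, `answer`, `result`) are assumptions for the sketch.

```python
# Hypothetical default input keys used by the built-in QA eval prompt.
DEFAULT_PROMPT_INPUT_KEYS = ["query", "answer", "result"]

def remap_inputs(examples, predictions, prompt_input_keys):
    """Map example/prediction dicts onto the keys a custom prompt expects.

    If the custom prompt uses the default keys, merge and pass the inputs
    through unchanged; otherwise pair each custom prompt key with the
    corresponding default key by position and rename accordingly, so that
    `apply` receives dicts keyed the way the prompt expects.
    """
    merged_rows = [{**ex, **pred} for ex, pred in zip(examples, predictions)]
    if list(prompt_input_keys) == DEFAULT_PROMPT_INPUT_KEYS:
        return merged_rows
    return [
        {
            custom_key: row[default_key]
            for custom_key, default_key in zip(
                prompt_input_keys, DEFAULT_PROMPT_INPUT_KEYS
            )
        }
        for row in merged_rows
    ]
```

For example, a custom prompt with input keys `["question", "truth", "prediction"]` would have each example's `query`/`answer`/`result` values renamed to those keys before being passed to `apply`.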