Related resources from around the web

People are writing great tools and papers for improving outputs from GPT. Here are some cool ones we've seen:

Prompting libraries & tools

  • Guidance: A handy-looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.
  • LangChain: A popular Python/JavaScript library for chaining sequences of language model prompts (a minimal sketch of the chaining idea appears after this list).
  • FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices.
  • Chainlit: A Python library for making chatbot interfaces.
  • Guardrails.ai: A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.
  • Semantic Kernel: A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.
  • YiVal: An open-source GenAI-Ops tool for tuning and evaluating prompts, retrieval configurations, and model parameters using customizable datasets, evaluation methods, and evolution strategies.
  • Prompttools: Open-source Python tools for testing and evaluating models, vector DBs, and prompts.
  • Outlines: A Python library that provides a domain-specific language to simplify prompting and constrain generation.
  • Promptify: A small Python library for using language models to perform NLP tasks.
  • Scale Spellbook: A paid product for building, comparing, and shipping language model apps.
  • PromptPerfect: A paid product for testing and improving prompts.
  • Weights & Biases: A paid product for tracking model training and prompt engineering experiments.
  • OpenAI Evals: An open-source library for evaluating task performance of language models and prompts.
  • LlamaIndex: A Python library for augmenting LLM apps with data.
  • Arthur Shield: A paid product for detecting toxicity, hallucination, prompt injection, etc.
  • LMQL: A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.
  • Haystack: Open-source LLM orchestration framework to build customizable, production-ready LLM applications in Python.
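
To make the "prompt chaining" idea behind several of these libraries concrete, here is a minimal, library-free sketch in which the output of one model call is fed into the next prompt. It assumes the pre-1.0 `openai` Python SDK with an OPENAI_API_KEY set in the environment; the model name and prompts are illustrative, and frameworks like LangChain or Haystack wrap this same pattern with templating, retries, and tracing.

```python
import openai

def complete(prompt: str) -> str:
    """Run one chat completion call and return the assistant's text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

article = "..."  # source text to process

# Step 1: summarize the article.
summary = complete(f"Summarize the following text in two sentences:\n\n{article}")

# Step 2: chain the summary into a second prompt.
title = complete(f"Write a short, descriptive title for this summary:\n\n{summary}")

print(title)
```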

Prompting guides

Video courses

Papers on advanced prompting to improve reasoning