pull/344/head
Elvis Saravia 5 months ago
parent deb57a911b
commit 9ff2eb3b8d

@@ -4,6 +4,26 @@ Due to high demand, we've partnered with Maven to deliver a new [cohort-based co
[Elvis Saravia](https://www.linkedin.com/in/omarsar/), who has worked at companies like Meta AI and Elastic, and has years of experience in AI and LLMs, will be the instructor for this course.
This hands-on course will cover prompt engineering techniques/tools, use cases, exercises, and projects for effectively working and building with large language models (LLMs).
This technical, hands-on course will cover prompt engineering techniques/tools, use cases, exercises, and projects for effectively working and building with large language models (LLMs).
Our past learners range from software engineers to AI researchers and practitioners in organizations like LinkedIn, Amazon, JPMorgan Chase & Co., Intuit, Fidelity Investments, Coinbase, Guru, and many others.
Topics we provide training on:
- Taxonomy of Prompting Techniques
- Tactics to Improve Reliability
- Structuring LLM Outputs
- Zero-shot Prompting
- Few-shot In-Context Learning
- Chain of Thought Prompting
- Self-Reflection & Self-Consistency
- ReAct
- Retrieval Augmented Generation
- Fine-Tuning & RLHF
- Function Calling
- AI Safety & Moderation
- LLM-Powered Agents
- LLM Evaluation
- Adversarial Prompting (Jailbreaking and Prompt Injections)
- Judge LLMs
- Common Real-World Use Cases of LLMs
Our past learners range from software engineers to AI researchers and practitioners in organizations like Microsoft, Google, Apple, Airbnb, LinkedIn, Amazon, JPMorgan Chase & Co., Asana, Intuit, Fidelity Investments, Coinbase, Guru, and many others.

@@ -6,7 +6,7 @@ Researchers use prompt engineering to improve the capacity of LLMs on a wide ran
Prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques that are useful for interacting and developing with LLMs. It's an important skill to interface, build with, and understand capabilities of LLMs. You can use prompt engineering to improve safety of LLMs and build new capabilities like augmenting LLMs with domain knowledge and external tools.
Motivated by the high interest in developing with LLMs, we have created this new prompt engineering guide that contains all the latest papers, learning guides, models, lectures, references, new LLM capabilities, and tools related to prompt engineering.
Motivated by the high interest in developing with LLMs, we have created this new prompt engineering guide that contains all the latest papers, advanced prompting techniques, learning guides, model-specific prompting guides, lectures, references, new LLM capabilities, and tools related to prompt engineering.
---
Due to high demand, we've partnered with Maven to deliver a new [cohort-based course on Prompt Engineering for LLMs](https://maven.com/dair-ai/prompt-engineering-llms).
@@ -15,4 +15,4 @@ Due to high demand, we've partnered with Maven to deliver a new [cohort-based co
This hands-on course will cover prompt engineering techniques/tools, use cases, exercises, and projects for effectively working and building with large language models (LLMs).
Our past learners range from software engineers to AI researchers and practitioners in organizations like LinkedIn, Amazon, JPMorgan Chase & Co., Intuit, Fidelity Investments, Coinbase, Guru, and many others.
Our past learners range from software engineers to AI researchers and practitioners in organizations like Microsoft, Google, Apple, Airbnb, LinkedIn, Amazon, JPMorgan Chase & Co., Asana, Intuit, Fidelity Investments, Coinbase, Guru, and many others.

@@ -1,7 +1,9 @@
# Introduction
Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs). Researchers use prompt engineering to improve the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.
Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently apply and build with large language models (LLMs) for a wide variety of applications and use cases.
This guide covers the basics of prompts to provide a rough idea of how to use prompts to interact and instruct LLMs.
Prompt engineering skills help to better understand the capabilities and limitations of LLMs. Researchers use prompt engineering to improve safety and the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.
All examples are tested with `text-davinci-003` using [OpenAI's playground](https://platform.openai.com/playground) unless otherwise specified. The model uses the default configurations, i.e., `temperature=0.7` and `top-p=1`.
This comprehensive guide covers the theory and practical aspects of prompt engineering and how to leverage the best prompting techniques to interact and build with LLMs.
All examples are tested with `gpt-3.5-turbo` using [OpenAI's playground](https://platform.openai.com/playground) unless otherwise specified. The model uses the default configurations, i.e., `temperature=0.7` and `top_p=1`. The prompts should work with other models that have similar capabilities to `gpt-3.5-turbo`, but you might get completely different results.
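For reference, here is a minimal sketch of that default setup, assuming the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the prompt text is just a placeholder:

```python
# Minimal sketch of the guide's default configuration, assuming the
# pre-1.0 `openai` Python package and an OPENAI_API_KEY env variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "The sky is"}],  # placeholder prompt
    temperature=0.7,  # the guide's default
    top_p=1,          # the guide's default
)
print(response["choices"][0]["message"]["content"])
```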

@@ -1,12 +1,12 @@
# LLM Settings
When working with prompts, you interact with the LLM via an API or directly. You can configure a few parameters to get different results for your prompts.
When designing and testing prompts, you typically interact with the LLM via an API. You can configure a few parameters to get different results for your prompts. Tweaking these settings is important for improving the reliability and desirability of responses, and it takes experimentation to figure out the proper settings for your use cases. Below are the common settings you will come across when using different LLM providers:
**Temperature** - In short, the lower the `temperature`, the more deterministic the results in the sense that the highest probable next token is always picked. Increasing temperature could lead to more randomness, which encourages more diverse or creative outputs. You are essentially increasing the weights of the other possible tokens. In terms of application, you might want to use a lower temperature value for tasks like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase the temperature value.
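Under the hood, temperature rescales the model's next-token logits before sampling. A small self-contained sketch (with made-up logits, not real model outputs) shows why low values make the top token dominate while high values flatten the distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, 0.2))  # ~[1.00, 0.00, 0.00]: near-deterministic
print(softmax_with_temperature(logits, 1.0))  # ~[0.84, 0.11, 0.04]: moderate spread
print(softmax_with_temperature(logits, 2.0))  # ~[0.63, 0.23, 0.14]: flatter, more random
```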
**Top_p** - Similarly, with `top_p`, the parameter of a sampling technique called nucleus sampling, you can control how deterministic the model is when generating a response. If you are looking for exact and factual answers, keep this low. If you are looking for more diverse responses, increase it to a higher value.
The general recommendation is to alter temperature or top_p, not both.
The general recommendation is to alter temperature or `top_p`, not both.
**Max Length** - You can manage the number of tokens the model generates by adjusting the 'max length'. Specifying a max length helps you prevent long or irrelevant responses and control costs.
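To make these settings concrete, here is a sketch of how they map onto OpenAI API parameters, assuming the same pre-1.0 `openai` package as above; `max_tokens` corresponds to the playground's max length, and the `ask` helper and example prompts are hypothetical:

```python
# Sketch: how the settings above map onto OpenAI API parameters.
import openai

def ask(prompt, temperature, top_p=1, max_tokens=256):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # lower = more deterministic
        top_p=top_p,              # alter this or temperature, not both
        max_tokens=max_tokens,    # caps response length and cost
    )
    return response["choices"][0]["message"]["content"]

# Fact-based QA: keep temperature low for concise, factual answers.
print(ask("What is the capital of France?", temperature=0))

# Creative task: raise temperature for more diverse output.
print(ask("Write a short poem about autumn.", temperature=1.2))
```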

@@ -1,12 +1,56 @@
# Our Services
## Professional Training
We provide professional training for organizations and startups to upskill their teams on prompt engineering for large language models (LLMs).
We provide professional training for organizations and startups, upskilling their workforce on prompt engineering, building with large language models (LLMs), and leveraging Generative AI for business.
Our training teaches how to efficiently and effectively use LLMs and leverage Generative AI for business. It covers the best and latest prompting techniques that you can apply to a variety of use cases, ranging from building long-article summarizers to prompt injection detectors to LLM-powered evaluators. The goal is to learn how to apply advanced prompting techniques to effectively build LLM-powered applications and products, and to use these skills for professional growth.
Topics we provide training on:
- Taxonomy of Prompting Techniques
- Tactics to Improve Reliability
- Structuring LLM Outputs
- Zero-shot Prompting
- Few-shot In-Context Learning
- Chain of Thought Prompting
- Self-Reflection & Self-Consistency
- ReAct
- Retrieval Augmented Generation
- Fine-Tuning & RLHF
- Function Calling
- AI Safety & Moderation
- LLM-Powered Agents
- LLM Evaluation
- Adversarial Prompting (Jailbreaking and Prompt Injections)
- Judge LLMs
- Common Real-World Use Cases of LLMs
... and much more
[Schedule A Call](https://calendly.com/elvisosaravia/dair-ai-professional-training)
## Consulting & Advisory
We provide consulting and advisory to extract business value from large language models (LLMs).
We provide technical consulting and advisory to extract business value from large language models (LLMs) and Generative AI more broadly. We can support your teams building with LLMs on topics including:
- Taxonomy of Prompting Techniques
- Tactics to Improve Reliability
- Structuring LLM Outputs
- Zero-shot Prompting
- Few-shot In-Context Learning
- Chain of Thought Prompting
- Self-Reflection & Self-Consistency
- ReAct
- Retrieval Augmented Generation
- Fine-Tuning & RLHF
- Function Calling
- AI Safety & Moderation
- LLM-Powered Agents
- LLM Evaluation
- Adversarial Prompting (Jailbreaking and Prompt Injections)
- Judge LLMs
- Common Real-World Use Cases of LLMs
... and much more
[Schedule A Call](https://calendly.com/elvisosaravia/dair-ai-consulting)
