pull/375/head
Elvis Saravia 3 months ago
parent 9a6e57bb55
commit 1498f4b279

.gitignore

@ -9,7 +9,9 @@ notebooks/__pycache__/
notebooks/state_of_the_union.txt
notebooks/chroma_logs.log
notebooks/.chroma/
notebooks/local_notebooks/
notebooks/.env
pages/research/local_research/
.DS_Store
.vscode

7 binary image files added (previews not shown); sizes: 65 KiB, 15 KiB, 56 KiB, 188 KiB, 191 KiB, 169 KiB, 223 KiB.

@ -964,6 +964,13 @@
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, here is the final system prompt, few-shot demonstrations, and final user question:"
]
},
{
"cell_type": "code",
"execution_count": 129,
@ -1151,6 +1158,87 @@
"print(chat_completion.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Code Infilling\n",
"\n",
"Code infilling deals with predicting missing code given preceding and subsequent code blocks as input. This is particularly important for building applications that enable code completion features like type inferencing and docstring generation.\n",
"\n",
"For this example, we will be using the Code Llama 70B Instruct model hosted by [Fireworks AI](https://fireworks.ai/) as together.ai didn't support this feature as the time of writing this tutorial.\n",
"\n",
"We first need to get a `FIREWORKS_API_KEY` and install the fireworks Python client."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"%%capture\n",
"!pip install fireworks-ai"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"import fireworks.client\n",
"from dotenv import load_dotenv\n",
"import os\n",
"load_dotenv()\n",
"\n",
"fireworks.client.api_key = os.getenv(\"FIREWORKS_API_KEY\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Sort the list in descending order.\n",
" 2. Return the first two elements of the sorted list.\n",
"\n",
"Here's the corrected code:\n",
"\n",
"```\n",
"def two_largest_numbers(numbers: List[Number]) -> Tuple[Number]:\n",
" sorted_numbers = sorted(numbers, reverse=True)\n",
" max = sorted_numbers[0]\n",
" second_max = sorted_numbers[1]\n",
" return max, second_\n"
]
}
],
"source": [
"prefix ='''\n",
"def two_largest_numbers(list: List[Number]) -> Tuple[Number]:\n",
" max = None\n",
" second_max = None\n",
" '''\n",
"suffix = '''\n",
" return max, second_max\n",
"'''\n",
"response = await fireworks.client.ChatCompletion.acreate(\n",
" model=\"accounts/fireworks/models/llama-v2-70b-code-instruct\",\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": prefix}, # FIX HERE\n",
" {\"role\": \"user\", \"content\": suffix}, # FIX HERE\n",
" ],\n",
" max_tokens=100,\n",
" temperature=0,\n",
")\n",
"print(response.choices[0].message.content)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},

@ -8,5 +8,6 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Col·lecció de Models"
}

@ -8,5 +8,6 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "LLM-Sammlung"
}

@ -1,13 +1,14 @@
{
"flan": "Flan",
"chatgpt": "ChatGPT",
"llama": "LLaMA",
"code-llama": "Code Llama",
"flan": "Flan",
"gemini": "Gemini",
"gpt-4": "GPT-4",
"llama": "LLaMA",
"mistral-7b": "Mistral 7B",
"gemini": "Gemini",
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"phi-2": "Phi-2",
"collection": "LLM Collection"
}

@ -8,5 +8,6 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Listado de LLMs"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Model Collection"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Collection de modèles"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Collezione di Modelli"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Model Collection"
}

@ -8,5 +8,6 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Model Collection"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Model Collection"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Коллекция LLM"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "LLM Koleksiyonu"
}

@ -8,6 +8,7 @@
"phi-2": "Phi-2",
"mixtral": "Mixtral",
"code-llama": "Code Llama",
"olmo": "OLMo",
"collection": "Model Collection"
}

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,62 @@
# OLMo
In this guide, we provide an overview of the Open Language Model (OLMo), including prompts and usage examples. The guide also includes tips, applications, limitations, papers, and additional reading materials related to OLMo.
## Introduction to OLMo
The Allen Institute for AI has [released](https://blog.allenai.org/olmo-open-language-model-87ccfc95f580) a new open language model and framework called OLMo. This effort is meant to provide full access to the data, training code, models, and evaluation code so as to accelerate the collective study of language models.
The first release includes four variants at the 7B parameter scale and one model at the 1B scale, all trained on at least 2T tokens. This is the first of many planned releases, which will also include an upcoming 65B OLMo model.
!["OLMo Models"](../../img/olmo/olmo-models.png)
The release includes:
- the full training data, including the [code](https://github.com/allenai/dolma) that produces it
- full model weights, [training code](https://github.com/allenai/OLMo), logs, metrics, and inference code
- several checkpoints per model
- [evaluation code](https://github.com/allenai/OLMo-Eval)
- fine-tuning code
All the code, weights, and intermediate checkpoints are released under the [Apache 2.0 License](https://github.com/allenai/OLMo#Apache-2.0-1-ov-file).
## OLMo-7B
Both OLMo-7B and OLMo-1B adopt a decoder-only transformer architecture that incorporates improvements from models such as PaLM and Llama:
- no bias terms
- a non-parametric layer norm
- the SwiGLU activation function (a minimal sketch of this block follows the list)
- rotary positional embeddings (RoPE)
- a vocabulary of 50,280 tokens
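For intuition, here is a minimal sketch of a bias-free feed-forward block with a SwiGLU activation, covering the first and third items above. The class name and dimensions are illustrative placeholders, not OLMo's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """Illustrative bias-free feed-forward block with a SwiGLU activation."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # No bias terms, matching the "no biases" design choice above.
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: SiLU(x W_gate) multiplied elementwise by x W_up, then projected back down.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
```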
## Dolma Dataset
The release also includes a pre-training dataset called [Dolma](https://github.com/allenai/dolma) -- a diverse, multi-source corpus of 3 trillion tokens across 5B documents acquired from 7 different data sources. The creation of Dolma involves steps like language filtering, quality filtering, content filtering, deduplication, multi-source mixing, and tokenization.
!["Dolma Dataset"](../../img/olmo/dolma-dataset.png)
The training data is a 2T-token sample from Dolma. Documents are tokenized, a special `EOS` token is appended to the end of each document, and the tokens are concatenated into a single stream. Training instances are consecutive 2048-token chunks of this stream, which are then shuffled.
More training details and hardware specifications can be found in the paper.
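As a rough illustration of this concatenate-and-chunk scheme (not the actual OLMo/Dolma pipeline; `tokenizer` and `eos_id` are placeholder assumptions):

```python
import random

def build_training_instances(documents, tokenizer, eos_id, seq_len=2048):
    # Tokenize each document, append EOS, and concatenate everything into one token stream.
    stream = []
    for doc in documents:
        stream.extend(tokenizer.encode(doc))
        stream.append(eos_id)
    # Cut the stream into consecutive fixed-length chunks (the remainder is dropped here).
    chunks = [stream[i:i + seq_len] for i in range(0, len(stream) - seq_len + 1, seq_len)]
    # Shuffle the resulting training instances.
    random.shuffle(chunks)
    return chunks
```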
## Results
The models are evaluated on downstream tasks using the [Catwalk](https://github.com/allenai/catwalk) framework. The OLMo models are compared to several other publicly available models such as Falcon and Llama 2. Specifically, the models are evaluated on a set of tasks that measure commonsense reasoning abilities. The downstream evaluation suite includes datasets like `piqa` and `hellaswag`. The authors perform zero-shot evaluation using rank classification (i.e., completions are ranked by likelihood; a minimal sketch of this scoring appears after the chart) and report accuracy. OLMo-7B outperforms all other models on 2 end-tasks and remains in the top 3 on 8 of 9 end-tasks. See a summary of the results in the chart below.
!["OLMo Results"](../../img/olmo/olmo-results.png)
## Prompting Guide for OLMo
Coming soon...
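In the meantime, here is a minimal sketch of prompting OLMo-7B through Hugging Face `transformers`, assuming the `allenai/OLMo-7B` checkpoint and the `ai2-olmo` integration package from the release; check the model card for the exact setup.

```python
# Assumed setup (see the OLMo model card for the exact instructions):
#   pip install ai2-olmo transformers
import hf_olmo  # registers the OLMo architecture with transformers (provided by ai2-olmo; assumed)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")

# OLMo-7B is a base language model, so prompt it with text to continue rather than chat turns.
prompt = "Language modeling is "
inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False)
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```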
---
Figures source: [OLMo: Accelerating the Science of Language Models](https://allenai.org/olmo/olmo-paper.pdf)
## References
- [OLMo: Open Language Model](https://blog.allenai.org/olmo-open-language-model-87ccfc95f580)
- [OLMo: Accelerating the Science of Language Models](https://allenai.org/olmo/olmo-paper.pdf)

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# OLMo
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -4,6 +4,7 @@ The following are the latest papers (sorted by release date) on prompt engineeri
## Overviews
- [A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions](https://arxiv.org/abs/2311.05232) (November 2023)
- [An RL Perspective on RLHF, Prompting, and Beyond](https://arxiv.org/abs/2310.06147) (October 2023)
- [Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation](https://arxiv.org/abs/2305.16938) (May 2023)
- [Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study](https://arxiv.org/abs/2305.13860) (May 2023)
