# Model Collection

import { Callout, FileTree } from 'nextra-theme-docs'

<Callout emoji="⚠️">
  This section is under heavy development.
</Callout>
This section consists of a collection and summary of notable and foundational LLMs. (Data adapted from [Papers with Code](https://paperswithcode.com/methods/category/language-models) and the recent work by [Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf).)

## Models

| Model | Release Date | Description |
| --- | --- | --- |
| [BERT](https://arxiv.org/abs/1810.04805) | 2018 | Bidirectional Encoder Representations from Transformers |
| [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) | 2018 | Improving Language Understanding by Generative Pre-Training |
| [RoBERTa](https://arxiv.org/abs/1907.11692) | 2019 | A Robustly Optimized BERT Pretraining Approach |
| [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) | 2019 | Language Models are Unsupervised Multitask Learners |
| [T5](https://arxiv.org/abs/1910.10683) | 2019 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| [BART](https://arxiv.org/abs/1910.13461) | 2019 | Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension |
| [ALBERT](https://arxiv.org/abs/1909.11942) | 2019 | A Lite BERT for Self-supervised Learning of Language Representations |
| [XLNet](https://arxiv.org/abs/1906.08237) | 2019 | Generalized Autoregressive Pretraining for Language Understanding |
| [CTRL](https://arxiv.org/abs/1909.05858) | 2019 | CTRL: A Conditional Transformer Language Model for Controllable Generation |
| [ERNIE](https://arxiv.org/abs/1904.09223v1) | 2019 | ERNIE: Enhanced Representation through Knowledge Integration |
| [GShard](https://arxiv.org/abs/2006.16668v1) | 2020 | GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding |
| [GPT-3](https://arxiv.org/abs/2005.14165) | 2020 | Language Models are Few-Shot Learners |
| [LaMDA](https://arxiv.org/abs/2201.08239v3) | 2021 | LaMDA: Language Models for Dialog Applications |
| [PanGu-α](https://arxiv.org/abs/2104.12369v1) | 2021 | PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation |
| [mT5](https://arxiv.org/abs/2010.11934v3) | 2021 | mT5: A massively multilingual pre-trained text-to-text transformer |
| [CPM-2](https://arxiv.org/abs/2106.10715v3) | 2021 | CPM-2: Large-scale Cost-effective Pre-trained Language Models |
| [T0](https://arxiv.org/abs/2110.08207) | 2021 | Multitask Prompted Training Enables Zero-Shot Task Generalization |
| [HyperCLOVA](https://arxiv.org/abs/2109.04650) | 2021 | What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers |
| [Codex](https://arxiv.org/abs/2107.03374v2) | 2021 | Evaluating Large Language Models Trained on Code |
| [ERNIE 3.0](https://arxiv.org/abs/2107.02137v1) | 2021 | ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation |
| [Jurassic-1](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf) | 2021 | Jurassic-1: Technical Details and Evaluation |
| [FLAN](https://arxiv.org/abs/2109.01652v5) | 2021 | Finetuned Language Models Are Zero-Shot Learners |
| [MT-NLG](https://arxiv.org/abs/2201.11990v3) | 2021 | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model |
| [Yuan 1.0](https://arxiv.org/abs/2110.04725v2) | 2021 | Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning |
| [WebGPT](https://arxiv.org/abs/2112.09332v3) | 2021 | WebGPT: Browser-assisted question-answering with human feedback |
| [Gopher](https://arxiv.org/abs/2112.11446v2) | 2021 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher |
| [ERNIE 3.0 Titan](https://arxiv.org/abs/2112.12731v1) | 2021 | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation |
| [GLaM](https://arxiv.org/abs/2112.06905) | 2021 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
| [InstructGPT](https://arxiv.org/abs/2203.02155v1) | 2022 | Training language models to follow instructions with human feedback |
| [GPT-NeoX-20B](https://arxiv.org/abs/2204.06745v1) | 2022 | GPT-NeoX-20B: An Open-Source Autoregressive Language Model |
| [AlphaCode](https://arxiv.org/abs/2203.07814v1) | 2022 | Competition-Level Code Generation with AlphaCode |
| [CodeGen](https://arxiv.org/abs/2203.13474v5) | 2022 | CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis |
| [Chinchilla](https://arxiv.org/abs/2203.15556) | 2022 | Shows that for a compute budget, the best performances are not achieved by the largest models but by smaller models trained on more data. |
| [Tk-Instruct](https://arxiv.org/abs/2204.07705v3) | 2022 | Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks |
| [UL2](https://arxiv.org/abs/2205.05131v3) | 2022 | UL2: Unifying Language Learning Paradigms |
| [PaLM](https://arxiv.org/abs/2204.02311v5) | 2022 | PaLM: Scaling Language Modeling with Pathways |
| [OPT](https://arxiv.org/abs/2205.01068) | 2022 | OPT: Open Pre-trained Transformer Language Models |
| [BLOOM](https://arxiv.org/abs/2211.05100v3) | 2022 | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model |
| [GLM-130B](https://arxiv.org/abs/2210.02414v1) | 2022 | GLM-130B: An Open Bilingual Pre-trained Model |
| [AlexaTM](https://arxiv.org/abs/2208.01448v2) | 2022 | AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model |
| [Flan-T5](https://arxiv.org/abs/2210.11416v5) | 2022 | Scaling Instruction-Finetuned Language Models |
| [Sparrow](https://arxiv.org/abs/2209.14375) | 2022 | Improving alignment of dialogue agents via targeted human judgements |
| [U-PaLM](https://arxiv.org/abs/2210.11399v2) | 2022 | Transcending Scaling Laws with 0.1% Extra Compute |
| [mT0](https://arxiv.org/abs/2211.01786v1) | 2022 | Crosslingual Generalization through Multitask Finetuning |
| [Galactica](https://arxiv.org/abs/2211.09085v1) | 2022 | Galactica: A Large Language Model for Science |
| [OPT-IML](https://arxiv.org/abs/2212.12017v3) | 2022 | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization |
| [LLaMA](https://arxiv.org/abs/2302.13971v1) | 2023 | LLaMA: Open and Efficient Foundation Language Models |
| [GPT-4](https://arxiv.org/abs/2303.08774v3) | 2023 | GPT-4 Technical Report |
| [PanGu-Σ](https://arxiv.org/abs/2303.10845v1) | 2023 | PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing |
| [BloombergGPT](https://arxiv.org/abs/2303.17564v1) | 2023 | BloombergGPT: A Large Language Model for Finance |
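
Several of the models above (e.g., Flan-T5, T0, BLOOM, OPT, GPT-NeoX-20B) are openly released and can be tried out directly. Below is a minimal sketch, assuming the Hugging Face `transformers` library and PyTorch are installed; the `google/flan-t5-base` checkpoint and the prompt are illustrative only, and larger checkpoints expose the same interface.

```python
# Prompt an instruction-tuned model from the table above via Hugging Face transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"  # illustrative checkpoint; swap in another seq2seq model as needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Answer the question: What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")   # tokenize the prompt into PyTorch tensors
outputs = model.generate(**inputs, max_new_tokens=32)  # greedy decoding by default
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Decoder-only models in the table (e.g., OPT, GPT-NeoX-20B, BLOOM) can be loaded the same way through `AutoModelForCausalLM`.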