From f23b839ccab6b30e17b03f8b21fe1a7e0e21f5d7 Mon Sep 17 00:00:00 2001
From: Seiya Sasaki
Date: Wed, 12 Apr 2023 21:24:42 +0900
Subject: [PATCH] add some Japanese translation for the page of Model Collection (#77)

---
 pages/models/collection.jp.mdx | 48 +++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/pages/models/collection.jp.mdx b/pages/models/collection.jp.mdx
index 0e5b99d..9f8c46a 100644
--- a/pages/models/collection.jp.mdx
+++ b/pages/models/collection.jp.mdx
@@ -1,32 +1,32 @@
-# Model Collection
+# モデル一覧
 import { Callout, FileTree } from 'nextra-theme-docs'
- This section is under heavy development.
+ このセクションの内容は、鋭意開発進行中です。
-This section consists of a collection and summary of notable and foundational LLMs. (Data adopted from [Papers with Code](https://paperswithcode.com/methods/category/language-models) and the recent work by [Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf).
+このセクションには、注目すべきLLMの基礎技術(モデル)の一覧とその概要をまとめています([Papers with Code](https://paperswithcode.com/methods/category/language-models)と[Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf) による直近の研究成果を元に一覧を作成しています)。
 ## Models
-| Model | Release Date | Description |
+| モデル名 | 発表された年 | 概要説明 |
 | --- | --- | --- |
-| [BERT](https://arxiv.org/abs/1810.04805)| 2018 | Bidirectional Encoder Representations from Transformers |
-| [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) | 2018 | Improving Language Understanding by Generative Pre-Training |
-| [RoBERTa](https://arxiv.org/abs/1907.11692) | 2019 | A Robustly Optimized BERT Pretraining Approach |
-| [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) | 2019 | Language Models are Unsupervised Multitask Learners |
-| [T5](https://arxiv.org/abs/1910.10683) | 2019 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
-| [BART](https://arxiv.org/abs/1910.13461) | 2019 | Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension |
-| [ALBERT](https://arxiv.org/abs/1909.11942) |2019 | A Lite BERT for Self-supervised Learning of Language Representations |
-| [XLNet](https://arxiv.org/abs/1906.08237) | 2019 | Generalized Autoregressive Pretraining for Language Understanding and Generation |
-| [CTRL](https://arxiv.org/abs/1909.05858) |2019 | CTRL: A Conditional Transformer Language Model for Controllable Generation |
-| [ERNIE](https://arxiv.org/abs/1904.09223v1) | 2019| ERNIE: Enhanced Representation through Knowledge Integration |
-| [GShard](https://arxiv.org/abs/2006.16668v1) | 2020 | GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding |
-| [GPT-3](https://arxiv.org/abs/2005.14165) | 2020 | Language Models are Few-Shot Learners |
-| [LaMDA](https://arxiv.org/abs/2201.08239v3) | 2021 | LaMDA: Language Models for Dialog Applications |
-| [PanGu-α](https://arxiv.org/abs/2104.12369v1) | 2021 | PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation |
-| [mT5](https://arxiv.org/abs/2010.11934v3) | 2021 | mT5: A massively multilingual pre-trained text-to-text transformer |
+| [BERT](https://arxiv.org/abs/1810.04805)| 2018 | Transformer による双方向(Bidirectional)エンコーダーの特徴表現を利用したモデル |
+| [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) | 2018 | 事前学習を利用した生成モデルにより、自然言語の理解を進展させた |
+| [RoBERTa](https://arxiv.org/abs/1907.11692) | 2019 | 頑健性(Robustness)を重視して BERT を最適化する事前学習のアプローチ |
+| [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) | 2019 | 自然言語モデルが、教師なし学習によってマルチタスクをこなせるようになるということを実証 |
+| [T5](https://arxiv.org/abs/1910.10683) | 2019 | フォーマットを統一した Text-to-Text Transformer を用いて、転移学習の限界を探索 |
+| [BART](https://arxiv.org/abs/1910.13461) | 2019 | 自然言語の生成、翻訳、理解のための、ノイズ除去(Denoising)型 Sequence-to-Sequence 事前学習 |
+| [ALBERT](https://arxiv.org/abs/1909.11942) | 2019 | 言語表現を自己教師学習するための BERT 軽量(Lite)化モデル |
+| [XLNet](https://arxiv.org/abs/1906.08237) | 2019 | 自然言語の理解と生成のための自己回帰事前学習の一般化 |
+| [CTRL](https://arxiv.org/abs/1909.05858) | 2019 | CTRL: 生成モデルをコントロール可能にするための、条件付き Transformer 言語モデル |
+| [ERNIE](https://arxiv.org/abs/1904.09223v1) | 2019 | ERNIE: 知識の統合を通じて特徴表現を高度化 |
+| [GShard](https://arxiv.org/abs/2006.16668v1) | 2020 | GShard: 条件付き演算と自動シャーディング(Sharding)を用いた巨大モデルのスケーリング |
+| [GPT-3](https://arxiv.org/abs/2005.14165) | 2020 | 自然言語モデルが、 Few-Shot で十分学習できるということを実証 |
+| [LaMDA](https://arxiv.org/abs/2201.08239v3) | 2021 | LaMDA: 対話(Dialogue)アプリケーションのための自然言語モデル |
+| [PanGu-α](https://arxiv.org/abs/2104.12369v1) | 2021 | PanGu-α: 自動並列演算を用いて自己回帰事前学習された、中国語大規模言語モデル |
+| [mT5](https://arxiv.org/abs/2010.11934v3) | 2021 | mT5: 多言語で大規模に事前学習された text-to-text transformer |
 | [CPM-2](https://arxiv.org/abs/2106.10715v3) | 2021 | CPM-2: Large-scale Cost-effective Pre-trained Language Models |
 | [T0](https://arxiv.org/abs/2110.08207) |2021 |Multitask Prompted Training Enables Zero-Shot Task Generalization |
 | [HyperCLOVA](https://arxiv.org/abs/2109.04650) | 2021 | What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers |
@@ -37,8 +37,8 @@
 | [MT-NLG](https://arxiv.org/abs/2201.11990v3) | 2021 | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model|
 | [Yuan 1.0](https://arxiv.org/abs/2110.04725v2) | 2021| Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning |
 | [WebGPT](https://arxiv.org/abs/2112.09332v3) | 2021 | WebGPT: Browser-assisted question-answering with human feedback |
-| [Gopher](https://arxiv.org/abs/2112.11446v2) |2021 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher |
-| [ERNIE 3.0 Titan](https://arxiv.org/abs/2112.12731v1) |2021 | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation |
+| [Gopher](https://arxiv.org/abs/2112.11446v2) | 2021 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher |
+| [ERNIE 3.0 Titan](https://arxiv.org/abs/2112.12731v1) | 2021 | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation |
 | [GLaM](https://arxiv.org/abs/2112.06905) | 2021 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
 | [InstructGPT](https://arxiv.org/abs/2203.02155v1) | 2022 | Training language models to follow instructions with human feedback |
 | [GPT-NeoX-20B](https://arxiv.org/abs/2204.06745v1) | 2022 | GPT-NeoX-20B: An Open-Source Autoregressive Language Model |
@@ -47,7 +47,7 @@
 | [Chinchilla](https://arxiv.org/abs/2203.15556) | 2022 | Shows that for a compute budget, the best performances are not achieved by the largest models but by smaller models trained on more data. |
 | [Tk-Instruct](https://arxiv.org/abs/2204.07705v3) | 2022 | Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks |
 | [UL2](https://arxiv.org/abs/2205.05131v3) | 2022 | UL2: Unifying Language Learning Paradigms |
-| [PaLM](https://arxiv.org/abs/2204.02311v5) |2022| PaLM: Scaling Language Modeling with Pathways |
+| [PaLM](https://arxiv.org/abs/2204.02311v5) | 2022 | PaLM: Scaling Language Modeling with Pathways |
 | [OPT](https://arxiv.org/abs/2205.01068) | 2022 | OPT: Open Pre-trained Transformer Language Models |
 | [BLOOM](https://arxiv.org/abs/2211.05100v3) | 2022 | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model |
 | [GLM-130B](https://arxiv.org/abs/2210.02414v1) | 2022 | GLM-130B: An Open Bilingual Pre-trained Model |
@@ -59,6 +59,6 @@
 | [Galactica](https://arxiv.org/abs/2211.09085v1) | 2022 | Galactica: A Large Language Model for Science |
 | [OPT-IML](https://arxiv.org/abs/2212.12017v3) | 2022 | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization |
 | [LLaMA](https://arxiv.org/abs/2302.13971v1) | 2023 | LLaMA: Open and Efficient Foundation Language Models |
-| [GPT-4](https://arxiv.org/abs/2303.08774v3) | 2023 |GPT-4 Technical Report |
+| [GPT-4](https://arxiv.org/abs/2303.08774v3) | 2023 | GPT-4 Technical Report |
 | [PanGu-Σ](https://arxiv.org/abs/2303.10845v1) | 2023 | PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing |
-| [BloombergGPT](https://arxiv.org/abs/2303.17564v1)| 2023 |BloombergGPT: A Large Language Model for Finance|
\ No newline at end of file
+| [BloombergGPT](https://arxiv.org/abs/2303.17564v1)| 2023 |BloombergGPT: A Large Language Model for Finance |