From 9a5b2f718343d6e8be0c208fc4353b04508b6f0d Mon Sep 17 00:00:00 2001
From: Bhanu
Date: Thu, 14 Sep 2023 13:46:31 -0400
Subject: [PATCH] Update collection.en.mdx

Add Falcon 180B model and fix the Falcon-7B link
---
 pages/models/collection.en.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/models/collection.en.mdx b/pages/models/collection.en.mdx
index 3a80d5d..7940cf4 100644
--- a/pages/models/collection.en.mdx
+++ b/pages/models/collection.en.mdx
@@ -8,7 +8,7 @@ This section consists of a collection and summary of notable and foundational LL
 | Model | Release Date | Size (B) | Checkpoints | Description |
 | --- | --- | --- | --- | --- |
-| [Falcon LLM](https://falconllm.tii.ae/) | May 2023 | 7, 40 | [Falcon-7B](https://huggingface.co/tiiuae), [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | Falcon LLM is a foundational large language model (LLM) with 40 billion parameters trained on one trillion tokens. TII has now released Falcon LLM – a 40B model. |
+| [Falcon LLM](https://falconllm.tii.ae/) | Sep 2023 | 7, 40, 180 | [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b), [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) | Falcon LLM is a foundational large language model (LLM) with 180 billion parameters trained on 3.5 trillion tokens. TII has now released Falcon LLM – a 180B model. |
 | [PaLM 2](https://arxiv.org/abs/2305.10403) | May 2023 | - | - | A Language Model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. |
 | [Med-PaLM 2](https://arxiv.org/abs/2305.09617v1) | May 2023 | - | - | Towards Expert-Level Medical Question Answering with Large Language Models |
 | [Gorilla](https://arxiv.org/abs/2305.15334v1) | May 2023 | 7 | [Gorilla](https://github.com/ShishirPatil/gorilla) | Gorilla: Large Language Model Connected with Massive APIs |
@@ -83,4 +83,4 @@ This section consists of a collection and summary of notable and foundational LL
 
 This section is under development.
 
-Data adopted from [Papers with Code](https://paperswithcode.com/methods/category/language-models) and the recent work by [Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf).
\ No newline at end of file
+Data adopted from [Papers with Code](https://paperswithcode.com/methods/category/language-models) and the recent work by [Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf).
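
For reference, a minimal sketch of loading one of the checkpoints linked in the updated table with the Hugging Face `transformers` library. The model IDs come from the table above; the dtype/device settings and the prompt are illustrative assumptions, not something this patch specifies.

```python
# Load a Falcon checkpoint from the table via Hugging Face transformers.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # smallest listed checkpoint; swap in
                               # "tiiuae/falcon-40b" or "tiiuae/falcon-180B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # pick a dtype appropriate to the hardware (assumption)
    device_map="auto",       # shard across available devices (needs accelerate)
    trust_remote_code=True,  # Falcon repos originally shipped custom modeling code
)

# Hypothetical prompt, just to exercise generation end to end.
inputs = tokenizer("Falcon LLM is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 40B and 180B checkpoints are far too large for a single consumer GPU; `device_map="auto"` only helps when enough aggregate memory is available, so the 7B checkpoint is the practical choice for local experimentation.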