diff --git a/pages/models/llama.tr.mdx b/pages/models/llama.tr.mdx
index 11624b0..ae1199a 100644
--- a/pages/models/llama.tr.mdx
+++ b/pages/models/llama.tr.mdx
@@ -34,10 +34,10 @@ Genel olarak LLaMA-13B, 10 kat daha küçük olmasına ve tek bir GPU çalışt
 
 ## Referanslar
 
-- [Koala: A Dialogue Model for Academic Research](https://bair.berkeley.edu/blog/2023/04/03/koala/) (April 2023)
-- [Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data](https://arxiv.org/abs/2304.01196) (April 2023)
-- [Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality](https://vicuna.lmsys.org/) (March 2023)
-- [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://arxiv.org/abs/2303.16199) (March 2023)
-- [GPT4All](https://github.com/nomic-ai/gpt4all) (March 2023)
-- [ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge](https://arxiv.org/abs/2303.14070) (March 2023)
-- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) (March 2023)
\ No newline at end of file
+- [Koala: A Dialogue Model for Academic Research](https://bair.berkeley.edu/blog/2023/04/03/koala/) (Nisan 2023)
+- [Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data](https://arxiv.org/abs/2304.01196) (Nisan 2023)
+- [Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality](https://vicuna.lmsys.org/) (Mart 2023)
+- [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://arxiv.org/abs/2303.16199) (Mart 2023)
+- [GPT4All](https://github.com/nomic-ai/gpt4all) (Mart 2023)
+- [ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge](https://arxiv.org/abs/2303.14070) (Mart 2023)
+- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) (Mart 2023)