From ce4dc2e789c5a74c56473928f416aa5f2725ce62 Mon Sep 17 00:00:00 2001
From: CharlesCNorton <135471798+CharlesCNorton@users.noreply.github.com>
Date: Tue, 9 Jul 2024 11:19:04 -0400
Subject: [PATCH] typo in training log documentation (#2452)

Corrected a typo in the training log documentation by changing "seemded"
to "seemed". This enhances the readability and professionalism of the
document.

Signed-off-by: CharlesCNorton <135471798+CharlesCNorton@users.noreply.github.com>
---
 gpt4all-training/TRAINING_LOG.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gpt4all-training/TRAINING_LOG.md b/gpt4all-training/TRAINING_LOG.md
index f86838c2..2433175c 100644
--- a/gpt4all-training/TRAINING_LOG.md
+++ b/gpt4all-training/TRAINING_LOG.md
@@ -247,7 +247,7 @@ We trained multiple [GPT-J models](https://huggingface.co/EleutherAI/gpt-j-6b) w
 
 We release the checkpoint after epoch 1.
 
-Using Atlas, we extracted the embeddings of each point in the dataset and calculated the loss per sequence. We then uploaded [this to Atlas](https://atlas.nomic.ai/map/gpt4all-j-post-epoch-1-embeddings) and noticed that the higher loss items seem to cluster. On further inspection, the highest density clusters seemded to be of prompt/response pairs that asked for creative-like generations such as `Generate a story about ...` ![](figs/clustering_overfit.png)
+Using Atlas, we extracted the embeddings of each point in the dataset and calculated the loss per sequence. We then uploaded [this to Atlas](https://atlas.nomic.ai/map/gpt4all-j-post-epoch-1-embeddings) and noticed that the higher loss items seem to cluster. On further inspection, the highest density clusters seemed to be of prompt/response pairs that asked for creative-like generations such as `Generate a story about ...` ![](figs/clustering_overfit.png)
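
The sentence being corrected outlines a concrete workflow: score each training sequence with the model's loss, embed the sequences, and upload both to Atlas so high-loss clusters can be inspected visually. Below is a minimal sketch of that workflow, assuming a Hugging Face causal LM for scoring and the nomic client's `atlas.map_data` helper for upload; the example texts and placeholder embeddings are illustrative, not the actual gpt4all-training code.

```python
# Hedged sketch of the per-sequence-loss + Atlas workflow described in the
# diff above. Assumptions: a Hugging Face causal LM supplies the loss, and
# nomic's atlas.map_data handles the upload; this is not the repo's code.
import numpy as np
import torch
from nomic import atlas
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/gpt-j-6b"  # the model family named in the training log
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
model.eval()

def sequence_loss(text: str) -> float:
    """Mean token-level cross-entropy for one prompt/response pair."""
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**batch, labels=batch["input_ids"])
    return out.loss.item()

pairs = [
    "Generate a story about a lonely lighthouse keeper.",
    "What is the boiling point of water at sea level?",
]
losses = [sequence_loss(p) for p in pairs]

# Any sentence embedder works for the map; random vectors stand in here
# purely so the sketch runs end to end.
embeddings = np.random.default_rng(0).random((len(pairs), 256))

# Upload embeddings with the loss attached as metadata, so high-loss points
# can be colored and inspected for clustering in the Atlas UI.
atlas.map_data(
    embeddings=embeddings,
    data=[{"text": t, "loss": l} for t, l in zip(pairs, losses)],
)
```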