zpn and meg (HF staff) committed
Commit 23bb76d
1 parent: 5ba998b

Small correction on number of epochs. (#3)


- Small correction on number of epochs. (3cd745afa7d914fb3a24dca6274ae336d6889cfa)


Co-authored-by: Margaret Mitchell <meg@users.noreply.huggingface.co>

Files changed (1)
README.md +2 -2
README.md CHANGED
@@ -8,10 +8,10 @@ language:
 
 # gpt4all-lora-epoch-3
 
-This is an intermediate (epoch 3 / 4) checkpoint from `nomic-ai/gpt4all-lora`.
+This is an intermediate (epoch 3 / 4) checkpoint from [`nomic-ai/gpt4all-lora`](https://huggingface.co/nomic-ai/gpt4all-lora).
 
 An autoregressive transformer trained on [data](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) curated using [Atlas](https://atlas.nomic.ai/).
-This model is trained with four full epochs of training, while the related [gpt4all-lora-epoch-2 model](https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2) is trained with three.
+This model is trained with three epochs of training, while the related [gpt4all-lora model](https://huggingface.co/nomic-ai/gpt4all-lora) is trained with four.
 Replication instructions and data: [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
 
 ## Model Details