---
license: gpl-3.0
datasets:
- nomic-ai/gpt4all_prompt_generations
language:
- en
---

# gpt4all-lora

An autoregressive transformer trained on [data](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) curated using [Atlas](https://atlas.nomic.ai/).
This model was trained for four full epochs, while the related [gpt4all-lora-epoch-3 model](https://huggingface.co/nomic-ai/gpt4all-lora-epoch-3) was trained for three.
Replication instructions and data: [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
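
Since gpt4all-lora is a LoRA adapter fine-tuned from LLaMA, one plausible way to use it is to load the base weights and apply the adapter with Hugging Face `transformers` and `peft`. This is a hedged sketch, not the project's official loading path: the local base-model path is a placeholder, and the exact setup should be taken from the replication instructions in the gpt4all repository.

```python
def load_gpt4all_lora(base_model_path: str,
                      adapter_id: str = "nomic-ai/gpt4all-lora"):
    """Return (tokenizer, model) with the gpt4all-lora adapter applied.

    `base_model_path` must point at converted LLaMA weights obtained
    separately; the adapter id above is the Hugging Face repo for this card.
    """
    # Heavy imports are kept inside the function so this module can be
    # imported without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_model_path)
    base = AutoModelForCausalLM.from_pretrained(base_model_path)
    # Wrap the base model with the LoRA adapter weights.
    model = PeftModel.from_pretrained(base, adapter_id)
    model.eval()
    return tokenizer, model


if __name__ == "__main__":
    # "path/to/llama-7b" is a placeholder for locally converted base weights.
    tokenizer, model = load_gpt4all_lora("path/to/llama-7b")
    inputs = tokenizer("What is the capital of France?", return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading the adapter separately like this keeps the (large) base weights reusable across fine-tunes; alternatively, the adapter can be merged into the base model for standalone inference.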

## Model Details
### Model Description

**Developed by:** [Nomic AI](https://home.nomic.ai)

**Model Type:** An auto-regressive language model based on the transformer architecture, fine-tuned for assistant-style interaction.

**Languages:** English

**License:** [GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)

**Finetuned from:** [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)

### Model Sources
**Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)

**Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)

**Technical Report:** [GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf)