---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>
### Description
GGML format model files for [GOAT-7B-Community](https://huggingface.co/GOAT-AI/GOAT-7B-Community).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# Placeholders: point these at the GGML file you downloaded from this repo.
output_dir = "."
ggml_file = "GOAT-7B-Community.ggmlv3.q4_0.bin"

# gpu_layers = number of layers offloaded to the GPU; use 0 for CPU-only.
llm = AutoModelForCausalLM.from_pretrained(
    output_dir, model_file=ggml_file, model_type="llama", gpu_layers=32
)

manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7))
```
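If you prefer to see tokens as they arrive rather than waiting for the full completion, `ctransformers` also supports streaming. A minimal sketch, reusing the `llm` object and the placeholder paths from above:

```python
# Streaming sketch: stream=True makes the call yield text pieces as they are
# generated instead of returning one string (reuses `llm` from above).
for chunk in llm(manual_input, max_new_tokens=256, stream=True):
    print(chunk, end="", flush=True)
print()
```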
# Original model card
## Model description
- **Base Architecture:** LLaMA 2 7B flavour
- **Dataset size:** 72K multi-turn dialogues
- **License:** llama2
- **Context window length:** 4096 tokens
### Learn more
- **Blog:** https://www.blog.goat.ai/goat-7b-community-tops-among-7b-models/
- **Paper:** Coming soon
- **Demo:** https://3f3fb57083197123c8.gradio.live/
## Uses
The main purpose of GOAT-7B-Community is to facilitate research on large language models and chatbots. It is specifically designed for researchers and hobbyists working in the fields of natural language processing, machine learning, and artificial intelligence.
## Usage
The model can either be self-hosted via `transformers` or used through Hugging Face Spaces:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "GOAT-AI/GOAT-7B-Community"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# bfloat16 halves the memory footprint relative to float32
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
)
```
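Loading gives you the tokenizer and model but not a completion. A minimal generation sketch follows; the prompt and sampling settings are illustrative, not from the original card:

```python
# Generation sketch (prompt and sampling values are illustrative assumptions).
inputs = tokenizer("Tell me about your last dream, please.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,
    top_p=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```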
## Training dataset
The training dataset was collected from user conversations with the GoatChat app and from OpenAssistant. We will not release the dataset.
## Evaluation
The GOAT-7B-Community model is evaluated against common language-model benchmarks, including MMLU and BIG-Bench Hard (BBH). We continue to evaluate all our models and will share more details soon; a sketch for reproducing scores of this kind follows the numbers below.
- **MMLU:** 49.31
- **BBH:** 35.7
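A hedged reproduction sketch using EleutherAI's lm-evaluation-harness Python API; the task names and `dtype` argument are assumptions on our part, and different harness versions and prompt formats can yield scores that differ from those above:

```python
# Sketch: score the model on MMLU / BBH with lm-evaluation-harness
# (pip install lm-eval). Task names and dtype are assumptions; results
# may not match the numbers reported in this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=GOAT-AI/GOAT-7B-Community,dtype=bfloat16",
    tasks=["mmlu", "bbh"],
)
print(results["results"])
```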
## License
The GOAT-7B-Community model is based on [Meta's LLaMA-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) and fine-tuned on our own datasets.
The GOAT-7B-Community model weights are available under the LLAMA-2 license. Note that access to the GOAT-7B-Community model weights requires access to the LLaMA-2 model weights. The GOAT-7B-Community model is based on LLaMA-2 and should be used according to the LLaMA-2 license.
### Risks and Biases
The GOAT-7B-Community model can produce factually incorrect output and should not be relied on to deliver factually accurate information. The model was trained on various private and public datasets; it may therefore generate wrong, biased, or otherwise offensive outputs.