
## Quantization Description

This repo contains GGUF quantized versions of Refuel AI's Llama-3-Refueled. The model is supplied in several quantizations so you can choose the one that works best on the hardware you intend to run it on.

The repo contains quantizations of the following types (a short loading example follows the list):

- Q2_K
- Q3_K_XS
- Q3_K_S
- Q3_K
- Q4_0
- Q4_1
- Q4_K_S
- Q4_K
- Q4_K_M
- Q5_0
- Q5_1
- Q5_K_S
- Q5_K
- Q5_K_M
- Q6_K
- Q8_0
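
To try one of these quantizations, the sketch below uses llama-cpp-python to download and load a file straight from this repo. The `*Q4_K_M.gguf` filename glob is an assumption about how the files are named; check the repo's file list and adjust it to the quantization you want.

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download one quantization from the Hub and load it.
# NOTE: the "*Q4_K_M.gguf" glob is an assumed filename pattern;
# replace it with the exact file name from this repo.
llm = Llama.from_pretrained(
    repo_id="thesven/Llama-3-Refueled-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,       # context window; lower it to save memory
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
```

As a rule of thumb, the smaller quantizations (Q2/Q3) reduce memory use at some cost in output quality, while Q8_0 stays closest to the original weights.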

## Model Details

RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.

Model developers - Refuel AI

Input - Text only.

Output - Text only.

Architecture - Llama-3-Refueled is built on top of Llama-3-8B-instruct which is an auto-regressive language model that uses an optimized transformer architecture.

Release Date - May 8, 2024.

License - CC BY-NC 4.0
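
Because the model is tuned for labeling-style tasks, a minimal prompting sketch is shown below, reusing the `llm` object loaded above. The chat API lets llama.cpp apply the model's built-in Llama-3 chat template; the prompt wording here is illustrative, not an official Refuel template.

```python
# Label a piece of text with a deterministic, single-word answer.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the sentiment of the following product review as "
                "Positive, Negative, or Neutral. Answer with one word.\n\n"
                "Review: The battery died after two days and support never replied."
            ),
        },
    ],
    max_tokens=8,
    temperature=0.0,  # greedy decoding suits labeling tasks
)
print(response["choices"][0]["message"]["content"])
```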


## Training Data

The model was trained on over 4 billion tokens spanning 2,750+ NLP tasks. Our training collection consists primarily of:
1. Human annotated datasets like Flan, Task Source, and the Aya collection 
2. Synthetic datasets like OpenOrca, OpenHermes and WizardLM
3. Proprietary datasets developed or licensed by Refuel AI

## Benchmarks 

In this section, we report the results for Refuel models on our benchmark of labeling tasks. For details on the methodology see [here](https://refuel.ai/blog-posts/announcing-refuel-llm-2).

<table>
<tr><th rowspan="2">Provider</th><th rowspan="2">Model</th><th colspan="5" style="text-align: center">LLM Output Quality (by task type)</th></tr>
<tr><td>Overall</td><td>Classification</td><td>Reading Comprehension</td><td>Structure Extraction</td><td>Entity Matching</td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2</td><td>83.82%</td><td>84.94%</td><td>76.03%</td><td>88.16%</td><td>92.00%</td></tr>
<tr><td>OpenAI</td><td>GPT-4-Turbo</td><td>80.88%</td><td>81.77%</td><td>72.08%</td><td>84.79%</td><td>97.20%</td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2-small (Llama-3-Refueled)</td><td>79.67%</td><td>81.72%</td><td>70.04%</td><td>84.28%</td><td>92.00%</td></tr>
<tr><td>Anthropic</td><td>Claude-3-Opus</td><td>79.19%</td><td>82.49%</td><td>67.30%</td><td>88.25%</td><td>94.96%</td></tr>
<tr><td>Meta</td><td>Llama3-70B-Instruct</td><td>78.20%</td><td>79.38%</td><td>66.03%</td><td>85.96%</td><td>94.13%</td></tr>
<tr><td>Google</td><td>Gemini-1.5-Pro</td><td>74.59%</td><td>73.52%</td><td>60.67%</td><td>84.27%</td><td>98.48%</td></tr>
<tr><td>Anthropic</td><td>Claude-3-Sonnet</td><td>70.99%</td><td>79.91%</td><td>45.44%</td><td>78.10%</td><td>96.34%</td></tr>
<tr><td>Anthropic</td><td>Claude-3-Haiku</td><td>69.23%</td><td>77.27%</td><td>50.19%</td><td>84.97%</td><td>54.08%</td></tr>
<tr><td>OpenAI</td><td>GPT-3.5-Turbo</td><td>68.13%</td><td>74.39%</td><td>53.21%</td><td>69.40%</td><td>80.41%</td></tr>
<tr><td>Mistral</td><td>Mixtral-8x7B-Instruct</td><td>62.87%</td><td>79.11%</td><td>45.56%</td><td>47.08%</td><td>86.52%</td></tr>
<tr><td>Meta</td><td>Llama3-8B-Instruct</td><td>62.30%</td><td>68.52%</td><td>49.16%</td><td>65.09%</td><td>63.61%</td></tr>
</table>


## Limitations

Llama-3-Refueled does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments that require moderated outputs.