---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- Roblox
- Luau
license: llama3.1
language:
- en
pipeline_tag: text-generation
---

![BY_PINKSTACK.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2xMulpuSlZ3C1vpGgsAYi.png)

⚠️ The model sometimes fails to wrap Roblox-related code in a fenced code block (three backticks). To work around this, include an instruction in your prompt such as: "When creating Roblox code, format it like this: ```lua (code) ```"
⚠️ The model has been found to hallucinate frequently; this will be addressed in the V2 version. Please report any issues in the community tab.
# 🤖 Which quant is right for you? 

- ***Q4:*** Best for edge devices such as phones or older laptops thanks to its compact size; quality is okay but fully usable.
- ***Q5:*** A good fit for most mid-range devices (e.g. an RTX 2070 Super); good quality and fast responses.
- ***Q8:*** A good fit for most modern high-end devices (e.g. an RTX 3080); responses are very high quality, but it is slower than Q5.
- ***F16:*** Intended for testing and evaluating the model; a server GPU is needed to run it quickly.
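
Once you have picked a quant, one way to fetch it is with `huggingface_hub`. The sketch below is only a minimal example; the `repo_id` and `filename` are placeholders, so check this repository's "Files and versions" tab for the exact GGUF filenames.

```python
# Minimal sketch: download one GGUF quant of this model.
# NOTE: repo_id and filename are placeholders, not confirmed names --
# replace them with this repository's ID and the exact quant filename.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Pinkstack/<this-repo>",   # placeholder: this model's repository ID
    filename="<model>-Q5_K_M.gguf",    # placeholder: e.g. the Q5 quant
)
print(gguf_path)  # local path to the downloaded GGUF file
```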

## Things you should be aware of when using PGAM models (Pinkstack General Accuracy Models) 🤖

This PGAM is based on Meta Llama 3.1 8B, which we fine-tuned on additional Roblox Luau data so that its outputs are similar to those of the Roblox AI documentation assistant. We trained it using [this](mahiatlinux/luau_corpus-ShareGPT-for-EDM) dataset, which is based on Roblox/luau_corpus.


To use this model, you must use a runtime or service that supports the GGUF file format.
Additionally, it uses the llama-3.1 chat template.

Using a system prompt is highly recommended.
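
As a minimal sketch (assuming llama-cpp-python as the GGUF runtime; the model filename is a placeholder), you can load the model with the Llama 3 chat format and a system prompt that also applies the code-fencing workaround from the warning above:

```python
# Minimal sketch, assuming llama-cpp-python as the GGUF runtime.
from llama_cpp import Llama

llm = Llama(
    model_path="<model>-Q5_K_M.gguf",  # placeholder: path to the quant you downloaded
    n_ctx=4096,
    chat_format="llama-3",             # llama-3.1 style chat template
)

messages = [
    # System prompt: also tells the model to fence Roblox code (see the warning above).
    {"role": "system",
     "content": "You are a Roblox Luau coding assistant. "
                "When creating Roblox code, format it like this: ```lua (code) ```"},
    {"role": "user",
     "content": "Write a Luau script that prints a message when a Part is touched."},
]

out = llm.create_chat_completion(messages=messages, max_tokens=512, temperature=0.4)
print(out["choices"][0]["message"]["content"])
```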

# Extra information
- **Developed by:** Pinkstack
- **License:** Llama 3.1 Community License
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This model was trained using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

This model is not affiliated in any way with the Roblox company.

Used this model? Don't forget to leave a like :) 💖


[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)