---
license: llama3
language:
- en
library_name: transformers
tags:
- code
- lizardcoder
- llama3
- llama
- merge
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6530994e70a88b63f007324d/_H6JOLV3eKFeUYiLHqHlI.png)
# Llama-3-LizardCoder-8B
This is a merge of six models fine-tuned on Llama 3 8B. For its parameter size, it performs reasonably well on coding tasks.

[gguf](https://huggingface.co/Walmart-the-bag/LizardCoder-Llama3-8B-GGUF)
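Below is a minimal usage sketch with 🤗 Transformers. The repo id is an assumption (guessed from the GGUF repo name above); swap in the actual safetensors repo if it differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, inferred from the GGUF repo name above.
model_id = "Walmart-the-bag/Llama-3-LizardCoder-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```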
## Limitations
- **Uncertain Accuracy:** As a merged model, the model's responses may not always be accurate. Users should independently verify any outputs before relying on them.
- **Potential for Censorship:** The model's censorship filters are not comprehensive. There is a possibility of encountering censored code/content.
- **Missing packages:** When asked to write code, the model may omit a required package or import. Explicitly asking it to include all imports in a well-constructed prompt usually helps; a future fine-tune is planned to fix this.
# Merge Config
This model was produced with the following merge YAML.
```yaml
models:
  - model: rombodawg/Llama-3-8B-Instruct-Coder
    parameters:
      weight: 1.0
  - model: ajibawa-2023/Code-Llama-3-8B
    parameters:
      weight: 0.3
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.5
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
    parameters:
      weight: 0.8
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
    parameters:
      weight: 0.9
  - model: McGill-NLP/Llama-3-8B-Web
    parameters:
      weight: 0.2

merge_method: linear
dtype: bfloat16
```
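For intuition, `merge_method: linear` computes a weighted average of the corresponding parameter tensors across the source models, with the weights above normalized to sum to 1 (mergekit's default, as far as I know). The sketch below shows that arithmetic only; it is not the actual mergekit implementation.

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Merge model state dicts by weighted-averaging each parameter tensor."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for stability, then cast back.
        acc = sum(w * sd[name].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.bfloat16)  # dtype: bfloat16, matching the config
    return merged

# Weights from the YAML above, in the same model order.
weights = [1.0, 0.3, 0.5, 0.8, 0.9, 0.2]
```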
## License
This model is released under the [Llama 3 license](https://llama.meta.com/llama3/license/).