---
tags:
- text-generation
- causal-lm
- transformers
- peft
library_name: transformers
model-index:
- name: Llama-3-8B Fine-tuned
  results: []
---

# Fine-Tuned Llama-3-8B Model

This model is a fine-tuned version of `NousResearch/Meta-Llama-3-8B`, trained with LoRA adapters (via PEFT) and 8-bit quantization.

## Usage
To load the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ubiodee/Test_Plutus"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
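Once loaded, text can be generated with `model.generate`. A minimal sketch is below; the prompt string, `max_new_tokens` value, and the `torch.float16`/`device_map="auto"` loading options are illustrative choices, not requirements of this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ubiodee/Test_Plutus"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # assumption: half precision fits on your device
    device_map="auto",          # lets accelerate place layers automatically
)

prompt = "Write a short greeting."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding, capped at 64 new tokens
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Adjust `max_new_tokens` and sampling parameters (e.g. `do_sample=True`, `temperature`) to suit your use case.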