anamikac2708 committed
Commit c26830a
1 Parent(s): bbf1e3d
Update README.md
README.md CHANGED
## How to Get Started with the Model

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

You can run inference with the adapters directly via the PEFT/Unsloth libraries, or merge the adapters into the base model and use the merged model.
Please find an example below using Unsloth:

```python
import torch
from unsloth import FastLanguageModel
from transformers import AutoTokenizer, pipeline

model_id = 'FinLang/investopedia_chat_model'
max_seq_length = 2048

# Load the base model with the LoRA adapters applied on top
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "anamikac2708/Llama3-8b-finetuned-investopedia-Lora-Adapters",  # the model you used for training
    max_seq_length = max_seq_length,
    dtype = torch.bfloat16,
    load_in_4bit = False,  # set to True to load with bitsandbytes 4-bit quantization
)

# Reload the chat tokenizer from the FinLang repo and build a generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
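The snippet above builds the pipeline but stops before generating anything. A minimal usage sketch, assuming the tokenizer ships a chat template; the question and sampling settings are illustrative:

```python
# Query the fine-tuned model; assumes the tokenizer defines a chat template.
messages = [{"role": "user", "content": "What is dollar-cost averaging?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```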
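The merge route mentioned above is not shown here; a minimal sketch using PEFT's `merge_and_unload`, where the base-model id `meta-llama/Meta-Llama-3-8B-Instruct` and the output directory `merged-investopedia-llama3` are assumptions for illustration:

```python
# Sketch: fold the LoRA adapters into an assumed base model and save a
# standalone checkpoint ("merged-investopedia-llama3" is a hypothetical path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base model for these adapters
    torch_dtype=torch.bfloat16,
)
merged = PeftModel.from_pretrained(
    base, "anamikac2708/Llama3-8b-finetuned-investopedia-Lora-Adapters"
).merge_and_unload()  # returns the base model with adapter weights merged in

merged.save_pretrained("merged-investopedia-llama3")
AutoTokenizer.from_pretrained("FinLang/investopedia_chat_model").save_pretrained(
    "merged-investopedia-llama3"
)
```

The merged checkpoint can then be loaded with plain `transformers` (no PEFT or Unsloth required) via `AutoModelForCausalLM.from_pretrained("merged-investopedia-llama3")`.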