ayoolaolafenwa committed 4c013d9 (parent: 6acd909): Update README.md

README.md CHANGED
```diff
@@ -4,6 +4,8 @@ license: apache-2.0
 language:
 - en
 pipeline_tag: conversational
+datasets:
+- ayoolaolafenwa/sft-data
 ---
 
 ## ChatLM
@@ -24,7 +26,7 @@ model_path = "ayoolaolafenwa/ChatLM"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 
 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code = True,
-torch_dtype=torch.bfloat16)
+torch_dtype=torch.bfloat16).to("cuda")
 
 prompt = "<user>: Give me a financial advise on investing in stocks. <chatbot>: "
 
@@ -132,5 +134,4 @@ Check the modified dataset https://huggingface.co/datasets/ayoolaolafenwa/sft-data
 
 ChatLM was supervised finetuned with pretrained [Falcon 1-Billion parameters model](https://huggingface.co/tiiuae/falcon-rw-1b) trained on 350-Billion tokens
 of RefinedWeb. It was trained with a single H100 GPU for 1 epoch. It achieves Perplexity *1.738*. Check the full code for supervised finetune
-training on its github repository https://github.com/ayoolaolafenwa/ChatLM/tree/main
-
+training on its github repository https://github.com/ayoolaolafenwa/ChatLM/tree/main
```
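The `<user>: … <chatbot>: ` prompt format shown in the README snippet can be exercised without loading the model. The helper below is an illustrative sketch only; the function names are hypothetical and not part of the ChatLM repository, and it assumes (as is typical for causal LMs) that the decoded generation starts with the prompt itself.

```python
def build_prompt(user_message: str) -> str:
    # ChatLM's README formats requests as "<user>: ... <chatbot>: "
    return f"<user>: {user_message} <chatbot>: "

def extract_reply(generated: str, prompt: str) -> str:
    # Causal LMs usually return the prompt followed by the completion;
    # strip the prompt so only the chatbot's reply remains.
    if generated.startswith(prompt):
        return generated[len(prompt):].strip()
    return generated.strip()

prompt = build_prompt("Give me financial advice on investing in stocks.")
reply = extract_reply(prompt + "Diversify your portfolio.", prompt)
# reply == "Diversify your portfolio."
```

In a real run, `generated` would come from `tokenizer.decode(model.generate(...)[0])` as in the README's snippet; the extraction step is what keeps only the chatbot's answer.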