Abdulrhman37 committed
Commit dc5b99b • Parent(s): 41db25d
Update README.md

README.md CHANGED
@@ -1,19 +1,19 @@
----
-base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
-tags:
-- text-generation-inference
-- transformers
-- unsloth
-- llama
-- trl
-license: apache-2.0
-language:
-- en
-- ar
-datasets:
-- Abdulrhman37/metallurgy-qa
-pipeline_tag: text2text-generation
----
+---
+base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
+tags:
+- text-generation-inference
+- transformers
+- unsloth
+- llama
+- trl
+license: apache-2.0
+language:
+- en
+- ar
+datasets:
+- Abdulrhman37/metallurgy-qa
+pipeline_tag: text2text-generation
+---
 
 # Fine-Tuned Llama Model for Metallurgy and Materials Science
 
@@ -51,22 +51,15 @@ Fine-tuning focused on enhancing domain-specific knowledge using a dataset curat
 
 ## 📦 How to Use
 
-```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
-model = AutoModelForCausalLM.from_pretrained("Abdulrhman37/metallurgy-llama")
-
-# Example Query
-prompt = "Explain the role of manganese in Mg-Al-Mn systems."
-inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
-outputs = model.generate(**inputs, max_new_tokens=150)
-response = tokenizer.decode(outputs[0], skip_special_tokens=True)
-```
+follow this [notebook](https://colab.research.google.com/drive/1pRNcAtybNF6w6mE1ZReFwfrIujZ5_t4S#scrollTo=wk4fCWOl0Ocd) for help to use the model
+
+## 📧 Contact
+For any inquiries, feedback, or collaboration opportunities, feel free to reach out:
+
+- Email: [abdodebo3@gmail.com](mailto:abdodebo3@gmail.com)
+- [LinkedIn](https://www.linkedin.com/in/abdulrahman-eldeeb-8b4621253/)
+- [GitHub](https://github.com/AdbulrhmanEldeeb)
+- Phone: +20 1026821545
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
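For readers who want a quick start without opening the Colab notebook, the snippet this commit removes can be kept as a minimal usage sketch. The model id and example prompt come from the diff above; wrapping it in a function, the lazy import, and the configurable `device` argument are additions for illustration, not part of the commit:

```python
def ask(prompt: str,
        model_id: str = "Abdulrhman37/metallurgy-llama",
        device: str = "cuda",
        max_new_tokens: int = 150) -> str:
    """Answer a single question with the fine-tuned metallurgy model."""
    # Imported lazily so the sketch can be read without loading the
    # heavyweight transformers dependency at module import time.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Example query taken from the removed snippet; needs a CUDA device
    # (pass device="cpu" to run without a GPU, at much lower speed).
    print(ask("Explain the role of manganese in Mg-Al-Mn systems."))
```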