Abdulrhman37 committed on
Commit 41db25d • 1 Parent(s): 5a36899

Update README.md

Files changed (1)
  1. README.md +66 -15
README.md CHANGED
@@ -1,22 +1,73 @@
  ---
- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
- tags:
- - text-generation-inference
- - transformers
- - unsloth
- - llama
- - trl
- license: apache-2.0
- language:
- - en
  ---

- # Uploaded model

- - **Developed by:** Abdulrhman37
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit

  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

+ ---
+ base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - llama
+ - trl
+ license: apache-2.0
+ language:
+ - en
+ - ar
+ datasets:
+ - Abdulrhman37/metallurgy-qa
+ pipeline_tag: text-generation
+ ---
+
+ # Fine-Tuned Llama Model for Metallurgy and Materials Science
+
+ - **Developed by:** Abdulrhman37
+ - **License:** [Apache-2.0](https://opensource.org/licenses/Apache-2.0)
+ - **Base Model:** [unsloth/meta-llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-bnb-4bit)
+
+ This fine-tuned Llama model specializes in **metallurgy, materials science, and engineering**. It is tuned to give precise, detailed responses to technical queries, making it useful for professionals, researchers, and enthusiasts in the field.
+
  ---
+
+ ## 🛠️ Training Details
+
+ This model was fine-tuned with:
+ - **[Unsloth](https://github.com/unslothai/unsloth):** enabled roughly 2x faster training through memory- and parameter-efficient optimization.
+ - **[Hugging Face TRL](https://github.com/huggingface/trl):** provided the supervised fine-tuning and training utilities.
+
+ Fine-tuning focused on strengthening domain-specific knowledge using a dataset curated from metallurgical research and practical case studies; an illustrative sketch of this setup is shown below.
+
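+ The snippet below is a minimal, illustrative sketch of an Unsloth + TRL supervised fine-tuning run, following the pattern of the standard Unsloth notebooks; the LoRA settings, hyperparameters, and the `text` column name are assumptions, not the exact configuration used for this model.
+
+ ```python
+ # Illustrative Unsloth + TRL SFT sketch; hyperparameters and dataset field names are assumptions.
+ from unsloth import FastLanguageModel
+ from datasets import load_dataset
+ from trl import SFTTrainer
+ from transformers import TrainingArguments
+
+ # Load the 4-bit base model and attach LoRA adapters through Unsloth.
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     "unsloth/meta-llama-3.1-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True
+ )
+ model = FastLanguageModel.get_peft_model(
+     model, r=16, lora_alpha=16,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
+ )
+
+ # Domain dataset; assumes the prompts/answers are already formatted into a "text" column.
+ dataset = load_dataset("Abdulrhman37/metallurgy-qa", split="train")
+
+ trainer = SFTTrainer(
+     model=model,
+     tokenizer=tokenizer,
+     train_dataset=dataset,
+     dataset_text_field="text",
+     max_seq_length=2048,
+     args=TrainingArguments(
+         per_device_train_batch_size=2,
+         gradient_accumulation_steps=4,
+         num_train_epochs=1,
+         learning_rate=2e-4,
+         output_dir="outputs",
+     ),
+ )
+ trainer.train()
+ ```
+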
  ---

+ ## 🔑 Features
+ - Supports **text generation** with scientific and technical insights.
+ - Provides **domain-specific reasoning** grounded in key metallurgical principles and mechanisms.
+ - Built for fast inference with **bnb-4bit quantization** for optimized performance; a 4-bit loading sketch follows this list.
+
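+ As a hedged illustration of that quantized setup, the sketch below loads the checkpoint in 4-bit with `BitsAndBytesConfig`; it assumes the repository hosts a merged, `transformers`-loadable model and that `bitsandbytes` is installed.
+
+ ```python
+ # Illustrative 4-bit loading sketch; assumes bitsandbytes and a CUDA GPU are available.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
+ model = AutoModelForCausalLM.from_pretrained(
+     "Abdulrhman37/metallurgy-llama",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ ```
+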
+ ---
+
+ ## 🌟 Example Use Cases
+ - **Material property analysis** (e.g., "How does adding rare earth elements affect magnesium alloys?").
+ - **Failure mechanism exploration** (e.g., "What causes porosity in gas metal arc welding?").
+ - **Corrosion prevention methods** (e.g., "How does cathodic protection work in marine environments?").
+
+ ---
+
+ ## 📦 How to Use
+
+ You can load the model using the `transformers` library:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load the tokenizer and model; device_map="auto" places the weights on a GPU when one is available.
+ tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
+ model = AutoModelForCausalLM.from_pretrained("Abdulrhman37/metallurgy-llama", device_map="auto")
+
+ # Example query
+ prompt = "Explain the role of manganese in Mg-Al-Mn systems."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=150)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ print(response)
+ ```
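+
+ Alternatively, the same query can be run through the `transformers` `pipeline` API; this is a minimal sketch under the same assumption that the repository contains a transformers-loadable checkpoint.
+
+ ```python
+ from transformers import pipeline
+
+ # Text-generation pipeline; device_map="auto" uses a GPU when available.
+ generator = pipeline("text-generation", model="Abdulrhman37/metallurgy-llama", device_map="auto")
+ result = generator("What causes porosity in gas metal arc welding?", max_new_tokens=150)
+ print(result[0]["generated_text"])
+ ```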

  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)