---

base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- ar
datasets:
- Abdulrhman37/metallurgy-qa
pipeline_tag: text-generation
---


# Fine-Tuned Llama Model for Metallurgy and Materials Science

- **Developed by:** Abdulrhman37  
- **License:** [Apache-2.0](https://opensource.org/licenses/Apache-2.0)  
- **Base Model:** [unsloth/meta-llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-bnb-4bit)  

This fine-tuned Llama model specializes in **metallurgy, materials science, and engineering**. It has been enhanced to provide precise and detailed responses to technical queries, making it a valuable tool for professionals, researchers, and enthusiasts in the field.

---

## 🛠️ Training Details

This model was fine-tuned with:
- **[Unsloth](https://github.com/unslothai/unsloth):** Enabled 2x faster training using efficient parameter optimization.  
- **[Hugging Face TRL](https://github.com/huggingface/trl):** Used for supervised fine-tuning via its trainer utilities.

Fine-tuning focused on enhancing domain-specific knowledge using a dataset curated from various metallurgical research and practical case studies.
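As an illustration, supervised fine-tuning on QA data typically starts by flattening each question–answer pair into a single prompt string before handing it to the trainer. The sketch below shows one such formatting function; the field names (`question`, `answer`) and the prompt template are assumptions for illustration, not the exact schema used for this model.

```python
# Hypothetical prompt formatting for SFT-style fine-tuning.
# The field names ("question", "answer") and the template below are
# assumptions, not the exact schema of the metallurgy-qa dataset.

PROMPT_TEMPLATE = (
    "### Instruction:\n{question}\n\n"
    "### Response:\n{answer}"
)

def format_example(example: dict) -> str:
    """Flatten one QA pair into a single training string."""
    return PROMPT_TEMPLATE.format(
        question=example["question"].strip(),
        answer=example["answer"].strip(),
    )

sample = {
    "question": "Why does grain refinement increase yield strength?",
    "answer": "Smaller grains add boundaries that impede dislocation motion (Hall-Petch).",
}
print(format_example(sample))
```

A function like this is what TRL-style trainers commonly accept as a `formatting_func` to turn raw dataset rows into training text.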

---

## 🔑 Features
- Supports **text generation** with scientific and technical insights.
- Provides **domain-specific reasoning** with references to key metallurgical principles and mechanisms.
- Built for fast inference with **bnb-4bit quantization** for optimized performance.
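For a sense of why 4-bit quantization matters for an 8B-parameter model, the back-of-envelope arithmetic below compares weight memory at fp16 versus 4-bit. The numbers are rough: they ignore the KV cache, activations, and quantization metadata overhead.

```python
# Rough weight-memory estimate for an 8B-parameter model.
# Ignores KV cache, activations, and quantization overhead.

params = 8_000_000_000

fp16_gb = params * 2 / 1e9    # 2 bytes per parameter
int4_gb = params * 0.5 / 1e9  # 4 bits = 0.5 bytes per parameter

print(f"fp16 weights: ~{fp16_gb:.0f} GB")   # ~16 GB
print(f"4-bit weights: ~{int4_gb:.0f} GB")  # ~4 GB
```

This roughly 4x reduction in weight memory is what lets an 8B model fit on a single consumer GPU.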

---

## 🌟 Example Use Cases
- **Material property analysis** (e.g., "How does adding rare earth elements affect magnesium alloys?").  
- **Failure mechanism exploration** (e.g., "What causes porosity in gas metal arc welding?").  
- **Corrosion prevention methods** (e.g., "How does cathodic protection work in marine environments?").

---

## 📦 How to Use

You can load the model using the `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
model = AutoModelForCausalLM.from_pretrained(
    "Abdulrhman37/metallurgy-llama",
    device_map="auto",  # place the model on GPU when one is available
)

# Example query
prompt = "Explain the role of manganese in Mg-Al-Mn systems."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
```

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)