---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- ar
datasets:
- Abdulrhman37/metallurgy-qa
pipeline_tag: text-generation
---
# Fine-Tuned Llama Model for Metallurgy and Materials Science
- **Developed by:** Abdulrhman37
- **License:** Apache-2.0
- **Base model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This fine-tuned Llama model specializes in metallurgy, materials science, and engineering. It has been enhanced to provide precise and detailed responses to technical queries, making it a valuable tool for professionals, researchers, and enthusiasts in the field.
## 🛠️ Training Details
This model was fine-tuned with:
- **Unsloth**: enabled roughly 2x faster training through optimized kernels and memory-efficient 4-bit LoRA fine-tuning.
- **Hugging Face TRL**: provided the supervised fine-tuning (SFT) training loop.
Fine-tuning focused on enhancing domain-specific knowledge using the [Abdulrhman37/metallurgy-qa](https://huggingface.co/datasets/Abdulrhman37/metallurgy-qa) dataset, curated from metallurgical research and practical case studies.
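For reference, a minimal sketch of what this kind of Unsloth + TRL setup looks like (the hyperparameters below are illustrative defaults, not the exact values used for this model):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model and tokenizer through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Abdulrhman37/metallurgy-qa", split="train")

# TRL's SFTTrainer handles tokenization, packing, and the training loop.
# (On newer TRL versions, dataset_text_field/max_seq_length move into SFTConfig.)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a pre-formatted "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```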
## 🚀 Features
- Generates scientific and technical text with detailed answers to domain questions.
- Provides domain-specific reasoning with references to key metallurgical principles and mechanisms.
- Runs on a bnb-4bit quantized base model for fast, memory-efficient inference (see the loading sketch below).
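If you want explicit control over the 4-bit loading, here is a sketch using `BitsAndBytesConfig` (the compute dtype and quantization type below are common bitsandbytes defaults, not documented requirements of this model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit weights with half-precision compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
model = AutoModelForCausalLM.from_pretrained(
    "Abdulrhman37/metallurgy-llama",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```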
## 🔍 Example Use Cases
- Material property analysis (e.g., "How does adding rare earth elements affect magnesium alloys?").
- Failure mechanism exploration (e.g., "What causes porosity in gas metal arc welding?").
- Corrosion prevention methods (e.g., "How does cathodic protection work in marine environments?").
## 📦 How to Use
You can load the model using the `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Abdulrhman37/metallurgy-llama")
model = AutoModelForCausalLM.from_pretrained(
    "Abdulrhman37/metallurgy-llama",
    device_map="auto",  # put the model on GPU if one is available
)

# Example query
prompt = "Explain the role of manganese in Mg-Al-Mn systems."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
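To trade determinism for more varied phrasing, you can enable sampling in `generate` (the values below are generic starting points, not tuned for this model):

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # lower = more focused, higher = more varied
    top_p=0.9,         # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```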
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)