Model Card for Fine-Tuned Mistral 7B

Model Details

Model Name: Fine-Tuned Mistral 7B (Security Exploit Code Generation)
Developed by: Yamini
Funded by: Self-funded
Shared by: Yamini
Model Type: Causal Language Model (Decoder-only)
Language(s): English (Code-focused)
License: Apache 2.0
Finetuned from model: Mistral 7B

Uses

Direct Use

This model is fine-tuned for generating skin care recommendations.

How to Get Started with the Model

Use the code below to load and interact with the model:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Yaminii/finetuned-mistral")
tokenizer = AutoTokenizer.from_pretrained("Yaminii/finetuned-mistral")

# Tokenize a prompt and generate a completion
input_text = "<insert vulnerable code snippet>"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_length=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
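On a GPU, loading the weights in half precision roughly halves memory use. The following is a minimal sketch, assuming PyTorch with CUDA and the accelerate package are installed (device_map="auto" requires accelerate):

import torch
from transformers import AutoModelForCausalLM

# Optional: load in float16 across available GPUs (device_map="auto" needs accelerate)
model = AutoModelForCausalLM.from_pretrained(
    "Yaminii/finetuned-mistral",
    torch_dtype=torch.float16,
    device_map="auto",
)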

Training Details

Training Data

The model was fine-tuned on the Skincare dataset from Kaggle.

Training Procedure

Preprocessing:

  • Tokenization with Mistral’s tokenizer (see the sketch after this list)
  • Dataset cleaning and augmentation
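The exact dataset schema and prompt format are not documented here; the following is a minimal preprocessing sketch, assuming the base Mistral tokenizer and hypothetical "prompt"/"response" columns:

from transformers import AutoTokenizer

# Assumed base checkpoint; Mistral's tokenizer has no pad token by default
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_example(example):
    # "prompt" and "response" are hypothetical column names used for illustration
    text = example["prompt"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=1024)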

Training Hyperparameters (a configuration sketch follows this list):

  • Base Model: Mistral 7B
  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Optimizer: AdamW
  • Learning rate: 2e-5
  • Batch size: 8
  • Gradient accumulation steps: 4
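The full training script is not published; the sketch below wires the hyperparameters listed above into the peft and transformers Trainer APIs. LoRA rank, alpha, target modules, and epoch count are assumptions, since the card does not state them:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# r, lora_alpha, and target_modules are illustrative; only the optimizer, learning
# rate, batch size, and gradient accumulation values come from this card.
lora_config = LoraConfig(r=16, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="./fine-tuned-mistral",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    optim="adamw_torch",          # AdamW
    num_train_epochs=3,           # epoch count not stated in the card; placeholder
)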

Speeds, Sizes, Times

  • Training Time: ~12 hours on 4 A100 GPUs
  • Final Checkpoint: ./fine-tuned-mistral-merged (a merge sketch follows this list)
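The checkpoint name suggests the LoRA adapter was merged back into the base weights before release; a minimal sketch using peft's merge_and_unload, where the adapter path is an assumption:

from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "./fine-tuned-mistral")  # assumed adapter path
merged = model.merge_and_unload()            # fold LoRA deltas into the base weights
merged.save_pretrained("./fine-tuned-mistral-merged")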

Environmental Impact

Carbon emissions can be estimated with the Machine Learning Impact calculator (Lacoste et al., 2019); a rough worked example follows the list below:

  • Hardware Type: 4x A100 GPUs
  • Hours Used: ~12 hours
  • Cloud Provider: AWS
  • Compute Region: US-East
  • Carbon Emitted: [More Information Needed]
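The calculator's estimate is essentially GPU power draw × hours × regional carbon intensity. The numbers below are illustrative assumptions, not measurements, so the figure above is left unfilled:

# Illustrative only: none of these constants are measured values from this run.
num_gpus = 4
hours = 12
gpu_power_kw = 0.4        # assumed ~400 W per A100
carbon_intensity = 0.38   # assumed kg CO2eq per kWh for the region

energy_kwh = num_gpus * hours * gpu_power_kw
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.1f} kWh, ~{emissions_kg:.1f} kg CO2eq")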

Technical Specifications

Model Architecture and Objective

  • Transformer-based autoregressive model (decoder-only)
  • Supports token-level and sentence-level embeddings (see the sketch after this list)
  • Fine-tuned for security attack code generation
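There is no dedicated embedding API; the following is a minimal sketch of pulling token-level hidden states and mean-pooling them into a sentence-level embedding, assuming the model and tokenizer loaded in the quick-start snippet:

import torch

inputs = tokenizer("example input", return_tensors="pt")
with torch.no_grad():
    # Last-layer hidden states: one vector per input token
    hidden = model(**inputs, output_hidden_states=True).hidden_states[-1]
token_embeddings = hidden[0]                 # token-level embeddings (seq_len, dim)
sentence_embedding = hidden.mean(dim=1)[0]   # sentence-level embedding via mean pooling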

Compute Infrastructure

Hardware: 4x A100 GPUs
Software: PyTorch, Hugging Face Transformers, LoRA fine-tuning

Citation

BibTeX:

@misc{yamini2025finetunedmistral,
  author       = {Yamini},
  title        = {Fine-Tuned Mistral 7B for Skincare Recommendations},
  year         = {2025},
  howpublished = {Hugging Face model repository},
  url          = {https://huggingface.co/Yaminii/finetuned-mistral}
}

Model Card Authors

Yamini

Model Card Contact

For questions or concerns, contact Yamini via Hugging Face or GitHub.
