---
library_name: transformers
tags:
- code-generation
- C-programming
- LoRA
---

# Model Card for Hend-10/MistralActia

This model is a fine-tuned version of Mistral-7B adapted for generating C programming code. It uses LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning, enabling it to produce accurate and contextually relevant C code snippets.
## Model Details

### Model Description

This model fine-tunes the Mistral-7B architecture with LoRA to enhance its capability to generate C programming code. Fine-tuning was performed on a dataset of C programming tasks so the model can better generate code and assist with code-related queries.

- Developed by: Hend Amri
- Model type: Language model (causal LM)
- Finetuned from model: Mistral-7B
### Model Sources

- Repository: https://huggingface.co/Hend-10/MistralActia
## Uses

### Direct Use

This model can be used directly for generating code snippets, examples, and explanations in C programming. It is useful for developers, educators, and researchers who need assistance with C code generation.
### Downstream Use

The model can be integrated into applications and tools that assist with coding tasks, such as IDE plugins, coding assistants, and educational platforms for programming; a sketch of such an integration follows.
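As a hedged illustration (not part of this repository), the helper below shows one way such a tool might wrap the model; the function name and defaults are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative helper for embedding the model in a coding tool.
# The function name and defaults are assumptions, not part of this repo.
def generate_c_code(prompt: str, model, tokenizer, max_new_tokens: int = 200) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Return only the completion, stripping the echoed prompt tokens.
    completion = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)

tokenizer = AutoTokenizer.from_pretrained("Hend-10/MistralActia")
model = AutoModelForCausalLM.from_pretrained("Hend-10/MistralActia")
print(generate_c_code("Write a C function that swaps two integers.", model, tokenizer))
```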
### Out-of-Scope Use

The model is not suitable for generating code in languages other than C, or for tasks requiring domain-specific knowledge beyond general programming.
## Bias, Risks, and Limitations

### Bias

The model may exhibit biases present in the training data, such as preferences for certain coding styles or conventions. Users should review generated code for accuracy and appropriateness.

### Limitations

The model may struggle with highly specialized or non-standard C programming tasks. It is also limited by the quality and diversity of the training data.

### Risks

The model might generate code that is syntactically correct but functionally incorrect or insecure. Always validate and test the generated code thoroughly.

### Recommendations

Users should verify and test all code generated by the model. It is recommended to use the model as an aid rather than as the sole source of coding solutions.
## How to Get Started with the Model

Use the following code snippet to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Hend-10/MistralActia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Write a C program that defines an enum with bit flags and shows how to use bitwise operators to manipulate these flags."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate text, capping output at the prompt length plus 200 new tokens
outputs = model.generate(
    **inputs,
    max_length=len(inputs["input_ids"][0]) + 200,  # limit total output length
    eos_token_id=tokenizer.eos_token_id,           # stop at the end-of-sequence token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
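Note that the checkpoint reported under Speeds, Sizes, Times is only 16.7 MB, which suggests the repository may hold LoRA adapter weights rather than a full model. If the snippet above does not load standalone weights, a common pattern is to attach the adapter to its base model with PEFT; the base-model ID below is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the repo hosts a LoRA adapter for a Mistral-7B base model.
base_id = "mistralai/Mistral-7B-v0.1"  # assumed base; check the adapter config
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Hend-10/MistralActia")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```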
## Training Details

### Training Data

The training data consists of a dataset formatted as JSON Lines (JSONL). It includes diverse C programming prompts and responses used to fine-tune the model for generating C code. The dataset is derived from various sources related to programming tasks and examples.
### Training Procedure

#### Preprocessing

Data preprocessing involved the following steps (a minimal sketch follows the list):

- Loading the data from an Excel file and removing duplicates.
- Converting the cleaned data into JSONL format suitable for the `datasets` library.
- Formatting each entry to fit the input-output structure required for training.
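A minimal sketch of that pipeline, assuming a hypothetical `data.xlsx` with `prompt` and `response` columns:

```python
import json
import pandas as pd
from datasets import load_dataset

# Hypothetical file and column names; adjust to the actual dataset layout.
df = pd.read_excel("data.xlsx").drop_duplicates()

# Write one JSON object per line (JSONL) in an input-output shape for training.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for _, row in df.iterrows():
        record = {"prompt": row["prompt"], "response": row["response"]}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# The resulting file loads directly with the `datasets` library.
dataset = load_dataset("json", data_files="train.jsonl", split="train")
```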
#### Training Hyperparameters
- Training regime: fp16 mixed precision (for efficiency and faster training)
- Batch Size: 2 per device
- Number of Epochs: 7
- Learning Rate: 2e-4
- Evaluation Strategy: Evaluate every 500 steps
- Save Strategy: Save every 500 steps
- Logging Steps: Log every 500 steps
- Gradient Accumulation Steps: 8
- Optimizer: paged_adamw_32bit
- Weight Decay: 0.001
- Gradient Clipping Threshold: 0.3
- Warmup Ratio: 0.03
- LR Scheduler Type: Constant
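A hedged reconstruction of these settings as `transformers.TrainingArguments`; the original training script may differ, and the output directory is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistralactia-lora",  # placeholder path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=7,
    learning_rate=2e-4,
    weight_decay=0.001,
    max_grad_norm=0.3,               # gradient clipping threshold
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",       # requires bitsandbytes
    fp16=True,                       # fp16 mixed precision
    eval_strategy="steps",           # "evaluation_strategy" on older transformers versions
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    logging_steps=500,
)
```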
#### Speeds, Sizes, Times

- Training time: approximately 54 minutes 36 seconds (3,276.27 seconds).
- Checkpoint size: 16.7 MB
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated using a held-out test set derived from the same source as the training data. This set was not seen by the model during training and is used to assess performance on unseen prompts.
#### Factors

Evaluation factors include:

- Prompt complexity
- Response relevance
- Code correctness

#### Metrics

- Perplexity: measures how well the model predicts the next token (lower is better).
- Accuracy: checks whether the generated code is syntactically correct and fulfills the prompt requirements.
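As a hedged illustration of how these metrics could be computed (not the exact evaluation code used here), the sketch below derives perplexity from the model's cross-entropy loss and syntax-checks generated C with `gcc -fsyntax-only`, which assumes a local gcc installation:

```python
import math
import subprocess
import tempfile
import torch

def perplexity(model, tokenizer, text: str) -> float:
    # Perplexity = exp(mean cross-entropy loss over the tokens of `text`).
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def is_valid_c(source: str) -> bool:
    # Syntax-check generated C with gcc; assumes gcc is on PATH.
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(["gcc", "-fsyntax-only", path], capture_output=True)
    return result.returncode == 0
```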
#### Results

The model demonstrates strong performance in generating coherent and contextually appropriate C code snippets. However, its effectiveness can vary with the complexity and specificity of the prompts.

#### Summary

The model performs well on a range of C programming tasks, though further fine-tuning and evaluation on more diverse datasets could enhance its robustness and applicability.
## Model Examination

The model's interpretability is an ongoing area of research. Examination of model outputs includes analyzing the consistency and relevance of generated code snippets relative to the provided prompts.
## Technical Specifications

### Model Architecture and Objective

The model is based on the Mistral-7B architecture, a causal language model for generating and completing text. It has been fine-tuned to generate C programming code, focusing on syntactically and semantically accurate outputs.

### Compute Infrastructure

The model was trained on Google Colab GPU instances.

#### Hardware

- Type: GPU
- Models: NVIDIA L4 and NVIDIA A100 (via Google Colab)
#### Software

- Frameworks: Transformers, PyTorch
- Libraries: Datasets, Accelerate, PEFT, TRL
## Citation

BibTeX:

```bibtex
@misc{hend2024mistralactia,
  author = {Amri, Hend},
  title  = {MistralActia: A Fine-Tuned Mistral Model for C Code Generation},
  year   = {2024},
  url    = {https://huggingface.co/Hend-10/MistralActia}
}
```

APA:

Amri, H. (2024). *MistralActia: A fine-tuned Mistral model for C code generation*. Hugging Face. https://huggingface.co/Hend-10/MistralActia
## Glossary

- Perplexity: a measure of how well a probabilistic model predicts a sample; lower values indicate better predictions.
- FP16 mixed precision: a training regime that performs most operations in 16-bit floating point to speed up training and reduce memory usage, while keeping selected computations in 32-bit for numerical stability.
## More Information

For more details about the model and to explore my other projects, visit my GitHub profile.

## Model Card Authors

- Amri Hend

## Model Card Contact

For further inquiries, please contact hend.amri@esprit.tn or reach out via the Hugging Face Model Hub.