# Mistral Fine-tuned for Technical Documentation
This model is a fine-tuned version of Mistral-7B optimized for generating technical documentation.
## Model Details
- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuning: LoRA adaptation
- Training data: Technical documentation from PyTorch, TensorFlow, scikit-learn, and pandas
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("goaiguru/mistral-technical-docs")
tokenizer = AutoTokenizer.from_pretrained("goaiguru/mistral-technical-docs")

# Generate a short documentation snippet from a prompt
inputs = tokenizer("Explain the purpose of torch.no_grad().", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- LoRA rank: 16
- LoRA alpha: 32
- Training epochs: 3
- Batch size: 4
- Learning rate: 2e-4
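The hyperparameters above imply a LoRA scaling factor of alpha / rank = 32 / 16 = 2.0 applied to the low-rank update. A minimal NumPy sketch of how such an adapter modifies a frozen weight (illustrative only; the toy dimensions are assumptions, and the real adapters are applied via a library such as peft):

```python
import numpy as np

rank, alpha = 16, 32     # values from the training details above
scaling = alpha / rank   # LoRA scales the low-rank update by alpha / rank = 2.0

d_in, d_out = 64, 64     # toy dimensions, not the model's actual sizes
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Base output plus the scaled low-rank update: (W + scaling * B @ A) @ x
    return W @ x + scaling * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is initially a no-op on the base model:
assert np.allclose(lora_forward(x), W @ x)
```

During fine-tuning only A and B are updated, which is why a rank-16 adapter trains a small fraction of the parameters of the full 7B base model.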