---
language:
- fr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- summarizer
- lora
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---

# Uploaded LoRA model

- **Developed by:** Labagaite
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

# Training Logs

## Summary metrics

### Best ROUGE-1 score: **0.9842446709916588**
### Best ROUGE-2 score: **0.9842154131847726**
### Best ROUGE-L score: **0.9842446709916588**

## Wandb logs

You can view the training logs [on Weights & Biases](https://wandb.ai/william-derue/LLM-summarizer_trainer/runs/s9xqw6o8).

## Training details

### Training data

- Dataset: [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset)
- Data size: 7.65 MB
  - train: 1.97k rows
  - validation: 440 rows
- Roles: user, assistant
- Format: chatml (`{"role": "role", "content": "content", "user": "user", "assistant": "assistant"}`)
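The chat-format rows described above can be illustrated with a minimal sketch. The exact field layout of `fr-summarizer-dataset` is an assumption based on the chatml convention (a list of `{"role", "content"}` messages); the example content is illustrative, not taken from the dataset.

```python
# Hypothetical training example in chat format (field names assumed
# from the chatml convention, content is illustrative only).
example = {
    "messages": [
        {"role": "user", "content": "Résume la transcription suivante : ..."},
        {"role": "assistant", "content": "Résumé : ..."},
    ]
}

def to_chatml(messages):
    """Render a message list in ChatML text form."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

print(to_chatml(example["messages"]))
```

Each user/validation pair alternates a `user` transcript chunk with an `assistant` summary, which is what the trainer consumes.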
*French audio podcast transcription*

# Project details

Fine-tuned on French audio podcast transcription data for the summarization task. As a result, the model is able to summarize French audio podcast transcriptions. The model will be used in an AI application, [Report Maker](https://github.com/WillIsback/Report_Maker), a powerful tool designed to automate the process of transcribing and summarizing meetings. It leverages state-of-the-art machine learning models to produce detailed and accurate reports.

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
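At inference time the adapter expects a Mistral-instruct style prompt. The sketch below shows one way to build such a summarization prompt; the instruction wording and the helper name are assumptions for illustration, not the exact prompts used during fine-tuning.

```python
def build_summary_prompt(transcript: str) -> str:
    """Wrap a French transcript in the Mistral instruct template.

    The instruction text is an illustrative assumption; adapt it to
    match the prompt format used during fine-tuning.
    """
    instruction = f"Résume la transcription suivante :\n{transcript}"
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_summary_prompt("Bonjour et bienvenue dans ce podcast...")
print(prompt)
```

The resulting string can then be tokenized and passed to the model loaded with Unsloth's `FastLanguageModel.from_pretrained` (or `transformers` + `peft`), with the LoRA adapter applied on top of the 4-bit base model.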