---
base_model: unsloth/llama-3-8b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# flashcardsGPT-Llama3-8B-v0.1
- This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R" which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo includes the default format of the model as well as the LoRA adapters of the model (a loading sketch follows this list). There is a separate repo called [valeriojob/flashcardsGPT-Llama3-8B-v0.1-GGUF](https://huggingface.co/valeriojob/flashcardsGPT-Llama3-8B-v0.1-GGUF) that includes the quantized versions of this model in GGUF format.
- This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
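
For reference, a minimal loading sketch in Python is shown below. The repo id is taken from this card; the `device_map`/`torch_dtype` settings and the option of applying the LoRA adapters via PEFT instead are assumptions, not the author's exact setup.

```python
# Minimal loading sketch (assumption: this repo hosts weights loadable with
# AutoModelForCausalLM; the LoRA adapters could alternatively be applied to
# the unsloth/llama-3-8b base via the peft library).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "valeriojob/flashcardsGPT-Llama3-8B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on the available GPU(s)
    torch_dtype="auto",  # pick bf16/fp16 automatically when supported
)
```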
## Model description
This model takes the OCR-extracted text from a university lecture slide as input. It then generates high-quality flashcards and returns them as a JSON object.
It uses the following Prompt Engineering template:
"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided—only respond with the JSON object.
Here is the OCR-extracted text:
""""
## Intended uses & limitations
The fine-tuned model can be used to generate high-quality flashcards for lectures from the "Time Series Analysis with R" (TSAR) module of the BSc Business-IT programme offered by the FHNW university.
## Training and evaluation data
The dataset (train and test splits) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1)
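
To inspect the data, it can be loaded with the `datasets` library; the split name used below is an assumption based on the "train and test" note above.

```python
from datasets import load_dataset

# Load the flashcards dataset from the Hugging Face Hub.
dataset = load_dataset("valeriojob/FHNW-Flashcards-Data-v0.1")

print(dataset)              # available splits and column names
print(dataset["train"][0])  # inspect a single training example (split name is an assumption)
```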
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- per_device_train_batch_size = 2
- gradient_accumulation_steps = 4
- warmup_steps = 5
- max_steps = 55 (increase this to make the model learn "better")
- num_train_epochs = 4
- learning_rate = 2e-4
- fp16 = not torch.cuda.is_bf16_supported()
- bf16 = torch.cuda.is_bf16_supported()
- logging_steps = 1
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- seed = 3407
- output_dir = "outputs"
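
For context, these values map onto `TrainingArguments` as used in the standard Unsloth/TRL SFT notebooks; the sketch below is an approximation of that setup (the `SFTTrainer` wiring and `max_seq_length` are assumptions), not the author's exact training script.

```python
import torch
from transformers import TrainingArguments

# Reproduction of the hyperparameters listed above; note that max_steps
# takes precedence over num_train_epochs when both are set.
training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=55,
    num_train_epochs=4,
    learning_rate=2e-4,
    fp16=not torch.cuda.is_bf16_supported(),
    bf16=torch.cuda.is_bf16_supported(),
    logging_steps=1,
    optim="adamw_8bit",        # 8-bit AdamW via bitsandbytes
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
)

# Hypothetical trainer wiring (max_seq_length is an assumption):
# from trl import SFTTrainer
# trainer = SFTTrainer(model=model, tokenizer=tokenizer,
#                      train_dataset=dataset["train"],
#                      args=training_args, max_seq_length=2048)
# trainer.train()
```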
### Training results
| Training Loss | Step |
|:-------------:|:----:|
| 0.995000 | 1 |
| 0.775000 | 2 |
| 0.787500 | 3 |
| 0.712200 | 5 |
| 0.803800 | 10 |
| 0.624000 | 15 |
| 0.594800 | 20 |
| 0.383200 | 30 |
| 0.269200 | 40 |
| 0.234400 | 55 |
## License
- **License:** apache-2.0