---
base_model: unsloth/llama-3-8b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# flashcardsGPT-Llama3-8B-v0.1

- This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R" which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo includes the default format of the model as well as the LoRA adapters of the model. There is a separate repo called [valeriojob/flashcardsGPT-Llama3-8B-v0.1-GGUF](https://huggingface.co/valeriojob/flashcardsGPT-Llama3-8B-v0.1-GGUF) that includes the quantized versions of this model in GGUF format.
- This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
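
For completeness (this snippet is not part of the original card): the LoRA adapters can be attached to the base model with PEFT instead of downloading the merged weights. The repo id and base model come from this card; everything else, including the assumption that the adapter files sit at the repo root, is illustrative.

```python
# Hedged loading sketch: attach the LoRA adapters from this repo to the
# unsloth/llama-3-8b base model via PEFT. Assumes `transformers`, `peft` and
# `accelerate` are installed, a CUDA GPU is available, and the adapter files
# sit at the repo root (adjust the path otherwise).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b",
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # place the 8B weights on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")
model = PeftModel.from_pretrained(base, "valeriojob/flashcardsGPT-Llama3-8B-v0.1")
```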

## Model description

This model takes the OCR-extracted text from a university lecture slide as input. It then generates high-quality flashcards and returns them as a JSON object.
It uses the following prompt template:

"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided—only respond with the JSON object.

Here is the OCR-extracted text:
""""

## Intended uses & limitations

The fine-tuned model can be used to generate high-quality flashcards from lectures of the "Time Series Analysis with R" (TSAR) module in the BSc Business-IT programme offered by the FHNW university.

## Training and evaluation data

The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-TSAR](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-TSAR)
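
For a quick look at the data, a minimal sketch with the `datasets` library (split and field names are assumptions and may differ in the actual repo):

```python
# Hedged sketch for browsing the fine-tuning data; split/field names are assumptions.
from datasets import load_dataset

data = load_dataset("valeriojob/FHNW-Flashcards-Data-TSAR")
print(data)               # lists the available splits (e.g. train/test) and their sizes
print(data["train"][0])   # one record: OCR slide text paired with the target flashcards
```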

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- per_device_train_batch_size: 2
- gradient_accumulation_steps: 4
- warmup_steps: 5
- max_steps: 55 (increasing this lets the model train longer and typically improves the fit)
- num_train_epochs: 4
- learning_rate: 2e-4
- fp16: not torch.cuda.is_bf16_supported()
- bf16: torch.cuda.is_bf16_supported()
- logging_steps: 1
- optim: "adamw_8bit"
- weight_decay: 0.01
- lr_scheduler_type: "linear"
- seed: 3407
- output_dir: "outputs"
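
For reference, the values above correspond to a standard Unsloth/TRL SFT setup. The sketch below shows how they would plug into `TrainingArguments` and `SFTTrainer`; only the `TrainingArguments` values are taken from this card, while the LoRA configuration, `max_seq_length`, 4-bit loading and the assumed `text` column of the dataset are illustrative.

```python
# Hedged reconstruction of the training setup; not the exact original script.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048  # assumption; not stated in this card

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b",
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # assumption
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank/alpha/targets are assumptions, not from this card
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    random_state=3407,
)

# Assumes the dataset exposes a ready-made "text" column combining prompt and flashcards.
dataset = load_dataset("valeriojob/FHNW-Flashcards-Data-TSAR", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=55,
        num_train_epochs=4,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```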

### Training results

| Training Loss | Step |
|:-------------:|:----:|
| 0.995000      | 1    |
| 0.775000      | 2    |
| 0.787500      | 3    |
| 0.712200      | 5    |
| 0.803800      | 10   |
| 0.624000      | 15   |
| 0.594800      | 20   |
| 0.383200      | 30   |
| 0.269200      | 40   |
| 0.234400      | 55   |

## Licenses
- **License:** apache-2.0