valeriojob committed
Commit b460682
1 Parent(s): 0f0d141

Update README.md
Files changed (1): README.md (+30, -8)
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- base_model: unsloth/qwen2-7b-bnb-4bit
  language:
  - en
  license: apache-2.0
@@ -7,16 +7,38 @@ tags:
  - text-generation-inference
  - transformers
  - unsloth
- - qwen2
  - gguf
  ---

- # Uploaded model

- - **Developed by:** valeriojob
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/qwen2-7b-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
base_model: unsloth/Qwen2-7B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# flashcardsGPT-Qwen2-7B-v0.1-GGUF

- This model is a fine-tuned version of [unsloth/Qwen2-7b](https://huggingface.co/unsloth/Qwen2-7b), trained on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) from real university lecture data.
- Version 0.1 of flashcardsGPT has been trained only on the module "Time Series Analysis with R" (TSAR), which is part of the BSc Business-IT programme offered by FHNW ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo contains the quantized models in GGUF format; a separate repo, [valeriojob/flashcardsGPT-Qwen2-7B-v0.1](https://huggingface.co/valeriojob/flashcardsGPT-Qwen2-7B-v0.1), holds the model in its default format as well as its LoRA adapters.
- The model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp); a minimal loading sketch follows below.
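
As a rough idea of how the GGUF files in this repo can be loaded locally, here is a minimal sketch using the third-party llama-cpp-python bindings. The quantization filename pattern is an assumption, so check the repo's file list for the variants that are actually available.

```python
# Minimal loading sketch (llama-cpp-python and huggingface_hub assumed installed).
# The Q4_K_M filename pattern is a guess - use whichever quantization this repo provides.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="valeriojob/flashcardsGPT-Qwen2-7B-v0.1-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matched against the repo's GGUF files
    n_ctx=4096,               # room for a full OCR'd lecture slide plus the JSON output
)
```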
 
## Model description

This model takes the OCR-extracted text from a university lecture slide as input, then generates high-quality flashcards and returns them as a JSON object.
It uses the following prompt template:

"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided - only respond with the JSON object.

Here is the OCR-extracted text:
"""
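
Putting the pieces together, here is a hedged end-to-end sketch: it fills the template above with a slide's OCR text, runs the GGUF model through llama-cpp-python and parses the returned JSON. The model path, sampling settings and OCR snippet are placeholders, and whether the model expects this raw completion format or a chat template should be verified against the main (non-GGUF) repo.

```python
# Illustrative sketch only: placeholder model path, sampling settings and OCR text.
import json
from llama_cpp import Llama

PROMPT_TEMPLATE = (
    "Your task is to process the below OCR-extracted text from university "
    "lecture slides and create a set of flashcards with the key information "
    "about the topic.\n"
    "Format the flashcards as a JSON object, with each card having a 'front' "
    "field for the question or term, and a 'back' field for the corresponding "
    "answer or definition, which may include a short example.\n"
    "Ensure the 'back' field contains no line breaks.\n"
    "No additional text or explanation should be provided - only respond with "
    "the JSON object.\n\n"
    "Here is the OCR-extracted text:\n{ocr_text}"
)

# Placeholder filename - point this at whichever quantization you downloaded.
llm = Llama(model_path="flashcardsGPT-Qwen2-7B-v0.1-Q4_K_M.gguf", n_ctx=4096)

ocr_text = "Stationarity: a time series is stationary if its mean and variance do not change over time ..."
result = llm(
    PROMPT_TEMPLATE.format(ocr_text=ocr_text),
    max_tokens=1024,
    temperature=0.2,
)
flashcards = json.loads(result["choices"][0]["text"])
print(flashcards)
```

The model card specifies 'front' and 'back' fields for each card but not the exact top-level shape of the JSON object, so inspect a few raw outputs before relying on a fixed schema.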

## Intended uses & limitations

The fine-tuned model can be used to generate high-quality flashcards from TSAR lectures in the BSc Business-IT (BIT) programme offered by FHNW; it has not been trained on material from other modules.

## Training and evaluation data

The dataset (train and test splits) used for fine-tuning this model is available at [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1).
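
For quick inspection, a small sketch for pulling the data with the datasets library; the split names are an assumption based on the "train and test" wording above.

```python
# Loads the flashcards dataset from the Hub; the "train" split name is assumed
# and may differ in the actual repo - print(ds) shows what is really there.
from datasets import load_dataset

ds = load_dataset("valeriojob/FHNW-Flashcards-Data-v0.1")
print(ds)              # available splits and columns
print(ds["train"][0])  # inspect one example
```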

## Licenses

- **License:** apache-2.0