carecodeconnect committed on
Commit
b5d308d
1 Parent(s): 8ba9592

Create README.md

Files changed (1): README.md (+55, -0)
README.md ADDED
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-guided-meditations
  results: []
---

# mistral-finetuned-guided-meditations

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) trained specifically for generating guided meditations. The fine-tuning was conducted on the "jhana-guided-meditations-collection" dataset available on Hugging Face, using the QLoRA approach.

## Model description

The model uses the LlamaTokenizer and a GPTQ-quantized base model for efficient loading and execution. It is intended to generate mindful meditation scripts with contextually relevant content, and this version has been optimized for better performance and lower resource utilization during inference.
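
The snippet below is a minimal inference sketch, not a tested recipe: it assumes the adapter is published under a repository id matching the model name (e.g. `carecodeconnect/mistral-finetuned-guided-meditations`), that the GPTQ loading dependencies (`optimum`/`auto-gptq`) are installed, and that the standard Mistral `[INST] ... [/INST]` prompt format is used.

```python
# Minimal usage sketch; the adapter repo id below is an assumption based on the
# model name, so adjust the repository ids to match your setup.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
adapter_id = "carecodeconnect/mistral-finetuned-guided-meditations"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires auto-gptq/optimum
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned LoRA adapter
model.eval()

# Mistral-Instruct prompt format
prompt = "[INST] Write a short guided meditation on mindful breathing. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```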

## Intended uses & limitations

This model is intended for generating text related to guided meditations. It may not perform well on unrelated tasks or general-purpose language understanding due to its specialized training.

## Training and evaluation data

The model was trained on the "jhana-guided-meditations-collection" dataset, which consists of various guided meditation scripts. The data was preprocessed and tokenized using the LlamaTokenizer.
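
As a rough illustration of that preprocessing step, the sketch below loads a dataset and tokenizes it with the LlamaTokenizer. The dataset repository id and the `text` column name are placeholders, since this card does not document the exact namespace or schema.

```python
# Illustrative preprocessing sketch; dataset repo id and column name are
# placeholders, not values confirmed by this model card.
from datasets import load_dataset
from transformers import LlamaTokenizer

dataset = load_dataset("jhana-guided-meditations-collection", split="train")  # placeholder repo id
tokenizer = LlamaTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")
tokenizer.pad_token = tokenizer.eos_token  # the Mistral tokenizer has no pad token by default

def tokenize(batch):
    # "text" is an assumed column name holding the meditation scripts
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
```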

## Training procedure

### Training hyperparameters

- Learning Rate: 0.0002
- Batch Size: 8 for training, 8 for evaluation
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- Scheduler: Cosine learning rate scheduler
- Training Steps: 250
- Mixed Precision Training: Native AMP (see the configuration sketch below)
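
The sketch below shows how the hyperparameters above might map onto a TRL `SFTTrainer` with a PEFT LoRA configuration. The LoRA settings, sequence length, dataset repository id, and text column are illustrative assumptions rather than values documented here, and training on a GPTQ base additionally requires the GPTQ kernels to be configured for training (omitted for brevity).

```python
# Sketch only: maps the listed hyperparameters onto a TRL/PEFT setup.
# LoRA settings, dataset repo id, column name, and sequence length are
# illustrative assumptions, not values documented in this card.
from datasets import load_dataset
from peft import LoraConfig, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = prepare_model_for_kbit_training(model)  # make the quantized base trainable with adapters

dataset = load_dataset("jhana-guided-meditations-collection", split="train")  # placeholder repo id

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,      # assumed LoRA hyperparameters
    target_modules=["q_proj", "v_proj"],         # assumed target modules
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="mistral-finetuned-guided-meditations",
    per_device_train_batch_size=8,               # batch size 8 for training
    per_device_eval_batch_size=8,                # batch size 8 for evaluation
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
    max_steps=250,
    fp16=True,                                   # native AMP mixed precision
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",                   # assumed column holding the scripts
    max_seq_length=1024,                         # assumed sequence length
    peft_config=peft_config,
    tokenizer=tokenizer,
)
trainer.train()
```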

### Training results

Training resulted in a model capable of generating coherent and contextually relevant meditation scripts, improving upon the base model's capabilities in this specific domain.

### Framework versions

- PEFT: 0.10.0
- Transformers: 4.40.0.dev0
- PyTorch: 2.2.2+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2

## Axolotl Fine-Tuning Details

The model was fine-tuned with the Axolotl toolkit, with particular emphasis on low-resource environments. Key aspects of the process include QLoRA for efficient adaptation to the guided-meditation domain, mixed precision training for better performance, and custom tokenization suited to the structure of meditation scripts. The overall workflow prioritizes resource efficiency while keeping the model effective at generating serene, contextually appropriate meditation guides.