---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tinyllama-tarot-v1
  results: []
pipeline_tag: text-generation
widget:
- text: >-
    Give me a one paragraph tarot reading if I pull the cards Nine of Cups, King
    of Pentacles and Three of Cups.
datasets:
- barissglc/tarot
---


# tinyllama-tarot-v1

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the [barissglc/tarot](https://huggingface.co/datasets/barissglc/tarot) dataset.

## Model description

This is a language model that generates tarot readings. It was trained to answer questions about topics such as love, career, and general life, using a drawn set of tarot cards as the basis of each reading: given the selected cards, the model produces a corresponding prediction. The full list of cards is available in the [barissglc/tarot](https://huggingface.co/datasets/barissglc/tarot) dataset.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP

### Training results



### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2