---
license: mit
language:
- en
base_model:
- meta-llama/Llama-3.1-70B-Instruct
tags:
- axolotl
---
# 🎭 Cakrawala-70B

> *Where Worlds Converge and Adventures Begin!*

## 🌟 What's Special About This Model?

Cakrawala-70B is a fine-tuned variant of Llama-3.1-70B-Instruct, optimised for generating rich roleplaying conversations and character interactions. The model has been trained to produce detailed, contextually appropriate character dialogue with vivid descriptions of physical actions, expressions, and emotional states, while maintaining consistent character voices and perspectives throughout extended interactions.

## 🧪 The Secret Sauce

### Training Diet:
- Fed with 5,867 conversation pairs
- Each conversation is a minimum of 12-13 turns long
- Heavy focus on details such as facial expressions, environmental descriptions, and character reactions, with a strong emphasis on **keeping the model in character.**

### Tech Wizardry:
- Trained on the mighty Llama-3.1-70B-Instruct
- Fine-tuned using QLoRA 
- Trained over 3 epochs

## Training Parameters
- Gradient Accumulation Steps: 16
- Micro Batch Size: 4
- Learning Rate: 0.0003
- Optimizer: AdamW
- Scheduler: Cosine
- Mixed Precision: BF16 & FP16 with TF32 support
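Since the card's tags indicate the run was done with Axolotl, the parameters above would map onto an Axolotl config roughly like the sketch below. This is a reconstruction, not the actual training config: only values stated in this card are filled in, and details it doesn't mention (LoRA rank/alpha, sequence length, dataset paths, warmup, etc.) are deliberately left out rather than guessed.

```yaml
# Hypothetical Axolotl config fragment reflecting the parameters listed above
base_model: meta-llama/Llama-3.1-70B-Instruct
load_in_4bit: true        # QLoRA fine-tuning
adapter: qlora

num_epochs: 3
gradient_accumulation_steps: 16
micro_batch_size: 4
learning_rate: 0.0003
optimizer: adamw_torch
lr_scheduler: cosine

bf16: true
tf32: true
```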

## 🔧 Under the Hood
- Trained on 8 x H100 NVL GPUs
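Putting the hardware and the training parameters together, the effective global batch size can be back-of-the-envelope checked as micro batch × gradient accumulation × GPU count, assuming standard data parallelism across the 8 GPUs:

```python
# Effective global batch size implied by the parameters above,
# assuming plain data parallelism across all 8 H100s.
micro_batch_size = 4
gradient_accumulation_steps = 16
num_gpus = 8

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 512
```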

## 🎬 License & Credits

- Licensed under MIT 
- Based on meta-llama/Llama-3.1-70B-Instruct 

---

*Built with ❤️ for roleplayers, by roleplayers*