---
license: gpl-3.0
datasets:
- CohereForAI/aya_dataset
language:
- pl
tags:
- lobotomy
---

**Polish-Lobotomy: An awful Polish fine-tune**
============================================================


**Model Description**
---------------

This model is the first attempt at a Polish fine-tune of Phi-3. It is very bad, most likely because of the fine-tuning method (teaching a model a new language probably requires a full fine-tune rather than a parameter-efficient one) and the small dataset.
- Ollama: [https://ollama.com/duckyblender/polish-lobotomy](https://ollama.com/duckyblender/polish-lobotomy)

**Training Details**
-----------------

* Trained on a single RTX 4060 for approximately 1 hour
* Used 8-bit QLoRA for memory-efficient training
* Despite the short training period, the model somehow managed to learn something (but not very well)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317acd6212fce5a3cd793f6/KnxTL_Ww3aYUrJz8kZ5Sz.jpeg)

**Dataset**
------------

The model was trained on the Polish subset of the AYA dataset, which can be found at [https://huggingface.co/datasets/CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset).
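Subsetting by language comes down to filtering on the dataset's language column. A minimal sketch, assuming the column names used by the AYA dataset (`inputs`, `language`) — the two sample rows below are made up for illustration:

```python
# Sketch: select the Polish rows from AYA-style data.
# With the real dataset you would run something like:
#   from datasets import load_dataset
#   ds = load_dataset("CohereForAI/aya_dataset", split="train")
#   polish = ds.filter(is_polish)

def is_polish(example: dict) -> bool:
    """Keep only rows labeled as Polish."""
    return example["language"] == "Polish"

# Made-up sample rows mimicking the dataset schema.
sample = [
    {"inputs": "Jak się masz?", "language": "Polish"},
    {"inputs": "How are you?", "language": "English"},
]

polish = [row for row in sample if is_polish(row)]
print(len(polish))  # → 1
```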

**Prompt Template**
-----------------

The prompt template used for this model is identical to the standard Phi-3 template.
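For reference, a single-turn prompt in the usual Phi-3 instruct format can be assembled like this (a sketch; the `build_prompt` helper is hypothetical, and the special tokens are those documented for Phi-3 instruct models):

```python
def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Phi-3 instruct template:
    a <|user|> turn terminated by <|end|>, then the <|assistant|> cue."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"

print(build_prompt("Jaka jest stolica Polski?"))
```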

**Disclaimer**
--------------

**Please be advised that this model's output may contain nonsensical responses. Viewer discretion is strongly advised (but not really necessary).**

Use this model at your own risk, and please engage with the output responsibly (but let's be real, it's not like it's going to be useful for anything).