simecek committed on
Commit 1884c58
1 Parent(s): 6a2dd93

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -6,9 +6,13 @@ language:
 - cs
 ---
 
-This is a Mistral7B model fine-tuned with QLoRA on Czech Wikipedia data. The model is primarily designed for further fine-tuning for Czech-specific NLP tasks, including summarization and question answering. This adaptation allows for better performance in tasks that require an understanding of the Czech language and context.
+This is a [Mistral7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned with 4-bit QLoRA on Czech Wikipedia data. The model is primarily designed for further fine-tuning for Czech-specific NLP tasks, including summarization and question answering. This adaptation allows for better performance in tasks that require an understanding of the Czech language and context.
 
-Example of usage:
+For the exact QLoRA parameters, see Axolotl's [YAML file](cswiki-mistral7.yml).
+
+[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+**Example of usage:**
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
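
The README's usage example continues past the end of this hunk; only the opening `import` line falls inside the diff context above. As a point of reference, here is a minimal sketch of loading and prompting a 4-bit QLoRA-tuned Mistral7B checkpoint with `transformers`. The repository id, prompt, and generation settings are illustrative assumptions and are not taken from this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical repository id -- replace with this model's actual Hub id.
model_id = "simecek/cswiki-mistral7"

# Load the tokenizer and the causal LM in 4-bit, mirroring the QLoRA setup
# described in the README; this keeps inference memory low on a single GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Generate a short Czech continuation as a smoke test
# ("Praha je hlavni mesto" = "Prague is the capital city").
prompt = "Praha je hlavní město"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```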