Update README.md
README.md
CHANGED
@@ -6,20 +6,14 @@ library_name: transformers
 tags:
 - finetuned
 - hqq
-inference:
-  parameters:
-    temperature: 0.6
-widget:
-- messages:
-  - role: user
-    content: Co przedstawia polskie godło?
+inference: false
 ---
 
 <p align="center">
   <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png">
 </p>
 
-# Bielik-7B-Instruct-v0.1
+# Bielik-7B-Instruct-v0.1-3bit-HQQ
 
 The Bielik-7B-Instruct-v0.1 is an instruct fine-tuned version of [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1). The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment and, more precisely, the HPC center ACK Cyfronet AGH. The creation and training of Bielik-7B-Instruct-v0.1 were supported by computational grant number PLG/2024/016951 and carried out on the Helios supercomputer, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
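Since the front matter now sets `inference: false`, the hosted inference widget is disabled and the quantized checkpoint is meant to be loaded locally. Below is a minimal sketch of how a pre-quantized HQQ checkpoint like this one is commonly loaded; the repo id, the `hqq.engine.hf` API (exposed by older `hqq` releases; newer ones lean on the transformers integration), and the generation settings are assumptions rather than instructions taken from this commit.

```python
# Sketch: load the 3-bit HQQ checkpoint locally (assumed repo id and hqq API version).
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM  # present in older hqq releases

model_id = "speakleash/Bielik-7B-Instruct-v0.1-3bit-HQQ"  # assumed from the new title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)  # loads the pre-quantized weights

# Reuse the instruct model's chat template; temperature 0.6 mirrors the value
# removed from the front matter in this commit.
messages = [{"role": "user", "content": "Co przedstawia polskie godło?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```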