Update README.md

README.md CHANGED
@@ -8,6 +8,17 @@ datasets:
 - stanford_alpaca
 pipeline_tag: text-generation
 ---
+
+<br><br>
+
+<p align="center">
+<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
+</p>
+
+<p align="center">
+<b>LLM Generation models trained by Jina AI, Finetuner team.</b>
+</p>
+
 This repo contains the lora weights (8bit) for Falcon-40b
 fit on the [Code Alpaca](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.