|
--- |
|
language: |
|
- en |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
datasets: |
|
- yahma/alpaca-cleaned |
|
license: apache-2.0 |
|
--- |
|
|
|
# speechless-mistral-moloras-7b
|
|
|
> **Note:** This model is published for testing purposes only.
|
|
|
The router of mixture-of-multi-loras assembles LoRA modules automatically: it uses a gradient-free approach to obtain the mixing coefficients of the LoRA modules and requires only a handful of inference steps to adapt to unseen tasks.
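
The idea, in a minimal sketch (illustrative only, not the repository's actual code; the LoRA shapes, the random-search strategy, and `score_fn` are all assumptions made for this example): mix the per-module LoRA deltas with a coefficient vector, and find that vector by gradient-free search over a handful of forward passes.

```python
import torch

def lora_delta(x, A, B, scaling=1.0):
    # Standard LoRA update on activations x: scaling * x @ A^T @ B^T,
    # with A of shape (r, d_in) and B of shape (d_out, r).
    return scaling * (x @ A.T @ B.T)

def mix_loras(x, loras, coeffs):
    # Weighted sum of the per-module LoRA deltas for one layer.
    return sum(c * lora_delta(x, A, B) for c, (A, B) in zip(coeffs, loras))

@torch.no_grad()
def search_coefficients(x, loras, score_fn, n_trials=32):
    # Gradient-free search: sample coefficient vectors on the simplex and
    # keep the best-scoring one; only forward passes are needed.
    best_c, best_score = None, float("-inf")
    for _ in range(n_trials):
        c = torch.rand(len(loras))
        c /= c.sum()
        score = score_fn(mix_loras(x, loras, c))
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# Toy usage: two hypothetical modules, scored by (negative) output norm.
d_in, d_out, r = 16, 16, 4
loras = [(torch.randn(r, d_in), torch.randn(d_out, r)) for _ in range(2)]
x = torch.randn(1, d_in)
coeffs = search_coefficients(x, loras, score_fn=lambda y: -y.norm().item())
```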
|
|
|
In total, 6 LoRA modules are extracted from [speechless-mistral-7b-dare-0.85](https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85) and combined by the router; a loading sketch follows below.
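
A hedged sketch of how such an assembly could be reproduced with PEFT's multi-adapter API (the adapter paths and names below are placeholders, and uniform weights stand in for the router's coefficients):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "speechlessai/speechless-mistral-7b-dare-0.85"
)

# Placeholder adapter locations; the real modules live in the repo above.
adapter_ids = [f"path/to/lora_{i}" for i in range(6)]

# Attach the first adapter, then load the remaining ones by name.
model = PeftModel.from_pretrained(base, adapter_ids[0], adapter_name="lora_0")
for i, path in enumerate(adapter_ids[1:], start=1):
    model.load_adapter(path, adapter_name=f"lora_{i}")

# Linearly combine the adapters; the router would supply these coefficients.
model.add_weighted_adapter(
    adapters=[f"lora_{i}" for i in range(6)],
    weights=[1 / 6] * 6,
    adapter_name="mixed",
    combination_type="linear",  # requires all adapters to share the same rank
)
model.set_adapter("mixed")
```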
|
|
|
|
|
Code: [uukuguy/multi_loras — Mixture-of-Multi-LoRAs](https://github.com/uukuguy/multi_loras?tab=readme-ov-file#mixture-of-multi-loras)
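
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, inference should work with a plain pipeline; the repo id below is assumed from the model name:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="uukuguy/speechless-mistral-moloras-7b",  # assumed hub path
    device_map="auto",
)
out = generator("Below is an instruction. Write a response.\n\n", max_new_tokens=64)
print(out[0]["generated_text"])
```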
|
|
|
## LM-Evaluation-Harness |
|
|
|
Metrics follow the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evaluation suite:
|
|
|
| Metric | Value | |
|
| --- | --- | |
|
| ARC | | |
|
| HellaSwag | | |
|
| MMLU | | |
|
| TruthfulQA | | |
|
| Winogrande | | |
|
| GSM8K | | |
|
| Average | | |
|
|
|
|